IFNγ: Priming for death
TNF signaling does not result in cell death unless multiple inhibitory signals are overcome, which can be accomplished by simultaneous signaling through IFNγ. In this issue, Deng and colleagues dissect the mechanisms by which IFNγ signaling combines with TNF to mediate cell death through caspase-8, discussed by James E. Vince.
program in a variety of cell types, details are still being uncovered on the mechanisms by which IFNγ allows TNF and TNFR1 to live up to their status as a death ligand and death receptor. This is important to understand, because cell death induced by IFNγ and TNF (or TLR ligands) has been implicated in cytokine shock syndromes associated with infections (2,3) and in tumor cell susceptibility to cancer immunotherapy (4,5). Moreover, these cytokines can co-mingle in a number of conditions associated with exacerbated immune responses and cell death, such as inflammatory bowel disease, where anti-TNF and JAK inhibitors are used therapeutically.
In this issue, Buhao Deng and colleagues aimed to identify IFNγ-responsive genes that license the TNF-induced cell death response (6). RNA sequencing, and subsequent qPCR and western blot validation of IFNγ-treated cancer cell lines, identified increased expression of the cell death regulators caspase-8, caspase-7, and cylindromatosis (CYLD). Moreover, melanoma patients that responded favorably to anti-PD-1 therapy showed higher expression of IFNγ and TNF, as well as caspase-7, caspase-8, and CYLD. This suggested that the in vitro IFNγ and TNF killing mechanism might be one means of tumor cell elimination in patients following immune checkpoint blockade.
Caspase-8 is a death receptor apoptotic initiator caspase that undergoes proximity-induced activation upon recruitment to death receptor signaling complexes (7). When activated by TNFR1, caspase-8 can cleave the apoptotic effector caspases, caspase-3 and -7, which leads to cell death. CYLD is a de-ubiquitylating enzyme that has been reported to remove ubiquitin chains from TNFR1 complexes that are important for the TNFR1 pro-survival signal (8), and therefore its increased activity can favor a TNFR1-driven death response (Fig. 1).
To examine the functional significance of IFNγ-induced caspase-8 and CYLD expression, the authors performed genetic knockout, knockdown, and overexpression studies in order to mimic or prevent IFNγ-induced caspase-8 and CYLD. Importantly, these gene dosage titrations confirmed that cancer cell lines are exquisitely sensitive to caspase-8 and CYLD levels when it comes to TNF signaling responses: reduced amounts protected from IFNγ and TNF killing, while increased amounts sensitized to TNF-induced cell death, with co-depletion or co-expression of caspase-8 and CYLD having an additive impact.
Next, the authors asked how IFNγ induced caspase-8 and CYLD expression. Both increased caspase-8 and CYLD expression, and IFNγ and TNF killing, were abolished by genetic loss of the transcription factor IFN regulatory factor 1 (IRF1). Meanwhile, IRF1 overexpression alone sufficed to induce caspase-8 and CYLD and sensitize cells to TNF, indicating that IFNγ-induced production of IRF1 drives caspase-8 and CYLD production. Consistent with this, IRF1 bound to the promoter regions of caspase-8 and CYLD, and IFN-stimulated response elements (ISREs) were identified whereby CRISPR/Cas9-mediated mutation of these caspase-8 and CYLD ISREs prevented IFNγ-induced caspase-8 and CYLD expression (Fig. 1).
Having defined key IFNγ-responsive cell death regulators that sensitize cells to TNF, the authors focused on the role of ELAV-like RNA binding protein 1 (ELAVL1, also known as human antigen R, HuR), an mRNA binding protein that the authors identified from a CRISPR/Cas9 screen as being required for IFNγ and TNF killing. Although ELAVL1 expression was not induced by IFNγ treatment, ELAVL1 binding to caspase-8 mRNA was critical for its pro-cell death functions (Fig. 1). The loss of ELAVL1 specifically de-stabilized caspase-8 mRNA, not other important TNFR1 complex cell death regulators, and also prevented increased caspase-8 levels when cells were treated with IFNγ. In fact, the levels of caspase-8 protein in ELAVL1-deleted cells were markedly reduced and, consequently, this conferred some protection from cell death induced by other activators of TNFR1 killing (TNF co-treatment with IAP antagonists or cycloheximide).
The discoveries from this study have broad relevance to our understanding of the physiological scenarios by which TNF's capacity for inducing cell death is unleashed. Although these findings provide one explanation for how IFNγ can prime cells for TNF killing, via increasing caspase-8 and CYLD expression, the authors also observed apoptotic caspase-7 induction, and the significance of this was not further explored. Similarly, it will be of interest to examine the relevance of the other important death receptor initiator caspase, caspase-10. This is because in other cancer cell lines, such as HT29 cells, the expression of caspase-10 was induced by IFNγ treatment to a higher level than caspase-8, and cell death caused by IFNγ and IAP protein antagonist treatment could only be blocked when both death receptor initiator caspases, caspase-8 and caspase-10, were co-deleted (on a necroptosis-deficient background) (9). On the other hand, recent research has implicated non-enzymatic caspase-8 activity in the cell death caused by IFNγ and TNF treatment of intestinal epithelial cells, although this conclusion requires genetic testing (10).
The circumstances and cellular context of IFNγ challenge will influence the genes that are expressed, and the mode of cell death subsequently engaged, following TLR or TNFR1 activation or pathogen sensing. For example, while the current study primarily focused on cancer cell lines, primary cells can behave differently. As shown by the authors themselves and other labs (2, 3), in mouse macrophages, but not cancer cell lines, IFNγ primes for TNF and/or TLR killing via the production of inducible nitric oxide synthase (iNOS). Why these cell type-specific discrepancies in killing mechanisms occur remains unknown, although the critical anti-pathogen roles of innate immune cells may have endowed them with unique cytokine responses and sensitivities to free radicals, such as iNOS-generated nitric oxide. Similarly, ELAVL1 can act to limit cell death in some circumstances by, for example, repressing caspase-2 levels in cancer cell lines (11), while in bone marrow progenitor cells ELAVL1 deletion increases levels of pro-apoptotic proteins, including caspase-8, caspase-9, NOXA, and PUMA (12). Therefore, how broadly ELAVL1 acts to stabilize caspase-8 mRNA and increase its translation across diverse cell types to allow for efficient death receptor killing will be important to define.
Collectively, building on the discoveries from Buhao Deng et al., further explorations are warranted into the differential mechanisms of IFNγ- and TNF-induced cell death in primary cell types versus cancer cells. Such findings may expose cancer cell vulnerabilities that can be exploited to induce selective tumor cell death or identify targets for therapeutic intervention in autoinflammatory conditions.
Figure 1. IFNγ licenses TNF-induced cell death through increased CYLD and caspase-8 expression. IFNγ signaling activates STAT1, which induces production of the transcription factor IRF1, capable of binding ISREs in the caspase-8 and CYLD promoters. Consequently, CYLD and caspase-8 expression is increased and, in the presence of TNF signaling, heightened CYLD levels and activity remove ubiquitin chains from the pro-survival TNFR1 complex. This de-ubiquitination of TNFR1 complex components, such as RIPK1, promotes formation of a cytosolic death signaling complex containing apoptotic caspase-8. Increased caspase-8 not only results from IRF1-driven de novo gene transcription, but also via ELAVL1 binding caspase-8 mRNA to stabilize it and enhance its translation. IFNGR, IFNγ receptor; GAS, γ IFN activation site; TRADD, TNFR1-associated death domain protein; FADD, FAS-associated death domain protein; LUBAC, linear ubiquitin chain assembly complex; SPATA2, spermatogenesis associated protein 2; TRAF2, TNF receptor associated factor 2; cIAP, cellular IAP; Ub, ubiquitin.
"year": 2024,
"sha1": "341544309388050b039f35118a1f483a000c1bc2",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/223/3/e202401127/1923921/jcb_202401127.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "b06d3ee6da52bdf0d4f18ec34f2d68814c93de00",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Herpes Simplex Virus Type 1 and Type 2 Infection Increases Atherosclerosis Risk: Evidence Based on a Meta-Analysis
Objective. The aim of our study was to evaluate the relation of herpes simplex virus type 1 (HSV-1) and type 2 (HSV-2) infection with the risk of atherosclerosis (AS). Methods. A systematic literature search was performed through three electronic databases. The pooled odds ratio (OR) and corresponding 95% confidence interval (CI) were used to assess the effect of HSV-1 and HSV-2 infection on AS risk. Results. 17 studies were available for meta-analysis of HSV-1 infection and AS risk and seven studies for meta-analysis of HSV-2 infection and AS risk. Subjects exposed to HSV-1 infection exhibited an increased risk of AS (OR = 1.77; 95% CI: 1.40–2.23; P < 0.001), and consistently elevated AS risks for HSV-1-positive subjects were found in all subgroup analyses of disease type, region, male proportion, and age. HSV-2-positive subjects demonstrated significantly increased AS risk (OR = 1.37; 95% CI: 1.13–1.67; P < 0.005). In subgroup analysis, elevated AS risks were only observed in the myocardial ischemia, male proportion >60%, and age ≤60-year-old groups. Conclusion. Our meta-analysis indicated that HSV-1 and HSV-2 infection could increase the risk of AS.
Background
Atherosclerosis (AS) is a major public health problem worldwide that leads to various life-threatening complications, such as coronary artery disease, stroke, and peripheral artery disease [1,2]. Traditional risk factors include hyperlipidemia, hypertension, diabetes mellitus, smoking, and a positive family history, but these do not fully explain the extent and severity of the conditions [3]. In recent years, numerous studies have suggested that pathogen burden, for example Helicobacter pylori, Chlamydia pneumoniae, and herpes simplex virus (HSV), might play an important role in the pathogenesis of atherosclerosis [4–6].
HSV infection is widespread in developed countries, with a prevalence between 35% and 40% [7]. The data showed that the prevalence of HSV-1 and HSV-2 is 37% and 28%, respectively [7]. HSV was first proposed to be a risk factor for AS several decades ago when a chicken herpesvirus led to occlusive AS of large muscular arteries in an animal model [8]. Subsequent molecular biology and epidemiology studies have strengthened the hypothesis that HSV is an important risk factor in the development of AS in humans. The atherogenic mechanisms of HSV may involve increasing adherence of leukocytes to endothelium, inducing lipid accumulation in vascular smooth muscle cells (VSMCs), and contributing to deposition of thrombin in atherosclerotic plaques [9–11].
Recently an increasing number of epidemiologic studies have investigated the association between HSV infection and AS risk by testing HSV antibodies [12]. Siscovick et al. revealed that HSV-1 infection was associated with a 2-fold increase in the risk of incident MI and death from coronary heart disease [13]. Kis et al. detected increased levels of HSV-1 antibodies in patients with acute ischemic stroke, suggesting an association of HSV-1 infection with the disease [14]. The data of Guan et al. showed a higher prevalence of antibodies against HSV-2 in the subjects with acute myocardial infarction [15]. However, there were still some studies demonstrating no relationship between HSV infection and atherosclerosis [16,17]. Given the controversial results of these studies, we deemed it necessary to conduct a quantitative and systematic assessment with rigorous methodology to further evaluate the potential role of HSV infection in the development of AS. We performed a meta-analysis to explore the relationship between HSV-1 and HSV-2 and AS risk.
Publication Search.
We searched the databases of PubMed, Web of Science, and CNKI (China National Knowledge Infrastructure) for articles on any relationship between HSV-1 and HSV-2 infection and the risk of developing AS. The last search date was March 15, 2015. The following key terms were used: "Herpes Simplex Virus OR HSV" and "atherosclerosis OR myocardial ischemia OR ischemic heart disease OR coronary artery disease OR angina OR myocardial infarction OR stroke OR cerebral ischemia OR carotid artery disease OR peripheral artery disease." The references cited in the research papers were further searched manually for potentially available publications.
Inclusion Criteria.
(1) The study is a case-control design. (2) The study evaluates the association between HSV-1 and HSV-2 infection and AS risk. (3) The study confirms the diagnosis of the atherosclerotic diseases. (4) The study clearly supplies the values (or percentage) of positivity for HSV-1 and HSV-2 infection in cases and controls, respectively. (5) The study is published in English or Chinese.
Data Extraction.
Data from these studies were extracted independently by two of the authors (Yupeng Wu and Dandan Sun) using a standardized form; the two reached a consensus on all items. The following data were collected from each study: first author, year of publication, country, region, disease type, mean age, male proportion, detection method of HSV-1 and HSV-2 infection, sample size, and the positivity or negativity for HSV-1 and HSV-2 infection in cases and controls, respectively.
Statistical Analysis.
The pooled odds ratios (OR) with 95% confidence intervals (CI) were used to assess the association of HSV-1 and HSV-2 infection with AS risk. Statistical heterogeneity between studies was assessed with the χ²-based Q test and the I² statistic [18]. When heterogeneity was not an issue (P > 0.10), a fixed-effect model with the Mantel-Haenszel method was used [19]. Otherwise, a random-effect model using the DerSimonian-Laird method was used [20]. Meanwhile, subgroup analysis was conducted for different geographic regions, male proportion, mean age, and disease types (divided into myocardial ischemia and other types of AS). To explore sources of heterogeneity across studies, we conducted logistic meta-regression using the following study characteristics: region, test method, and positivity for HSV-1 and HSV-2 infection in controls. In addition, publication bias was evaluated qualitatively by performing funnel plots and was assessed quantitatively by Begg's test and Egger's test, respectively (P < 0.05 was considered representative of statistically significant publication bias) [21,22]. The statistical analysis was performed using STATA 12.0 software (Stata, College Station, TX, USA).
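To make the pooling procedure concrete, the following is a minimal, self-contained Python sketch of the fixed-effect calculation and the Q and I² heterogeneity statistics described above. It uses the inverse-variance (Woolf) weighting variant rather than the exact Mantel-Haenszel weights, and the 2×2 counts are hypothetical, not data from the included studies.

```python
# Hedged sketch of fixed-effect pooling of odds ratios with Q and I^2
# heterogeneity statistics; inverse-variance (Woolf) weights stand in
# for the exact Mantel-Haenszel weights. Counts below are hypothetical.
import math

# Each study: (a, b, c, d) = (exposed cases, unexposed cases,
#                             exposed controls, unexposed controls)
studies = [(40, 60, 25, 75), (55, 45, 30, 70), (20, 30, 15, 35)]

logs, weights = [], []
for a, b, c, d in studies:
    logs.append(math.log((a * d) / (b * c)))      # per-study log odds ratio
    weights.append(1 / (1 / a + 1 / b + 1 / c + 1 / d))  # 1 / Woolf variance

pooled = sum(w * y for w, y in zip(weights, logs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)

# Cochran's Q and I^2 quantify between-study heterogeneity.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, logs))
i2 = max(0.0, (q - (len(studies) - 1)) / q) * 100 if q > 0 else 0.0

print(f"pooled OR = {math.exp(pooled):.2f}, "
      f"95% CI = {lo:.2f}-{hi:.2f}, I^2 = {i2:.1f}%")
```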
The characteristics of the selected studies were listed in Tables 1 and 2. Of the 17 studies, 14 were published in English and three were written in Chinese. The sample sizes ranged from 15 to 1532. The controls were randomly selected and frequency-matched with the cases on age, region, and gender. Several methods were used to detect HSV-1 and HSV-2 specific antibody, including enzyme-linked immunosorbent assay (ELISA), solid-phase radioimmunoassay (SPRIA), and western blotting (WB). The diseases included myocardial ischemia, stroke, carotid artery disease, and mixed AS lesions (coronary, cerebral, carotid, and peripheral artery involvement) (Tables 1 and 2).
Effect of HSV-1 Infection on AS Risk.
The relationships between HSV-1 and HSV-2 infection and the risk of AS were shown in Table 3. Overall, there was statistical evidence of significantly elevated AS risk associated with HSV-1 infection (OR = 1.77; 95% CI = 1.40–2.23) (Figure 2). In terms of stratified analysis by disease type, there were significantly elevated risks in both myocardial ischemia and other types of AS for HSV-1 infection (myocardial ischemia: OR = 1.83; 95% CI = 1.40
Effect of HSV-2 Infection on AS Risk.
In the total population, HSV-2-positive subjects demonstrated significantly elevated AS risk when compared with the negative ones (OR = 1.37; 95% CI = 1.13–1.67) (Figure 3). In terms of stratified analysis by disease type, increased risk was only observed in myocardial ischemia and not in other types of AS (myocardial ischemia: OR = 1.66; 95% CI = 1.28–2.15). We also performed stratified analysis by region, age, and male proportion. Significantly elevated risks were observed in the male proportion >60% and age ≤60-year-old subgroups.
Heterogeneity.
There was heterogeneity among studies on HSV-1 infection but not in studies on HSV-2 infection (HSV-1 infection: P < 0.001, I² = 65.60%; HSV-2 infection: P = 0.138, I² = 38.2%). To explore sources of heterogeneity across studies, we compared HSV-1 infection according to region of origin, test method, and positivity for HSV-1 infection in controls. We found that the region (P < 0.05), but not test method and the positivity for HSV-1 infection in controls (P > 0.05), might play a role in the initial heterogeneity, which could explain the R² value of 29% in the overall comparison of HSV-1 infection.
Sensitivity Analysis and Publication Bias.
There was no significant difference in the pooled OR estimated by omitting one study at a time, indicating that the final results of this meta-analysis were relatively stable and reliable (see Table S1 in Supplementary Material available online at http://dx.doi.org/10.1155/2016/2630865). The Begg and Egger tests were conducted to evaluate publication bias. Both revealed no evidence of publication bias in our study; the results were shown in Table 4, Figure S1, and Figure S2.
Discussion
Herpesvirus has been implicated in the inflammatory atherosclerotic process [35]. Chronic activation of inflammation by herpesvirus infection is hypothesized to promote atherosclerosis and thrombosis. As the major subtypes of herpesvirus, HSV-1 and HSV-2 have been a concern in relation to AS for many years. However, the existing data are somewhat conflicting. Hence, we deemed it necessary to take a quantitative approach by combining the results of various studies and provide what to our knowledge is the first meta-analysis evaluating the effect of HSV-1 and HSV-2 infection on AS risk.
In the overall analysis, significantly increased risk was observed for both HSV-1 and HSV-2 infection, indicating that HSV infection may play an important role in the process of atherogenesis. Some mechanistic studies may explain certain relationships. In 1991, Etingin et al. demonstrated that endothelial cells infected by HSV might express the adhesion molecule GMP140, which could mediate endothelial cell injury and inflammation [36]. Subsequently, Chirathaworn et al. showed that HSV enhanced the uptake of oxidized low-density lipoprotein in endothelial cells [37]. The atherogenic effect of HSV not only concerned the endothelial cells but also involved VSMCs. It had been reported that more saturated cholesteryl esters and triacylglycerols accumulated in VSMCs infected by HSV than in uninfected cells [38]. In addition, Key et al. concluded that HSV could contribute to deposition of thrombi on atherosclerotic plaques and induce coagulant necrosis by decreasing thrombomodulin activity and increasing tissue factor activity [39]. These in vitro studies demonstrated that HSV exerts effects in almost every step of atherogenesis.
Subgroup analysis suggested that both HSV-1 and HSV-2 infection had a significant risk effect in myocardial ischemia. Borderline significance was found for other types of AS in HSV-1 infection whereas no association was observed between HSV-2 infection and other types of AS. Many studies have reported the detection of HSV-1 DNA in human vascular tissue from different sites. Benditt et al. first found HSV-1 DNA in human vascular tissue from the ascending aorta in patients undergoing coronary bypass surgery [40].
Subsequently, HSV-1 DNA was reported in coronary artery tissue. Chiu et al. detected HSV-1 DNA in plaques from occlusive carotid artery [41], and HSV-1 DNA was also found in atherosclerotic tissues from six types of atherosclerotic lesions by Shi and Tokunaga [42]. However, only one study, by Kotronias and Kapranos, reported the detection of HSV-2 DNA in coronary artery tissue [43]. These data might partially explain the different results from our subgroup analysis. Future studies concerning the association between HSV-2 infection and other types of AS should be performed to confirm our results.
In the stratified analysis of age and male proportion, we found no relationship of HSV-2 infection with risk of AS in the male proportion ≤60% group. We presume that gender differences account for much of this result. Males are more likely to suffer from AS than females [44], and androgen appears to be associated with an increased risk of coronary artery disease by adversely affecting the plasma lipid and lipoprotein profile and by promoting thrombosis and cardiac hypertrophy [45].
Regarding the subgroup analysis of diverse regions, HSV-1 infection had risk effects on all three subgroups of Asians, Europeans, and Americans. However, the association of HSV-2 infection with AS did not reach statistical significance in any subgroup, possibly because of the limited number of studies and relatively small sample size in each subgroup (two studies of Asians, three studies of Europeans, and two studies of Americans). More well-designed studies with larger sample sizes should be conducted for future validation.
We are aware that this meta-analysis has its own limitations. First, only seven articles with 1810 cases and 1050 controls were available for HSV-2 analysis; the relatively small number of participants made it difficult to perform stratified analysis. Second, our meta-analysis was based on unadjusted estimates; OR adjusted for age and sex should be pooled to provide exact summary estimates if more specific data from studies become available. Third, significant heterogeneity existed in the overall comparison of HSV-1 infection, although we found that regional differences may account for this heterogeneity.
Conclusions
Our meta-analysis indicated that HSV-1 and HSV-2 infection potentially increases the risk of AS. However, further large-scale and well-designed studies, including different geographic regions and careful matching between cases and controls, are required to confirm these results.
"year": 2016,
"sha1": "80f2c8cc10e7cfc6a42b4c0bbdea66cad739cecb",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2016/2630865.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "80f2c8cc10e7cfc6a42b4c0bbdea66cad739cecb",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
What is the best or most relevant global minimum for nanoclusters? Predicting, comparing and recycling cluster structures with WASP@N
To address the question posed in the title, we have created, and now report details of, an open-access database of cluster structures with a web-assisted interface and toolkit as part of the WASP@N project. The database establishes a map of connectivities within each structure, the information about which is coded and kept as individual labels, called hashkeys, for the nanoclusters. These hashkeys are the basis for structure comparison within the database, and for establishing a map of connectivities between similar structures (topologies). The database is successfully used as a key element in a data-mining study of (MX)12 clusters of three binary compounds (LiI, SrO and GaAs) of which the database has no prior knowledge. The structures are assessed on the energy landscapes determined by the corresponding bulk interatomic potentials. Global optimisation, using a Lamarckian genetic algorithm, is used to search for low lying minima on the same energy landscape to confirm that the data-mined structures form a representative sample of the landscapes, with only very few structures missing from the close energy neighbourhood of the respective global minima.
Introduction
The application of structure prediction in the field of clusters and nanoparticles has resulted in literally millions of structures being discovered for different compounds, systems with different magnetic ordering, systems containing different dopants, or simply systems of different sizes. 1–4 Crucially, each system can be described as an energy landscape and the initial target or targets are the location of the global minimum (GM) or the locations of low energy local minima (LM). 5 Today when one wants to study a new compound of interest within certain sets of parameters, including stoichiometry, size, environment, etc., a key question springs to mind: is it worth running new simulations that employ one or several contemporary global structure optimisation algorithms? We argue: not necessarily! Thoughtful exploitation of the available data that can be found in the literature presents a viable alternative that turns out to be the most efficient way to discover new structures, materials, and their physics and chemistry. 6–14 Similar considerations, apart from size, can be applied to crystal structures, including molecular, metallic, ionic, covalent, or hybrid organic and inorganic frameworks. 15–18 Another problem encountered by practically every practitioner of global optimisation for structure prediction is how to ascertain that the newly discovered configuration of a particular compound is not known from competitors' studies, for example, or exists out there under the guise of a different compound of similar stoichiometry, or is not published but is known as a lower ranked local energy minimum (i.e. data that has a rank that is beyond a chosen set threshold for publication). The use of slightly different energy functions, unintentional effects of tolerances both in energy definition and local optimisation, or possibly an intentional bias to match measurable properties (for example, infrared data) will all muddy the waters further.
The choice of the best (or most suitable for the investigator's purposes) cost (or fitness) function is uncertain, and could be quite different in different studies even on the same system.
To address these challenges, we have developed a database complemented by a toolkit that includes structure comparison as a key element. Aggregating structures and their properties into one place also enables the sophisticated exploration of structural motifs and particular properties and the discovery of structure-property relationships. Databases are not a new concept in materials modelling, 19–29 even in the field of nanoclusters. 30,31 Crucially, our searchable database generates a map of connections relating different structures. In this article, we describe both the database and the algorithms that generate these mappings, followed by simple showcase examples.
Web-assisted structure prediction at the nanoscale (WASP@N)
In the development of the database, our Hive of knowledge, we aimed to arm the scientific community and general public, from professional researchers to school pupils, with a new intelligent tool to search, discover and disseminate structures and properties of new nanoclusters. To allow access and interaction with the Hive, we built a web interface, which we refer to as the WASP toolkit. The mapping between structures and various properties is an essential element, or feature, of the Hive database, which is generated by algorithms that form part of a separate piece of code that we refer to as the Bee software. The Bee software runs on dedicated computing facilities. The WASP interface links the user, the Hive and the Bee software; see Fig. 1. With open access to the Hive, a number of security measures have been employed in order to protect the integrity of the data and the computing facilities from malicious attacks (to complete the analogy, we refer to unwanted visitors to the Hive as hornets). Datasets within the Hive are organised as follows: (a) published atomic structures, the atomic coordinates of which were originally used to generate a figure (e.g. ball and stick models) or were explicitly given in a table as part of a published paper (or electronic supplementary information) that has a DOI; and (b) atomic structures generated using the Bee software. For the former, the atomic structures are labelled using the DOI of the published article they were taken from, and are uploaded as one or more concatenated xyz file(s) using an extended format that contains both the metadata saved on the comment line and the atomic structure, which includes atomic labels: Cartesian coordinates; one additional scalar and one vector record per atom (for example, charges, spin, dipole on atom). Searchable metadata are vital for the use of a database. Values for metadata that can be provided include the definition of energy and software, total charge, energy ranking, total spin, etc. For example, the comment line: "Name=drum; Symmetry=D3h; Definition={FHI-aims, PBE0/PBE, tight}; Energy=210Hartree; Size=6; Atoms=12; Charge=0; Spin=0; Dipole=(0,0,0)" for the cluster (ZnO)6 indicates that the user refers to the local minimum configuration as a "drum", the atomic coordinates of which have D3h point group symmetry after geometry relaxation using the FHI-aims software with the generalised gradient approximation in density functional theory in the form of the PBE exchange and correlation density functional and the tight basis set, an energy of 210 Ha with the same basis set and the hybrid PBE0 exchange and correlation density functional, a total charge and spin of zero, and no resultant dipole. If not specified upon upload to the Hive, some of these will be calculated along with, for example, stoichiometry, topology, total mass, centre of mass, and principal moments of inertia. Non-searchable metadata like, for example, thumbnail ball and stick images, are generated on-the-fly. The dataset for each DOI string will also contain timestamp metadata (when it was uploaded or last modified) and publication metadata (authors and journal name, volume and page numbers). Generated datasets are given a DOI string by the Bee software that is based on the chosen energy definition, and the atomic configurations result from structural relaxations of all the published datasets. The essential search and comparison features of WASP enable the user to investigate structural motifs and physical properties.
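As an illustration of the extended xyz comment line quoted above, the following is a minimal Python sketch that parses the semicolon-delimited metadata into a dictionary. The field names follow the (ZnO)6 example; the parser itself is our illustration, not the actual Bee software.

```python
# Illustrative sketch (not the Bee software): parse the semicolon-
# delimited metadata comment line of the extended xyz format into a dict.
def parse_metadata(comment_line: str) -> dict:
    meta = {}
    for field in comment_line.split(";"):
        key, _, value = field.partition("=")
        if key.strip():
            meta[key.strip()] = value.strip()
    return meta

line = ("Name=drum; Symmetry=D3h; Definition={FHI-aims, PBE0/PBE, tight}; "
        "Energy=210Hartree; Size=6; Atoms=12; Charge=0; Spin=0; Dipole=(0,0,0)")
print(parse_metadata(line)["Symmetry"])  # -> D3h
```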
The comparison of clusters can be quite expensive and, therefore, comparison-based pre-searches are performed by the Bee software upon the upload of new datasets, both published and generated. A description of the algorithms employed in these comparisons is provided in the next section. The results of pre-searches are saved as links between (thus establishing) related structures. These links, or new metadata generated by the Bee software, form a map linking different structures in the database. The map can be readily exploited by the user through the WASP interface to ascertain the uniqueness of newly found configurations of clusters of a certain compound and size or to compare clusters of different compounds. Moreover, as we will demonstrate below, this map can also help to reduce the effort needed to explore the energy landscapes of a compound that has yet to be investigated. The computational work and the interaction of the three complementary codes (WASP, Bee, and the Hive) are supported by appropriate hardware solutions, as illustrated in Fig. 1, and related operating system server software (including task scheduler, etc.). In the near future, we plan to expand the solution shown in Fig. 1 to include the exploitation of third party computing platforms.
Uniqueness and similarity
Being able to quickly recognise similar structures, or measure their similarity, has always been a challenge in materials modelling. 32 Consider comparing the atomic structures of two nanoclusters that are essentially the same but have either small random perturbations (noise) resulting from the applied optimisation tolerances or slight differences because of the different, but similar, density functionals employed. In the comparison procedure, the first task is to correctly align these two configurations: the translation and rotation of each cluster is fixed by positioning the centre of mass at the origin and aligning the principal axes of rotation with the chosen Cartesian axes. Hopefully, upon alignment, a one-to-one match is found for each atom in one configuration with the equivalent atom in the other. If not, then there is a combinatorial problem to solve: which combination of atom pairs minimises the sum of the distances between all pairs (a sum of zero implies a perfect match, with each atom in one configuration positioned exactly on top of the equivalent atom in the other configuration). Minimising this measure of likeness for two dissimilar nanoclusters may also require optimising the relative rotation and translation of the two nanoclusters.
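For concreteness, the following is a minimal Python sketch of this alignment step under the stated procedure (centre of mass at the origin, principal axes of inertia along the Cartesian axes); it is an illustration, not the Bee implementation.

```python
# Hedged sketch of the alignment step: translate the centre of mass to the
# origin, then rotate the cluster so that the principal axes of the inertia
# tensor coincide with the Cartesian axes. Eigenvector signs (and degenerate
# moments) leave discrete ambiguities, which is one reason an atom-matching
# step is still needed afterwards.
import numpy as np

def align(coords: np.ndarray, masses: np.ndarray) -> np.ndarray:
    """coords: (N, 3) Cartesian positions; masses: (N,) atomic masses."""
    x = coords - np.average(coords, axis=0, weights=masses)
    inertia = np.zeros((3, 3))
    for m, r in zip(masses, x):
        # I = sum_i m_i (|r_i|^2 E - r_i r_i^T)
        inertia += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    _, axes = np.linalg.eigh(inertia)   # columns are the principal axes
    return x @ axes                     # coordinates in the principal frame
```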
The efficiency of stochastic search algorithms (particle swarm, basin hopping, and genetic or evolutionary algorithms) that are employed to locate local minima (LM) on the energy landscape can be improved if there is a computationally cheap method that provides a measure of how similar two structures are. For example, this could be used to check whether a newly found/generated configuration is unique, whether the starting points are sufficiently spread apart for different random walkers on the energy landscape, or whether the candidate structures in the current population are sufficiently diverse for the evolutionary algorithm (otherwise inbreeding results in the population not evolving, or improving, any further). One may also want to distinguish between enantiomorphic clusters: two clusters that are mirror images of each other. One half of such a pair can easily be lost if the comparison of nanoclusters is simply based on their relative energy of formation (since both enantiomorphic clusters have identical energies). There are several approaches in the literature designed to measure the similarity between structures, 33–45 which can be classified in two groups: direct one-to-one comparison or an indirect approach that requires the generation of labels, also known as fingerprints or hashkeys, which are then compared.
One-to-one comparison algorithms are typically based around a cost function that measures the degree of similarity between two structures. As introduced above, the cost function will depend on the successful superimposition of the two structures, i.e. the translation and rotation of one cluster with respect to the other. Where Dirac delta functions are used to describe the position of an atom, the cost function will also depend on the matching of atomic pairs between the structures. This in itself can pose a formidable task (see for example ref. 33, which employs the Hungarian algorithm). 34–36 This problem is reduced for compounds or alloys if pairs are restricted between like species. Alternatively, where a Gaussian, or a similar function, is centred on each atom, the cost function is typically based on the degree of overlap of atom-centred Gaussians between the two clusters. For compounds and alloys, the overlap of Gaussians can be determined for each species type; there is no explicit need to match pairs of atoms. Goedecker employed a similar scheme, but based on atomic orbitals (see ref. 37). Both types of cost function can also be employed to find out whether, or how well, a smaller cluster matches a fragment of a larger cluster.
In this article, we only compare pairs of clusters that have the same composition, and use only the species type and atomic coordinates as the input. One of the most straightforward and widely used metrics for the comparison of molecular structures is the root-mean-square deviation (RMSD) of the coordinates of equivalent atoms. 38,39 Following a similar idea, the metrics suggested by Ali Sadeghi et al. 37 use configurational fingerprints based on eigenvalues of matrices of interatomic distances. The structural fingerprints are then compared by measuring the distances between them, as small fingerprint-based distances correspond to small RMSD distances. The H-FORMS (a hierarchical algorithm for molecular similarity) 46 approach estimates a rigid transformation that aligns structures and computes rotation-invariant descriptors, which are then used to match atoms. Similarly, R. Hundt et al. implemented an algorithm in the analysis program KPLOT 40 based on the mapping of atomic patterns constructed using three-atom frame matches. An alternative approach to the problem of structure comparison exploits the properties of the nanoclusters, 41 such as radial distribution functions, vibrational frequencies 42 or principal moments of inertia.
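A minimal Python sketch of the species-restricted pair matching discussed above is given below; it uses SciPy's linear_sum_assignment (a Hungarian-type solver) and reports the RMSD over the matched pairs. It assumes the two clusters have already been aligned, e.g. as in the previous sketch.

```python
# Hedged sketch of the one-to-one matching step: for two aligned clusters
# of identical composition, solve the species-restricted assignment
# problem and report the RMSD over the matched atom pairs.
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_rmsd(coords_a, coords_b, species_a, species_b):
    assert sorted(species_a) == sorted(species_b), "compositions must match"
    total, n = 0.0, len(species_a)
    for s in set(species_a):            # pairs restricted to like species
        A = coords_a[[i for i, t in enumerate(species_a) if t == s]]
        B = coords_b[[i for i, t in enumerate(species_b) if t == s]]
        cost = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        rows, cols = linear_sum_assignment(cost)   # optimal atom pairing
        total += cost[rows, cols].sum()
    return np.sqrt(total / n)
```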
Whichever method is used, when a structure needs to be efficiently compared with vast data for thousands or millions of configurations, the chosen approach needs to be both robust and computationally affordable. The second class of comparison methods, based on comparing unique labels that are generated for every configurationally unique structure, may address this big data challenge.
Within our database, we implemented the approach first adopted in the KLMC software 47 to address the challenge of maintaining the diversity of structures during a genetic algorithm search. The approach relies on the NAUTY software package (No AUTomorphisms, Yes?) written by McKay and Piperno, 48 which can generate canonical labels for graphs and compute automorphisms between them. NAUTY labels graphs canonically by providing a string consisting of three 8-digit hexadecimal numbers depending on the graph, i.e. a set of vertices and edges, and, in general, every unique graph will have a unique NAUTY string, also known as a hashkey, or fingerprint. By exploiting the feature of uniqueness, we have incorporated NAUTY in the Bee software in the following way: each cluster is converted to a coloured graph by treating the atoms as vertices and the bonds between them as edges. The number of colours of vertices (atoms) is determined by the number of species in the structure. Thus, (MgO)n clusters will have two different colours (species), whereas Tin clusters will have only one. It is important to note that (KF)n clusters will also have two different colours, therefore graphs of (MgO)n and (KF)n clusters of the same size can be compared explicitly. The edges of the clusters' graphs are generated from the calculated interatomic distances between the atoms (vertices) of a cluster and can be thought of as "bonds" between atoms. The radial cut-off by which the "bonds" are determined depends on the species and is slightly longer than the expected actual bond length. A flowchart of the implemented hashkey generation is given in Fig. 2, where the (MgO)5 GM cluster is used as an example. Here, the (MgO)5 GM cluster (shown as a ball and stick model in Fig. 2a) is transformed into a coloured graph (shown in Fig. 2b). This graph is then processed using the NAUTY software package, which in turn generates a unique hashkey for the cluster. An example of a hashkey is shown in Fig. 2d.
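The following Python sketch illustrates the hashkey idea: atoms become species-coloured vertices, distance-based "bonds" become edges, and the graph is reduced to a short canonical string. For portability the sketch uses networkx's Weisfeiler-Lehman graph hash as a stand-in for the NAUTY canonical labelling used by the Bee software, and the single distance cut-off is a placeholder for the species-dependent cut-offs described above.

```python
# Hedged sketch of hashkey generation: build a species-coloured bond graph
# from interatomic distances and reduce it to a canonical string. The
# Weisfeiler-Lehman hash stands in for NAUTY's canonical labelling; unlike
# NAUTY, WL hashing can (rarely) collide for non-isomorphic graphs.
import itertools
import numpy as np
import networkx as nx

def hashkey(coords, species, cutoff=2.4):
    """cutoff (Angstrom) is a single placeholder; in WASP@N the cut-off
    depends on the species pair and slightly exceeds the bond length."""
    g = nx.Graph()
    for i, s in enumerate(species):
        g.add_node(i, colour=s)              # vertices coloured by species
    for i, j in itertools.combinations(range(len(species)), 2):
        if np.linalg.norm(np.asarray(coords[i]) - np.asarray(coords[j])) < cutoff:
            g.add_edge(i, j)                 # edge = "bond" within cutoff
    return nx.weisfeiler_lehman_graph_hash(g, node_attr="colour")
```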
Given that the comparison of hashkeys is orders of magnitude faster than comparing atomic structures explicitly, each cluster within the Hive database is labelled with a hashkey. As described above, the hashkeys enable a rapid check of the database for duplicate structures by both the WASP and Bee software and are used in the generation of maps connecting similar structures (the network of links between clusters entered into the database is updated as soon as the atomic coordinates of generated and published LM nanoclusters are uploaded to the Hive), a feature that is not currently implemented in other structural databases. This feature has proven to be essential when the WASP interface is used to find out whether a newly discovered cluster is already within the Hive. To demonstrate one of the utilities our database provides, we have used the generated hashkeys to identify unique structural motifs for a particular stoichiometry (1 : 1) and size (24 atoms). We then data-mined from this set, rather than a set of LM configurations of one or all compounds in the Hive.
Data normalisation
Published LM cluster structures, which can be uploaded to the database, are, by definition, dependent upon the theory and accuracy of the level of theory employed in the calculation of energy as a measure of stability. Moreover, the measure of fitness may also be based on the deviation from some geometric, physical or chemical observable(s). When LM on a potential energy landscape are targeted, energy calculations at different levels of theory (quantum mechanical (all-electron or pseudopotential), semi-empirical, Hartree-Fock, DFT, tight-binding, semi-classical, or atomistic simulations) yield values that may scatter across a few orders of magnitude. Even if a similar method is chosen, e.g. DFT with identical basis sets and, possibly, effective core potentials, employing different exchange and correlation density functionals could still lead to substantially different values. The situation is just as problematic if semi-classical simulations are employed, as there are often many different sets of parameterised interatomic potentials for the same material or compound. One trick commonly used across the field of materials chemistry is to switch from total to binding or cohesion energies, which can be expected to behave better, and do in practice. 49 The scatter in the calculated binding energy values obtained using different approaches is usually, however, still greater than the energy separating low ranking energy minima on the same energy landscape (definition of energy). In practice, the WASP interface lets users upload their data without any restrictions on how the data were obtained, but encourages the users to provide details of the adopted computational approach as metadata. To support the comparison of individual structures obtained using different energy definitions, we introduced an internal standard attained by a data normalisation routine. In particular, when data are uploaded to the Hive database, they are automatically refined by the Bee software, using the all-electron, full potential electronic structure code FHI-aims 50 with the PBEsol functional, 51–53 and the light basis set (which is variationally equivalent to split valence double-zeta Gaussian plus polarisation basis sets but can obtain energies that are much closer to the basis set limit). Further computational parameters are provided in the ESI.† After normalisation, the newly obtained structure is automatically uploaded to the Hive database with a two-way link between the original and normalised configurations, along with similarity links to the whole dataset in the database.
Hence, the user can search for structures that refine to the same LM on our normalised energy landscape (particularly useful for the investigation of nanoclusters of the same compound) or structures of any compound with the same connectivity (structural motif), as explained in the previous section.
Data mining
Starting from a known set of atomic configurations with the target stoichiometry and total number of atoms, the Data Mining (DM) module of the KLMC software package 54 rescales each configuration to obtain an estimate of the expected nearest neighbour interatomic distances for the target compound, and then, using third party software, relaxes the rescaled atomic structures to LM. In the results shown below, we employ GULP 55 as the third party software, i.e. a semi-classical level of theory is used for the calculation of energies (and atomic forces). After the rescaling and refinement procedure, KLMC is also employed to analyse the resulting configurations in terms of their energy ranking, uniqueness and geometrical properties.
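A minimal Python sketch of the rescaling idea is shown below; the single scale factor based on the shortest interatomic distance is our simplification, as KLMC's actual estimate of the expected nearest-neighbour distances may be species-pair specific.

```python
# Hedged sketch of the data-mining rescale step: scale a source
# configuration so that its shortest interatomic distance matches an
# estimated nearest-neighbour distance for the target compound, before
# handing the structure to a local optimiser (GULP in this work).
import numpy as np

def rescale(coords: np.ndarray, target_nn: float) -> np.ndarray:
    """coords: (N, 3) positions; target_nn: expected bond length in Angstrom."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)       # ignore self-distances
    return coords * (target_nn / dists.min())
```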
Global optimisation
A Lamarckian genetic algorithm (GA) approach implemented in the KLMC software package 47 was also used to locate LM on the energy landscape defined by the same set of interatomic potentials (semi-classical level of theory) as those used in the data-mining investigation. We note that the ability of the KLMC GA 47 to locate LM and GM efficiently has been proven for various types of system, and thus it is chosen here as a method for providing reliable data that we can use to assess the results obtained using the data-mining approach. The population of each GA run was set to 200 candidate structures, with the initial random structures generated within a 15 Å × 15 Å × 15 Å cubic simulation box. Default values, as given in ref. 47, were used for the remaining simulation parameters.
Isomorphic structures, or structural motifs
As an illustration of how the connectivity maps are employed, we consider the case of a GM nanocluster reported in ref. 56 for (MgO)7 that has the symmetry point group C3v; see Fig. 3a. The topological analysis tool finds that this structure has "7Mg3-7O3" topology, i.e. seven Mg and seven O atoms, each with a coordination number of three. When selected using the WASP interface for the Hive, beneath the rotatable ball and stick model of this structure are two lists; one showing the standardised entry for this configuration (as described earlier), and another showing all the "isomorphic structures" found in the Hive based on matching hashkeys (as also described above). A snapshot of the second list is shown in Fig. 4. In our chosen example, the (MgO)7 GM structure currently has eleven isomorphic structures: eleven atomic configurations within the Hive have the same hashkey as our chosen example. The inclusion of a DOI in the entry for a candidate structure in this list indicates that it is a published LM. The remaining five are, therefore, standardised LM (using FHI-aims). As more entries are submitted to the Hive, we would expect many more matches to be found. The six published LM show that this structural motif is also reported 54,56,57 to be the GM for (KF)7, (CaO)7, (SrO)7, (BaO)7 and (CdSe)7. There is also another (MgO)7 configuration, which has a different DOI 54 to that of the original chosen structure. Given that there are six different compounds with the same structural motif, we would expect six standardised LM. The two published LM entries for (MgO)7, the same compound, relax to the same standardised LM. To find all the nanoclusters within the Hive that relax to the same standardised LM, the user only needs to click on the thumbnail of the standardised nanocluster. In our example, the missing standardised LM results from the standardised configuration for (CdSe)7 relaxing to a different LM. Therefore, it has a different hashkey as it is a different structure (in fact, it has C1 point symmetry).
Efficient structure prediction
The Hive contains the LM atomic structures for numerous binary compounds with 1 : 1 stoichiometry and a total charge of 0. We now concentrate on one particular size, clusters composed of 12 cations and 12 anions. To investigate a compound that is missing from the Hive database, one could data-mine structures already in the Hive for a similar compound. The success of this approach would rely on the chosen set of initial configurations; the more extensive this set, the greater the probability of finding the target LM. To maximise this probability one could data-mine all the compounds; however, this would generate many copies of each LM. Using the hashkey, which provides a unique identifier for each structural motif, we were able to reduce this initial set to just over 100 unique structural motifs (which we will refer to as the DM-set). If the database contained entries for alkali halides, (XY)12, and alkaline earth oxides, (ZO)12, for X = Li to Cs, Y = F to I, and Z = Mg to Ba, then potentially there would be a maximum reduction of 96%. The determination of this reduced set (calculation and comparison of hashkeys) is orders of magnitude faster to perform than the additional structural relaxations (using standard algorithms within an electronic structure code) that would have been necessary if we could not determine equivalent structures. Moreover, data-mining requires the evaluation of far fewer candidate structures than is typically performed in a stochastic approach. It is expected that the number of datasets within the Hive will grow, and that important unique structural motifs may be missed given our search has been performed soon after we have created this database. Stochastic approaches may also miss important LM, and the number of unique motifs is likely to increase much more slowly than the number of entries for clusters of any particular size, charge and stoichiometry.
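Reducing a pool of configurations to unique motifs is then a simple dictionary lookup keyed on the hashkey, as in this illustrative Python sketch:

```python
# Illustrative sketch: keep one representative configuration per
# structural motif by keying on the hashkey (e.g. from the earlier
# hashkey() sketch). The first configuration seen for each motif is kept.
def unique_motifs(clusters):
    """clusters: iterable of (hashkey, configuration) pairs."""
    seen = {}
    for key, config in clusters:
        seen.setdefault(key, config)
    return list(seen.values())
```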
Using our DM-set of unique LM, we now investigate three different compounds that were not included in the initial dataset taken from the Hive, namely (LiI)12, (SrO)12 and (GaAs)12. As the main focus of this article is the methodology as opposed to the physical and electronic properties of the predicted nanoclusters, we have chosen to present new IP-LM structures, i.e. the atomic configurations and ranks of local minima on the energy landscape are defined using interatomic potentials (IP), the parameters of which are given in Tables 1 and 2.

Table 1. Parameters for the Buckingham potential, A exp(−r/ρ) − C/r⁶, applied between ions X and Y (columns: X-Y; A (eV); ρ (Å); C (Å⁶ eV)).

Table 2. Parameters for the shell model for ions X, where Q and Y are the point-charges of the core and shell, which are connected by a spring with constants k2 and k4. The Coulomb contribution to the energy between point-charges of an individual ion X is replaced with the energy associated with the spring, (1/2)k2x² + (1/4)k4x⁴, where x is the distance between the core and shell. Note that the strontium cation is treated as a rigid ion and therefore only has one parameter.

For each compound we also perform a search of low energy IP-LM using an evolutionary algorithm; details of both methods are described in the previous section. We note that the potential parameters for LiI were taken from ref. 58. The small spring constant for the lithium cation caused problems during the global optimisation runs; during the relaxations of new candidate structures (particularly the random structures used in the initial population), the initial electric fields were sometimes strong enough that during structural relaxation the shell was stripped away from the cation. It is known that the polarisability of an ion is dependent upon the electric field, which is much stronger for our clusters than that experienced within the bulk. Thus, in our simulations, we doubled the value of the spring constant for lithium cations, which corresponds to an apparent reduction in their coordination number compared to the bulk. The results from data-mining our DM-set of unique LM are shown in Fig. 5-7. For strontium oxide, lithium iodide and gallium arsenide, 47, 50 and 41 LM structures were generated, respectively, i.e. not all the structural motifs of one compound were locally stable for another. Moreover, a different global minimum was found for each compound. Labelled DM01 in Fig. 5, the D3d barrel was found to be the IP-GM for (SrO)12, whereas for (LiI)12 and (GaAs)12 it was ranked fourth and second, respectively. The 2 × 2 × 6 D2d configuration of alternating atoms, labelled DM01 in Fig. 6, was found to be the IP-GM for (LiI)12. One can imagine that this cuboid configuration could be cut from the NaCl rock salt structure, and thus it is not surprising that this structural motif was not generated for (GaAs)12. The Th sodalite cage, so named as it is a basic building block of the sodalite bulk structure (given the abbreviation SOD by the zeolite community), was found to be the IP-GM for (GaAs)12. This configuration was ranked fifth and thirty-eighth for (SrO)12 and (LiI)12, respectively. Comparing the ball and stick models for different compounds but for the same structural motif, one noticeable difference between the LM for lithium iodide and those of the other two compounds is the sharper (more acute) bond angles that directly result from the greater polarisability of the iodide anion. Essentially, the iodide anions are further out from the cluster's centre of mass than the lithium cations. To check the current success of data-mining the Hive for these three compounds, we also conducted global optimisation on each of the three IP-energy landscapes for low lying LM.
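To make the IP energy definition concrete, the following Python sketch evaluates the rigid-ion part of the model in Table 1: a Buckingham term plus the point-charge Coulomb term, summed over ion pairs. The parameter values in the usage comment are placeholders, not the published LiI, SrO or GaAs parameters, and the shell-model terms of Table 2 are omitted for brevity.

```python
# Hedged sketch of the rigid-ion part of the energy definition in Table 1:
# Buckingham repulsion/dispersion, A*exp(-r/rho) - C/r**6, plus the
# point-charge Coulomb term, summed over all ion pairs. Shell-model terms
# (Table 2) are omitted; parameter values below are placeholders.
import itertools
import numpy as np

KE = 14.399645  # e^2 / (4*pi*eps0) in eV*Angstrom

def ip_energy(coords, charges, species, buck):
    """buck maps a sorted species pair, e.g. ('I', 'Li'), to (A, rho, C)."""
    energy = 0.0
    for i, j in itertools.combinations(range(len(species)), 2):
        r = np.linalg.norm(np.asarray(coords[i]) - np.asarray(coords[j]))
        energy += KE * charges[i] * charges[j] / r            # Coulomb
        pair = tuple(sorted((species[i], species[j])))
        if pair in buck:                                      # short-range term
            A, rho, C = buck[pair]
            energy += A * np.exp(-r / rho) - C / r**6
    return energy

# Placeholder usage (values are NOT the published parameters):
# buck = {('I', 'Li'): (400.0, 0.35, 20.0)}
# ip_energy(coords, charges=[+1, -1, ...], species=['Li', 'I', ...], buck=buck)
```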
We present the results as three densities of LM graphs; see Fig. 8. In the panel insert for each compound it is very clear that the data-mined LM present only a sample of all the possible LM. In terms of ranking, fortunately, the missing LM tend to be mid-range rather than at the more stable end (which, typically, is where there is most interest). Looking more closely at the top ranked LM, we identified which IP-LM structures are missing; these are shown in Fig. 9.
For strontium oxide clusters, the first six missing LM were ranked 6, 7, 8, 9, 13 and 16. The first three of these are basic rock-salt cuts that could have been included in our data-mined set if we had included the structures from ref. 54 (we did not as this paper includes data-mined structures for alkaline oxides, one of which is one of the compounds we chose to investigate). The GA08 cuboid configuration was in fact found as the IP-GM for (LiI)12. Generating this LM during the data-mining process was fortuitous given that this structural motif was not included in the DM-set of unique LM. GA09 and GA13 are composed of an n = 6 drum (typically the IP-GM for (XY)6) and 2 × 2 × m cuboids. More interesting is the GA16 configuration, which we have previously seen; it has an unusual distorted planar four-coordinated oxygen anion site. For lithium iodide clusters, the first six missing LM structures were ranked 3, 4, 5, 7, 8 and 9. Unlike our DM-set, these configurations, which we will refer to as HC, have at least one highly coordinated (greater than 4) anion site and are not one of the possible cuboid cuts from the NaCl rock salt phase. Given the stability of this type of structure, quite a few of the better ranked structures were missed. As already seen, any unstable LM in the DM-set can lead to new structural LM and thus we did not miss all of the HC structures; the enantiomer of GA03 was found (labelled as DM03 in Fig. 6 and ranked equal third). For gallium arsenide clusters, data-mining the DM-set was much more successful in that only four additional IP-LM structures were found in the top thirty; the first four missing LM structures were ranked 7, 14, 21 and 29. Of these, GA07 is the result of merging the IP-GM for n = 6 (a drum) and n = 9 (bubble) across a hexagonal face; GA14 is very similar to the GA16 LM that was missed for (SrO)12; GA21 has the same structural motif as DM18, but with all the anions switched for cations, and vice versa, cf. DM23 and DM24 and also GA06 and GA07 for strontium oxide. We note that the DM and GA runs found different chiral versions of DM23 and DM24.
Finally, we should reiterate that the structures reported above for LiI, SrO and GaAs were obtained on the interatomic potential landscape. These potentials were originally parameterised for bulk compounds, where atoms are typically in higher coordinated environments, and therefore such parameterisations are very limited in scope. For example, arsenide anions are highly polarisable, and more realistic structures should be expected to have more buckled shapes, as seen above in the LiI configurations. The latter proved to be easier to optimise due to the relatively low charges on Li and I. Notwithstanding this, the structures obtained here will be uploaded to the Hive and refined using our chosen ab initio approach, which will both give the actual findings more credence for future applications and allow the parameters of the interatomic potentials to be refined. The latter is an important element of the machine-learning techniques that have been particularly successful in studies of metallic clusters. 59,60
Conclusions
[Fig. 9 caption: Ball and stick models of (XY)12 IP-LM configurations obtained by the genetic algorithm that were missing from the IP-LM found using the data-mining approach. The colour scheme is shown in the lower right-hand panel and is the same as that employed in previous figures. The numbers in the GA** labels indicate the rank found for the nanocluster, where 01 indicates the IP-GM, whereas in the previous labels, DM**, they indicate the rank before the missing IP-LM were found using the GA.]

We have presented, for the first time, details of our database of published atomic configurations of nanoclusters. We have described the algorithms employed within this database to establish whether two entries are equivalent LM for a particular compound and whether configurations of different compounds are equivalent when judged using connectivity arguments, and have shown how to exploit these data in order to predict structures for three new compounds. The database provides initial model structures that were traditionally obtained from experiments, configurations that can be employed in structure prediction using a data-mining approach, and a way of checking whether a candidate structure is indeed new. Data-mining the set of configurations for (XY)12 structures that have a unique hashkey proved relatively successful in that the top two LM configurations for each of the three compounds were found. However, global optimisation techniques are still required for compounds that are chemically distinct enough that their low energy LM structures do not match configurations already in the database, using our connectivity arguments. This will of course change with time, as more data is entered into the database. Lessons learnt in the creation of the Hive and the associated WASP interface as a toolkit will be of direct use for further work on nucleation and crystallisation processes, 61 crucially the nucleation and growth of small particles on or in solid supports and liquid environments. The LM atomic configurations in the database are also readily usable as secondary building units (SBU) for constructing crystal structures. 6,8,10,62-68 Here, using low energy SBUs that do not resemble cuts from the main phases of the chosen compounds will produce more interesting results.
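As a rough, self-contained illustration (our sketch, not the Hive's actual algorithm) of a connectivity-based equivalence test of the kind described above, one can hash an order-independent summary of the bonding graph; two configurations with different hashkeys are certainly different, while equal hashkeys flag candidates for a fuller comparison. The bond cutoff and function names are assumptions.

```python
import hashlib
from itertools import combinations

def connectivity_hashkey(symbols, coords, bond_cutoff=3.0):
    """symbols: element labels; coords: (x, y, z) positions in Angstrom.
    Builds a crude bonding graph from a distance cutoff (a real code would
    scale the cutoff with the species' radii), then hashes the sorted
    per-atom (element, coordination) profile plus the sorted bonded pairs."""
    n = len(symbols)
    bonds, coordination = [], [0] * n
    for i, j in combinations(range(n), 2):
        d = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j])) ** 0.5
        if d < bond_cutoff:
            bonds.append(tuple(sorted((symbols[i], symbols[j]))))
            coordination[i] += 1
            coordination[j] += 1
    profile = (sorted(zip(symbols, coordination)), sorted(bonds))
    return hashlib.sha1(repr(profile).encode()).hexdigest()
```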
Conflicts of interest
There are no conflicts to declare. | 2018-08-14T12:22:48.604Z | 2018-10-25T00:00:00.000 | {
"year": 2018,
"sha1": "4326a0337cd8863e9984b43aee87790366edca65",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2018/fd/c8fd00060c",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "fb11f771ee16ded25184d632eec8118e49f1d0e2",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
250495174 | pes2o/s2orc | v3-fos-license | Experimental Determination of a Single Atom Ground State Orbital through Hyperfine Anisotropy
Historically, electron spin resonance (ESR) has provided excellent insight into the electronic, magnetic, and chemical structure of samples hosting spin centers. In particular, the hyperfine interaction between the electron and the nuclear spins yields valuable structural information about these centers. In recent years, the combination of ESR and scanning tunneling microscopy (ESR-STM) has made it possible to acquire such information about individual spin centers of magnetic atoms bound atop a surface, while additionally providing spatial information about the binding site. Here, we conduct a full angle-dependent investigation of the hyperfine splitting for individual hydrogenated titanium atoms on MgO/Ag(001) by measurements in a vector magnetic field. We observe strong anisotropy in both the g factor and the hyperfine tensor. Combining the results of the hyperfine splitting with the symmetry properties of the binding site obtained from STM images and a basic point charge model allows us to predict the shape of the electronic ground state configuration of the titanium atom. Relying on experimental values only, this method paves the way for a new protocol for electronic structure analysis of spin centers on surfaces.
For decades, nuclear spins have constituted an excellent resource for gaining information about the atomic scale. 1 In recent years, advances in many different architectures, including nitrogen vacancy centers in diamond, 2 molecular break junctions, 3 and phosphorus donors in silicon, 4 have even made it possible to address them on an individual level. This effort is mainly driven by their prospects as a future building block in quantum information processing and sensing. 5 However, nuclear spins have been used for even longer to gain structural and electronic information about materials in bulk experiments. The nuclei can be probed directly using nuclear magnetic resonance measurements as well as indirectly via ESR, because the magnitude and anisotropy of the hyperfine interaction are reflected in properties of the electron cloud surrounding the nucleus. 1 The combination of electron spin resonance and scanning tunneling microscopy (ESR-STM) has opened a novel platform to access single nuclear spins of atoms on surfaces. 6−9 Most strikingly, both spatial and magnetic information can be obtained by the two techniques simultaneously, providing unique access to hyperfine interaction on the atomic scale. Previous experiments showed that the hyperfine interaction of individual hydrogenated titanium (TiH) atoms on a bilayer of magnesium oxide (MgO) strongly depends on the binding site. 7 Initial experiments hinted toward a strongly anisotropic hyperfine interaction on all binding sites. However, these measurements were performed along one magnetic field direction only; this limited the electronic structure analysis and required the additional help of density functional theory (DFT) to interpret the data. 7 Here, we perform ESR-STM measurements of individual hydrogenated Ti atoms on a bridge binding site of MgO in a vector magnetic field. We demonstrate that the hyperfine tensor has distinctly different values along its principal axes than reported previously. 7 Combining the results from the hyperfine analysis with properties of the symmetry group of the atom's binding site derived from STM and a basic point charge model allows us to predict the shape of the ground state orbital of the atom without the use of first-principles calculations such as DFT.
Experiments were conducted in a commercial STM system (Unisoku USM1300) equipped with a vector magnetic field (Figure 1a) and at a temperature of 1.5 K. The measurements were performed on well-isolated individual Ti atoms adsorbed on two atomic layers of MgO grown on a Ag(100) substrate. These titanium atoms were found to be hydrogenated by residual hydrogen in the vacuum chamber, 10 effectively reducing them to Ti3+ with spin S = 1/2. Figure 1b shows an STM topography of a single hydrogenated Ti atom. For ESR experiments, a radio-frequency (RF) voltage V_RF is applied to the STM tip in addition to the DC bias voltage V_DC. This RF voltage can drive transitions between the two lowest lying spin states of the Ti3+ atom, which is subsequently detected by changes in the tunnel current ΔI via magnetoresistive tunneling. For the latter, a magnetic STM tip is employed that is created by transferring several Fe atoms from the surface to the STM apex. We study hydrogenated Ti atoms adsorbed on O−O bridge sites, which come in two equivalent orientations, as shown in Figure 1c: "horizontal" and "vertical", which have an in-plane magnetic field angle with respect to the crystal lattice of 14° and 76°, respectively. This effectively leads to two different orientations of the in-plane field and thus allows for a 3-dimensional mapping of the hyperfine interaction by rotating the magnet in only a single plane (see Supporting Information Section S1).
In accordance with ref 7, we can identify three different configurations of the Ti nuclear spin. In Figure 2, we display different ESR spectra measured above atoms adsorbed on vertical bridge sites; we observe a single ESR resonance for 46Ti3+, 48Ti3+, and 50Ti3+ (I = 0), six resonances for 47Ti3+ (I = 5/2), and eight for 49Ti3+ (I = 7/2). In line with previous experiments, we observe a variation of the overall signal intensity for different magnetic field angles. 11 Interestingly, for the isotopes carrying a nonzero nuclear spin, the different peaks are well resolved when the external field is along the sample plane, with a splitting of ∼65 MHz, while they seem to merge when the field is aligned in the out-of-plane direction, with an ∼20 MHz splitting. This strong anisotropy of the hyperfine splitting is remarkable and could not be accurately determined with measurements performed along a single field direction. 7 In Figure 3, we map the full evolution of the ESR spectra as a function of θ, the angle of the magnetic field with respect to the surface normal, for two perpendicular rotation planes. Figure 3a shows data taken on a hydrogenated 49Ti atom on a vertical bridge site, meaning that the in-plane field makes a 14° angle with the x-axis. The data exhibit strongly anisotropic behavior, with almost complete suppression of the hyperfine splitting for the out-of-plane field direction. All data in this panel were acquired with the same microtip, and by measuring for each data point a reference spectrum on a hydrogenated 48Ti atom, we can ensure that the influence of the tip field is negligible (see Supporting Information Section S1).
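The line counts quoted above follow directly from the hyperfine multiplicity rule for an S = 1/2 electron spin coupled to a nuclear spin I (allowed ESR transitions conserve the nuclear projection m_I):

```latex
N_{\text{lines}} = 2I + 1:
\quad {}^{47}\mathrm{Ti}\;(I = 5/2) \Rightarrow 6 \text{ resonances},
\quad {}^{49}\mathrm{Ti}\;(I = 7/2) \Rightarrow 8 \text{ resonances}.
```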
We performed the same experiment on another hydrogenated 49Ti atom adsorbed on a horizontal bridge site, with a different microtip that is again kept the same for the whole data set (see Figure 3b). Also here, we observe anisotropic behavior of the hyperfine splitting, though much less dramatic than for the vertical binding site. The evolution of the hyperfine splitting can be quantified by fitting each spectrum with several Fano functions (see Supporting Information Section S1) and is shown in Figure 3c for both adsorption sites. The evolution of the hyperfine splitting is continuous and mirror-symmetric, indicating that the sign of the magnetic field along any direction is irrelevant. We note that the observed symmetry axis is rotated by ∼10° with respect to the magnet axes. We discuss possible origins for this rotation in Supporting Information Section S2. From the anisotropic evolution of the hyperfine splitting in Figure 3c we can already infer that the extent of the ground state orbital, which scales the hyperfine splitting via the magnetic dipole−dipole interaction, is likely to be similar in two directions (out-of-plane and one in-plane) and differs substantially in the other (in-plane) one. The anisotropy of the hyperfine splitting is closely related to that of the g factor. The latter had already been observed for TiH on MgO/Ag(100). 11−13 The hyperfine interaction entails three different interactions: a dipole−dipole interaction between the electron and nuclear spins, a Fermi contact interaction that scales with the electron density at the position of the nucleus, and an orbital dipole interaction that couples the nuclear spin and the angular momentum of the unpaired electron. Spin−orbit coupling leads to a partially unquenched angular momentum which couples to the electron spin. Treating this effect up to second order with perturbation theory, one can write a spin Hamiltonian, H = μ_B B·g·S + S·A·I, in which, in all generality, g and A are tensors. 1 The symmetry of the adsorption site often lowers the degree of anisotropy of these tensors for a particular set of axes (x, y, z). In fact, in traditional ESR spectroscopy, analysis of the hyperfine anisotropy in a vector magnetic field is used to determine the symmetry of the crystal field around the investigated species. 1,14,15 This powerful method compensates for the lack of spatial resolution in these ensemble measurements and even permits the observation of effects due to hybridization with ligand orbitals. 16 In our case, the combination of ESR with STM allows us to measure ESR spectra of single atoms, while the symmetry of the adsorption site can be exactly determined by STM. As we show, we can thus perform an all-experimental electronic analysis to determine the shape of the ground state orbital, a quantity that has long been elusive for experimentalists.
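As a minimal sketch of the spectrum-by-spectrum analysis mentioned above (an assumed implementation; the authors' actual fitting procedure is described in their Supporting Information), a single ESR resonance can be fitted with a Fano lineshape; for the I ≠ 0 isotopes a sum of 2I + 1 such profiles would be used, with the spacing of adjacent resonance frequencies giving the hyperfine splitting. All numerical values below are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def fano(f, f0, gamma, q, a, c):
    """Fano profile versus drive frequency f: resonance frequency f0,
    linewidth gamma, asymmetry q, amplitude a, constant offset c."""
    eps = 2.0 * (f - f0) / gamma
    return a * (q + eps) ** 2 / (1.0 + eps**2) + c

# Synthetic single resonance near 20 GHz with a 5 MHz linewidth.
f = np.linspace(19.95e9, 20.05e9, 400)
y = fano(f, 20.0e9, 5e6, 0.8, 1.0, 0.1)
y += 0.05 * np.random.default_rng(0).normal(size=f.size)

popt, pcov = curve_fit(fano, f, y, p0=(20.0e9, 4e6, 1.0, 1.0, 0.0), maxfev=20000)
print("fitted resonance frequency (Hz):", popt[0])
```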
The adsorption site of the atom has C2v symmetry (see Figure 4) so that g and A are vectors along the principal axes (x, y, z) of the crystal lattice. 16 In the presence of an external magnetic field that has (l, m, n) directional cosines with respect to these axes, the effective g and A parameters are given by two standard angular-dependence relations. 1 Using these two equations, we first determine the effective g values for the vertical and horizontal bridge sites corresponding to different in-plane fields. We find that the vector g is completely anisotropic, with g_x = 1.702 ± 0.004, g_y = 1.894 ± 0.004, and g_z = 2.011 ± 0.015. These values are in good agreement with the literature values, 11 and the small deviations can be explained by the presence of a small residual tip field. Because this tip field has been carefully accounted for by Kim et al., we use in the following their reported g values. 11 Next, we fit the data of Figure 3c to obtain the values of the hyperfine splitting, first along our field directions and, finally, along the lattice directions (see Supporting Information Section S2). We here find A_x = 68 ± 4 MHz, A_y = 18 ± 4 MHz, and A_z = 19 ± 4 MHz. The minima of the two data sets are each a measure of A_z; however, they are not exactly equal. We attribute the difference, which has been taken into account in the estimation of the error in A_z, to small variations in the local electric field surrounding the two atoms. Statistical variations of the g factor of Ti3+ atoms adsorbed on oxygen sites were indeed also observed and attributed to the same origin. 13 The errors for the in-plane components are dominated by the uncertainty concerning the tilt of the in-plane field with respect to the crystal lattice (see Supporting Information Section S2). Once both the values of g and A are determined, we can investigate how these relate to the d1 ground state configuration of the Ti3+. The corresponding energy diagram for C2v symmetry is displayed in Figure 4b. 16 The order of the excited states is arbitrarily chosen and bears no influence on the analysis. The ground state orbital is a superposition of the d_{x²−y²}, d_{z²}, and 4s orbitals, and our study revolves around determining the values of their respective weights c1, c2, and cs, which satisfy the normalization equation c1² + c2² + cs² = 1. The molecular coefficients α, β, γ1, γ2, and δ quantify the hybridization of the d levels with ligand orbitals, which we assume to be small; these coefficients are therefore expected to be close to 1.
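The display equations referenced above did not survive extraction. For a spin-1/2 centre with diagonal g and A tensors along (x, y, z), the standard angular-dependence relations (presumably those intended from ref 1) read:

```latex
g^{2} = g_x^{2} l^{2} + g_y^{2} m^{2} + g_z^{2} n^{2},
\qquad
A^{2} g^{2} = A_x^{2} g_x^{2} l^{2} + A_y^{2} g_y^{2} m^{2} + A_z^{2} g_z^{2} n^{2},
```

with (l, m, n) the directional cosines of the applied field.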
As for the A vector, we have A_i = P f_i, where i = x, y, z; the prefactor P ∝ g_N μ_B μ_N ⟨r⁻³⟩ (g_N: nuclear g factor; μ_B: electron Bohr magneton; μ_N: nuclear Bohr magneton) scales with the radial extent of the electronic wave function via ⟨r⁻³⟩; and the f_i are functions whose full expressions can be found in Supporting Information Section S3. These equations, along with the normalization condition for c1, c2, and cs above, allow us to calculate the anisotropy of g and A for a given set of parameters (P, α, c1, c2, cs) and therefore identify all sets of parameters that could, from a symmetry argument, describe our system. We find that more than one set of parameters can lead to the experimentally observed g and A (see Supporting Information Section S3). Consequently, we employ a basic point charge model (Supporting Information Section S4) that allows us to discriminate between the different solutions by their Coulomb interaction. The lateral positions of the atoms are determined experimentally by atomic resolution STM images. The positions in the z-direction of the Ti and H atoms are estimated, but we ensure the robustness of the model against variations of these parameters. The state with the lowest Coulomb energy is shown in Figure 4c. It consists of a superposition of the d_{x²−y²} (74%) and d_{z²} (26%) orbitals, in very good agreement with results obtained from DFT calculations. 7 This is quite remarkable because our electronic structure analysis is solely based on experimental data assisted by the symmetry group of the surface and a basic point charge model. However, our model cannot discriminate between different values of cs, which scales the admixture of the 4s orbital (see Supporting Information Section S3). Nevertheless, we show that additional admixture of cs merely influences the shape of the orbital by reducing the size of the central ring that points toward the neighboring O atoms (see Supporting Information Section S5).
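A toy version of such a point-charge discrimination (our sketch, with made-up geometry, charges, and radial scale; the actual model is in the paper's Supporting Information Section S4) evaluates, on a grid, the Coulomb energy of candidate densities |c1 d_{x²−y²} + c2 d_{z²}|² in the field of neighbouring ions; only energy differences between candidates matter.

```python
import numpy as np

# Grid and a hydrogen-like 3d radial factor; the 0.5 A scale is an assumption.
a0 = 0.5
lin = np.linspace(-3.0, 3.0, 60)
X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
R = np.sqrt(X**2 + Y**2 + Z**2) + 1e-9
radial = R**2 * np.exp(-R / a0)
d_z2 = radial * (3 * Z**2 - R**2) / R**2      # unnormalised real d orbitals
d_x2y2 = radial * (X**2 - Y**2) / R**2

def coulomb_energy(c1, c2, charges, qe=-1.0):
    """Energy (arbitrary units) of the electron density built from
    c1*d_x2y2 + c2*d_z2 in the field of (charge, position) point charges."""
    rho = (c1 * d_x2y2 + c2 * d_z2) ** 2
    rho /= rho.sum()
    e = 0.0
    for q, (cx, cy, cz) in charges:
        d = np.sqrt((X - cx) ** 2 + (Y - cy) ** 2 + (Z - cz) ** 2) + 1e-9
        e += qe * q * (rho / d).sum()
    return e

# Toy environment: two O(2-) neighbours along x at +/- 2 A (assumed values).
env = [(-2.0, (2.0, 0.0, 0.0)), (-2.0, (-2.0, 0.0, 0.0))]
for w1 in (1.0, 0.74, 0.0):          # weight of d_x2y2; cs neglected here
    c1, c2 = np.sqrt(w1), np.sqrt(1.0 - w1)
    print(f"c1^2 = {w1:.2f}:  E = {coulomb_energy(c1, c2, env):.4f}")
```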
In summary, this work illustrates how an analysis of the anisotropic hyperfine interaction can be exploited to gain in-depth knowledge of the shape of the ground state orbital. Crucial for this method is the addition of binding site information derived from STM, which we process in a basic point charge model. Because this protocol can be applied to other spin systems on surfaces in a straightforward manner, it paves the way to determining the spin ground states of atoms and molecules on surfaces and constitutes an independent method against which more elaborate theoretical methods such as DFT can be benchmarked.
While writing this manuscript, we became aware of a similar experiment performed in another group. 17 Overall, their results agree very well with those presented here: a strong anisotropy of the hyperfine splitting along the oxygen direction is also found in their experiment. In contrast to our work, they determine the shape of the ground state orbital via DFT, which sheds light on the origin of the anisotropic and isotropic contributions to the hyperfine interaction from a first-principles perspective.
■ ASSOCIATED CONTENT Data Availability Statement
All data presented in this paper are publicly available through Zenodo. 18
Details on fitting of ESR spectra, fitting of the hyperfine splitting, anisotropy of the hyperfine splitting in C2v symmetry, the point charge model, and the influence of cs on the ground state orbital (PDF) | 2022-07-14T01:16:00.042Z | 2022-07-13T00:00:00.000 | {
"year": 2022,
"sha1": "a6999e7b224b1d163946f73c5e86a676bcd466fa",
"oa_license": "CCBY",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.nanolett.2c02783",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7d80001d2798de9d58ec2bd3de333c77ae5ffb7a",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
10969123 | pes2o/s2orc | v3-fos-license | B7-DC, a new dendritic cell molecule with potent costimulatory properties for T cells.
Dendritic cells (DCs), unique antigen-presenting cells (APCs) with potent T cell stimulatory capacity, direct the activation and differentiation of T cells by providing costimulatory signals. As such, they are critical regulators of both natural and vaccine-induced immune responses. A new B7 family member, B7-DC, whose expression is highly restricted to DCs, was identified among a library of genes differentially expressed between DCs and activated macrophages. B7-DC fails to bind the B7.1/2 receptors CD28 and cytotoxic T lymphocyte–associated antigen (CTLA)-4, but does bind PD-1, a receptor for B7-H1/PD-L1. B7-DC costimulates T cell proliferation more efficiently than B7.1 and induces a distinct pattern of lymphokine secretion. In particular, B7-DC strongly costimulates interferon γ but not interleukin (IL)-4 or IL-10 production from isolated naive T cells. These properties of B7-DC may account for some of the unique activity of DCs, such as their ability to initiate potent T helper cell type 1 responses.
Introduction
The qualitative and quantitative nature of an immune response depends on the type of APC that processes and presents antigen to T cells. Critical determinants of T cell activation include the density of peptide-MHC ligand available for TCR engagement as well as the provision of soluble and membrane-bound costimulatory signals. Among the different types of APCs, dendritic cells (DCs) 1 are the central initiator of antigen-specific T cell responses. Early DCs have specialized antigen uptake and processing machinery, whereas mature DCs efficiently present antigen to and activate T cells. Functionally, mature DCs are more than 100 times more potent than macrophages in activating naive T cells in vitro (1,2).
T cell-dependent immune responses initiated by DCs depend on their expression of specific costimulatory signals. Among the most important costimulatory signals are those delivered by the B7 family, which is currently composed of four members with demonstrated immunologic activity. Two of the members, B7.1 and B7.2, bind both CD28 and CTL-associated antigen (CTLA)-4 on T cells. CD28 delivers a positive costimulatory signal (3)(4)(5), enhancing the production of certain lymphokines such as IL-2 via transcriptional (6) and posttranscriptional mechanisms (7). CTLA-4, which is expressed a few days after T cell activation, delivers a counterregulatory negative signal that antagonizes CD28 signals and thereby limits the magnitude and duration of T cell activation (8,9). B7.1 and B7.2 are expressed by all bone marrow-derived APCs including B cells, macrophages, and DCs, although at different ratios and with different kinetics. Despite many reports suggesting different functions of B7.1 and B7.2 in Th1/Th2 differentiation, antibody production, and CTL generation (10)(11)(12)(13), their distinct immunologic roles remain to be clarified (14)(15)(16).
Two additional B7 family members, B7-H1/PD-L1 and B7h/B7RP-1, have been identified recently (17)(18)(19)(20). B7-H1/PD-L1 and B7h/B7RP-1 bind the PD-1 and inducible costimulator (ICOS) receptors, respectively, both of which are induced upon T cell activation (18,21). Expression of all the currently known B7 family members is broadly distributed on multiple hematopoietic and nonhematopoietic tissues, raising interesting questions as to their relative role in immune response regulation.
In an effort to understand more about genes involved in DC function, we screened a subtractive cDNA library between DCs and activated macrophages. We have isolated a new member of the B7 family that appears to be expressed primarily in DCs in mouse and human. Our initial functional studies indicate that B7-DC is more active than B7.1 in stimulating IFN-γ production by T cells.
cDNA Subtractive Hybridization. Total RNA from sorted DCs and activated macrophages was extracted with TRIzol (GIBCO BRL). Messenger RNA was purified with the Oligotex mRNA purification kit (QIAGEN). We used the PCR-based SMART cDNA synthesis system (CLONTECH Laboratories, Inc.) to amplify cDNA, followed by the PCR-based subtraction system PCR Select (CLONTECH Laboratories, Inc.). Subtraction was performed following the manufacturer's protocol. Plasmid dot blotting was then performed to confirm that the cloned cDNA is DC specific. Alkaline-denatured miniprep DNAs were spotted on Hybond N+ membrane (Amersham Pharmacia Biotech) and hybridized with SMART cDNA probes derived from sorted DCs or activated macrophages. These cDNA probes were 32P-labeled using the random primer labeling method (Prime-It II; Stratagene). Hybridizations and washing were done as described previously (27).
cDNA Library Construction and Screening: Cloning of B7-DC. Bone marrow-derived DCs were harvested on day 8 without sorting. About 20 to 40% of these cells expressed high MHC class II and B7. Total RNA extraction followed by poly A RNA purification was done as described above. For the oligo dT-primed DC library construction, we used the Lambda ZAP Express cDNA synthesis system (Stratagene). The PCR DNA fragment of B7-DC was used as a probe for screening. Membrane transfer, denaturation, and renaturation were performed using Stratagene's protocol. Positive clones were isolated and a second screening was performed. After the second screening, plasmids were excised by in vivo excision and tested by dot blotting and sequencing. The BLAST program was used to perform a homology search of the nucleotide sequence against GenBank (National Center for Biotechnology Information) for similarity to previously reported genes. The full-length B7-DC cDNA clone was pulled out from the DC cDNA library. 5′ rapid amplification of cDNA ends (RACE) was performed using the SMART RACE cDNA amplification kit (CLONTECH Laboratories, Inc.). 5′-RACE products were cloned into the pCR2.1 vector and sequenced. Two more full-length B7-DC clones were obtained by reverse transcription (RT)-PCR and their sequences were compared to avoid sequence errors.
For cloning of the human B7-DC, human DCs were obtained from normal peripheral blood mononuclear cells by culture in GM-CSF plus IL-4 as described previously (28). A BLAST search identified an overlapping expressed sequence tag (EST) clone, under GenBank/EMBL/DDBJ accession no. AK001879, with homology to mouse B7-DC. 5′ RACE was performed as described above. We sequenced a 5′-RACE PCR fragment and designed a primer corresponding to the 5′-UTR of human B7-DC. The following primers in the 5′-UTR and 3′-UTR of B7-DC were used to amplify full-length human B7-DC: 5′-GGAGCTACTGCATGTTGATTGTTTTG-3′ and 5′-TGCAAACTGAGGCACTGAAAAGTC-3′. The full-length cDNA sequences of the human and murine B7-DC cDNAs are available from GenBank/EMBL/DDBJ under accession nos. AF329193 and AF142780.
BAC library screening yielded three positive clones. Chromosome location mapping was done by fluorescence in situ hybridization (Genome Systems, Inc.). A total of 80 metaphase cells were analyzed, with 79 exhibiting specific labeling. The human B7-DC mapping was done using available bioinformatic tools, the National Center for Biotechnology Information's BLAST program, and the International RH Mapping Consortium. The hB7-DC sequence was searched in GenBank/EMBL/DDBJ and was found to map to two BAC clones, RP11-574F11 (AL162253) and RP11-635N21 (AL354744), localizing on chromosome 9.
Hybridization Analysis. RNA extraction and SMART cDNA synthesis for cell lines, sorted DCs, and activated macrophages were performed as described above. SMART PCR cDNAs were purified with a PCR purification kit (QIAGEN). Purified DNAs (0.5 μg/lane) were run on a 1% agarose gel and transferred onto a Nytran nylon membrane (Schleicher & Schuell). To make radioactive probes, subtracted library-derived plasmid DNAs were amplified as templates. DNA was amplified by PCR using primer sets just adjacent to the cloning site of the plasmid DNA, and the purified PCR DNA of each of the clones was used as a hybridization probe. The nucleotide sequences of these primers are as follows: 5′-GTAACGGCCGCCAGTGTGCTG-3′ and 5′-CGCCAGTGTGATGGATATCTGCA-3′. Hybridization analysis of total RNA of human DCs and control placenta was also performed. The probes used and the RNA preparation were described above. Radiolabeling of probes, hybridization, washing, and autoradiography were done as described above.
B7-DC-Ig Dimer Synthesis. The B7-DC-Ig construct was made by fusing the sequence encoding the NH2-terminal aa of B7-DC without the transmembrane domain in-frame to the sequence encoding the COOH-terminal aa of the human IgG1 Fc in the pIg-Tail Plus vector (R&D Systems). COS-7 cells were transiently transfected with pIg/B7-DC using LipofectAMINE 2000 (GIBCO BRL) or GeneJammer (Stratagene). The B7-DC-Ig fusion protein was purified from the serum-free supernatants using saturated ammonium sulfate precipitation. SDS-PAGE and silver staining demonstrated a purity > 90%.
T Cell Proliferation and Cytokine Assays. For costimulation assays with anti-CD3, 96-well flat-bottomed plates (Immulon4; Dynex) were precoated with anti-CD3 antibodies (2C11; BD PharMingen) and B7.1-Ig (R&D Systems), B7-DC-Ig, or isotype control (Sigma-Aldrich) at 100 ng/ml, diluted in 1× PBS (GIBCO BRL), pH 7.4, for 2 h at 37 °C. The plates were then washed three times with 1× PBS and blocked with RPMI 1640 supplemented with 10% FCS for one-half hour before adding T cells. T cells from spleens and lymph nodes were purified using Dynabeads M-280 (Dynex) with the indirect method, using an antibody cocktail composed of anti-I-Ed and B220/CD45RO or CD8α (BD PharMingen). For proliferation and cytokine secretion assays, cells were plated at 2 × 10^5 cells/well.
For costimulation assays using the RENCA system to present hemagglutinin (HA) antigen, RENCA MHC class II expression was induced with IFN-γ (75 U/ml) for 72 h. The cells were then irradiated (13,200 rad) and plated at 2 × 10^4 cells/well (96-well flat-bottomed plates). HA110-120 peptide was then added at 2.5 μg/well and various concentrations of the Ig-fusion molecules were added. Transgenic I-Ed plus HA-specific T cells (gift of H. von Boehmer, Harvard University, Cambridge, MA) were isolated as described above and plated at 4 × 10^5 cells/well.
For analysis by cytokine ELISA, cultures were set up as described above and supernatants were harvested at the indicated times. IL-2, IL-10, IFN-γ (Endogen), and IL-4 (R&D Systems) concentrations were determined using commercially available ELISA kits.
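As a side note on the last step, the concentration read-out behind such kits is typically a standard-curve inversion; the sketch below (our illustration with made-up numbers, not the authors' analysis) fits a four-parameter logistic (4PL) curve to known standards and inverts it for unknown samples.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, d, c, b):
    """4PL: a = response at zero dose, d = response at infinite dose,
    c = inflection concentration, b = slope factor."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

std_conc = np.array([15.6, 31.2, 62.5, 125.0, 250.0, 500.0, 1000.0, 2000.0])  # pg/ml
std_od = np.array([0.08, 0.15, 0.27, 0.49, 0.83, 1.30, 1.78, 2.10])           # optical density
popt, _ = curve_fit(four_pl, std_conc, std_od, p0=(0.05, 2.4, 300.0, 1.0), maxfev=20000)

def od_to_conc(od, p):
    """Invert the fitted 4PL curve to recover a concentration."""
    a, d, c, b = p
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

print("sample at OD 0.95 ->", od_to_conc(0.95, popt), "pg/ml")
```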
Results
Identification and Characterization of B7-DC. B7-DC was isolated from a subtracted library between DCs and activated macrophages. The two populations used for cDNA subtraction were bone marrow-derived GM-CSF-cultured DCs as the "tester" population and IFN-γ plus LPS-activated adherent bone marrow-derived M-CSF macrophages as the "driver" population. Day 8 MHC class II^hi B7^hi "mature" DCs were sorted to >93% purity as the source of tester cDNA. DCs were characterized by flow cytometry as having ∼50-fold higher MHC class II levels than macrophages. Both populations expressed B7.1 and B7.2, although B7.2 levels were significantly higher in the DCs. F4/80 and CD16 were expressed at higher levels on the macrophage population. Functional comparison of the two populations demonstrated that the DC population was ∼100-fold more potent than the macrophage population in stimulating an allo-MLR (data not shown).
After RNA extraction from both populations, we used a PCR-based cDNA synthesis system followed by the PCR-based subtraction procedure, PCR Select. One of the differentially expressed clones encoded a novel Ig supergene family member, which we name B7-DC. The murine B7-DC cDNA is ∼1.7 kb in length, encoding a 247-aa precursor protein with a 23-aa NH2-terminal signal peptide and a predicted molecular weight of ∼25 kD (Fig. 1 a). The putative leader sequence and transmembrane domain were identified using the SOSUI program (29). Two charged aa are found within the 23-aa transmembrane domain of mB7-DC, suggesting a possible binding partner. At the aa level, murine B7-DC is 70% identical to the human B7-DC, indicating that they are orthologues (Fig. 1 a). The hB7-DC differs slightly from the murine B7-DC in that it has a longer cytoplasmic tail.
Through a homology search, we found that B7-DC has significant homology to B7-H1/PD-L1 (34% identity, 48% similarity; Fig. 1 b), to a lesser extent butyrophilin (30% identity, 45% similarity), and <20% identity to B7.1 and B7.2. Even though no immunologic function has yet been demonstrated for butyrophilin, phylogenetic studies indicate that it is likely related to the B7 family through exon shuffling (30,31). They each possess the canonical IgV-IgC structure and a transmembrane domain. In contrast to the other B7 family members, murine B7-DC has an extremely short cytoplasmic tail (4 aa).
To determine the genomic structure of mB7-DC, we isolated a genomic clone by screening a pooled BAC library using probes from the 5′ and 3′ UTRs. Chromosome location mapping using the BAC clones showed that mB7-DC is located between the regions of chromosome 19C2 and 19C3, which corresponds to a region of human chromosome 9, where hB7-H1/PD-L1 has been mapped (data not shown). hB7-DC has been found to be located on two chromosome 9 BAC clones. In addition, both hB7-DC and hB7-H1/PD-L1 were found to be located on a single chromosome 9 BAC clone with an insert size of ∼164 kb (Fig. 1 c). The genomic proximity of B7-DC and B7-H1/PD-L1 is reminiscent of the B7.1/B7.2 pair, which map to within 1 megabase of each other.
B7-DC Is Selectively Expressed in DCs.
To determine the expression pattern of B7-DC, virtual Northern blotting was performed using RNA extracted from multiple macrophage cell lines, macrophage cultures, and DCs derived from both bone marrow and spleen. Although strong hybridization was detected using a B7-DC probe in immature (day 4 and 6) and mature (day 8 and sorted MHC II^hi B7^hi) bone marrow-derived DCs and splenic DCs, no signal was detected in any of four macrophage cell lines, activated bone marrow macrophages, or peritoneal macrophages (Fig. 2 a). We were also able to detect strong expression of hB7-DC in human DCs grown from peripheral blood mononuclear cells with GM-CSF plus IL-4 (Fig. 2 b).
B7-DC Does Not Bind to CD28 or CTLA-4 but Does Bind to PD-1. Although B7-DC has structural and sequence homology to the B7 family, it does not contain the putative CD28/CTLA-4 binding sequence, SQDXXXELY or XXXYXXRT (32). To directly assess binding, we analyzed the ability of dimeric CD28-Ig and CTLA-4-Ig to stain 293T cells transfected with either B7-DC or B7.1.
Whereas strong binding was observed with B7.1 transfectants, there was no binding to B7-DC transfectants (Fig. 3). Based on the homology and genomic proximity between B7-DC and B7-H1/PD-L1, we also tested PD-1 as a candidate binding partner for B7-DC. Indeed, PD-1-Ig bound to B7-DC transfectants but not to B7.1 transfectants. The binding of PD-1-Ig to B7-DC transfectants is lower than the binding of CTLA-4-Ig and CD28-Ig to B7.1 transfectants, although it is totally specific. Further confirmation of the binding of PD-1 to B7-DC was obtained from positive staining of stable B7-DC-GFP transfectants with PD-1-Ig (data not shown). These results suggest that PD-1 is indeed a receptor for B7-DC.
B7-DC Functions As a Costimulatory Molecule for T Cells.
To determine whether B7-DC possesses costimulatory activity, we produced a soluble B7-DC-Ig fusion protein which could be added to T cell stimulation assays. We first measured the proliferative response to increasing amounts of plate-bound anti-CD3 in the presence of B7-DC-Ig, B7.1-Ig, or an isotype control. Fig. 4 a shows that, in the presence of suboptimal amounts of anti-CD3, B7-DC costimulates a T cell proliferative response to a greater level than B7.1. Furthermore, B7-DC costimulates proliferative responses among CD4 cells to a much greater extent than among CD8 cells (Fig. 4 b). B7-DC fails to stimulate T cells in the absence of a TCR-dependent stimulus, indicating that it provides a true costimulatory signal.
Patterns of Lymphokine Production Costimulated by B7-DC. The best characterized T cell costimulatory activity of B7 family molecules relates to lymphokine production, one of the most important mediators of T cell function. We therefore analyzed production of a panel of lymphokines by T cells stimulated with anti-CD3 or MHC-peptide and costimulated with B7-DC-Ig, B7.1-Ig, or an isotype control. For experiments in which signal 1 is MHC-peptide, RENCA cells (which do not express any endogenous B7.1, B7.2, or B7-DC by RT-PCR analysis) were treated with IFN-γ to induce MHC class II expression and loaded with the I-Ed-restricted HA110-120 peptide (FERFEIFPKE; reference 33). Purified splenic T cells from an I-Ed plus HA110-120-specific TCR transgenic mouse line were added and the proliferative response was measured in the presence of either B7-DC-Ig, B7.1-Ig, or an isotype control. Fig. 4, c and d, demonstrate that patterns of lymphokine costimulation are fairly consistent whether anti-CD3 or MHC-peptide complex is used as "signal 1". Significantly, although both costimulate similar levels of IL-2, B7-DC costimulates much greater levels of IFN-γ production than B7.1. In contrast, B7-DC fails to costimulate IL-4 or IL-10 production. These findings implicate B7-DC as a DC signal to drive Th1 responses.
Discussion
We have characterized here a new B7 family member whose expression is highly restricted to DCs and which has potent costimulatory properties for T cells. In particular, B7-DC costimulates IFN-γ production much more strongly than B7.1 while not costimulating IL-4 or IL-10 production. These in vitro findings suggest that, in addition to IL-12, B7-DC may be an important DC mediator of Th1 responses. The human orthologue of B7-DC is also expressed in monocyte-derived DCs, though confirmation of biological equivalence awaits functional studies with human T cells. Mature monocyte-derived DCs in humans and CD8α+ DCs in mice induce very strong Th1 responses, whereas immature human DCs induce IL-10 production and differentiation of regulatory T cells that inhibit Th1 effector responses (34)(35)(36). The role of plasmacytoid DCs in humans and murine CD8α− DCs in mediating Th2 responses remains unresolved (37,38). It will be important to directly correlate B7-DC expression in these different DC subtypes with their ability to initiate Th1 effector responses.
The restricted expression of B7-DC contrasts with that of the previously described B7 family members, suggesting that it participates in different immune responses than the defined B7.1/2 pathways. Although a weak B7-DC signal was detected by RT-PCR in activated macrophages, preliminary real-time RT-PCR analysis indicates that B7-DC mRNA expression in DCs is >15-fold higher than in activated macrophages (data not shown). Antibody staining likewise detects very low levels of B7-DC on the surface of activated macrophages, but it is unclear whether such low levels could contribute significantly to T cell activation by macrophages.
Although B7-DC fails to bind CD28 or CTLA-4, it does bind PD-1, a receptor for B7-H1/PD-L1 (18,39,40). The strong homology between B7-DC and B7-H1/PD-L1 (greater than that between B7.1 and B7.2), the close physical linkage between B7-DC and hB7-H1/PD-L1, and their binding to a common receptor suggest that they are related by a relatively recent duplication event. This is highly analogous to B7.1 and B7.2, which both map to within 1 megabase on mouse chromosome 16 and human chromosome 3 (41). It will be important to discern the relative biologic roles of B7-DC versus B7-H1/PD-L1 as mediated by PD-1 and other putative receptor(s). PD-1 is expressed subsequent to T cell activation and appears to inhibit T cell activation and induce apoptosis under conditions of T cell stimulation with high concentrations of anti-CD3. PD-1 knockout mice develop an autoimmune syndrome (18) characterized by clinical manifestations of hypertrophic cardiomyopathy. In contrast, Dong et al. (17) found that B7-H1/PD-L1 costimulated T cell proliferation and cytokine release at lower concentrations of anti-CD3. Thus, by analogy to the situation with CD28 and CTLA-4, PD-L1 may be a counterreceptor for an as yet unidentified activating receptor. Despite sharing PD-1 as a binding partner, B7-DC and B7-H1/PD-L1 appear to demonstrate differences in lymphokine costimulation patterns. For example, B7-H1/PD-L1 has been reported to costimulate IL-10 production by T cells, whereas B7-DC does not. The molecular basis for these differences awaits the elucidation of binding affinities as well as the complete receptor complement for these two costimulatory molecules.
"year": 2001,
"sha1": "6757fe8dee2f8b6ac9a3385108b6a38d40558790",
"oa_license": "CCBYNCSA",
"oa_url": "http://jem.rupress.org/content/193/7/839.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "6757fe8dee2f8b6ac9a3385108b6a38d40558790",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
221756137 | pes2o/s2orc | v3-fos-license | Research on Design Aesthetics and Cultural Connotation of Gold and Silver Interlocking Buckle in the Ming Dynasty
The gold and silver interlocking buckle is a unique accessory commonly seen in women's clothing of the Ming Dynasty. It is composed of a button loop with a bayonet, a flat spherical button and two keepers. This article organizes and summarizes the types of gold and silver interlocking buckles in the Ming Dynasty; analyzes the ingenious structure of this kind of interlocking buckle and the unique design aesthetics reflected in its shapes; and expounds the rich cultural connotation of the gold and silver interlocking buckle: it is an intuitive expression of the aesthetic concepts of Ming Dynasty society, indirect proof of the strong trend of conspicuous consumption in Ming society against the background of economic prosperity, and a material medium through which Ming Dynasty women practiced the concept of covering the body. This rich cultural connotation and aesthetic value are a source of inspiration and a cultural cornerstone for today's clothing design.
I. INTRODUCTION
After the Song and Yuan Dynasties, the craftsmanship of gold and silver ware reached a new peak in the Ming Dynasty, and the types and styles of gold and silver ware were also enriched and developed. Among them, the gold and silver interlocking buckles appearing in Ming Dynasty women's clothes are noteworthy examples. This kind of interlocking buckle was generally used for fixing the collar or the placket. Each pair of buckles consists of a button loop, a button, and two keepers. At present, academic research on gold and silver interlocking buckles has begun to take shape. For example, Yang Zhishui classified this kind of interlocking buckle as jewelry and studied the pattern of the "Butterflies and Flower" theme [1]; Wang Jiaqi studied this kind of interlocking buckle from the aspects of structural evolution, application methods and visual design [2]; Chen Fang mainly discussed the origin of this kind of interlocking buckle [3]. These studies allow us to appreciate the unique style of gold and silver interlocking buckles in the Ming Dynasty, and this article will make an in-depth discussion of the design of gold and silver interlocking buckles in the Ming Dynasty and the culture of the times, exploring the unique design aesthetics contained in the gold and silver interlocking buckle and the cultural connotation of the times behind it.
II. DESIGN AESTHETICS OF GOLD AND SILVER INTERLOCKING BUCKLE IN MING DYNASTY
A. The beauty of structure: clever and practical structural design

The button loop and the button in the gold and silver interlocking buckle form an open-close component with the function of opening and closing the buckle, and the two keepers are used to fix the button loop and the button. The beauty of structure of the Ming Dynasty gold and silver interlocking buckle is mainly reflected in the structural design of the open-close component.
The open-close component of the fabric braid button uses the suppleness of the fabric to achieve the locking and releasing of the button loop around the button, so as to achieve the purpose of convenient opening and closing. Fabric braid buckles had already appeared in Song costumes, as shown in Fig. 1; the Yuan Dynasty fabric braid buckles had shown a certain decorative effect, as shown in Fig. 2; and in the Ming Dynasty, gold and silver interlocking buckles appeared for the first time. From the fabric braid buckle to the gold and silver interlocking buckle, how to deal with the "opposite" relationship between the "stiffness" of gold and silver materials and the "suppleness" required by the button loop became a key issue to be solved by artisans of the Ming Dynasty. Judging from the unearthed objects, the Ming Dynasty artisans used the bayonet on the button loop to realize a "unified" relationship between the structure and material of the gold and silver interlocking buckle's open-close component. In the Ming Dynasty gold and silver interlocking buckle, two pieces of metal extending inward from the inner edge of the button loop compose the bayonet. In order to ensure that the button can pass through the gap in the middle of the bayonet, the button is also changed from a spherical shape to a flat spherical shape. Fig. 3 shows a Ming Dynasty gold and silver buckle in the open state. When closing the buckle, one simply twists the button to feed it into the button loop, and then twists it again so that the button is firmly locked by the bayonet on the button loop. This structure greatly improves the applicability of the gold and silver interlocking buckle.
The appearance of the interlocking buckle with this open-close structure promoted the emergence and popularity of the mandarin collar and the Chinese-style jacket with buttons down the front, and also laid the foundation for the subsequent use of a large number of Chinese knot buckles. Moreover, the structural design of this kind of buckle shows the Ming Dynasty craftsmen's ingenuity in designing the structure of utensils, and also reflects their profound ability to control gold and silver materials. The existence of these excellent craftsmen in turn confirms the rationality and historical inevitability of Ming Dynasty gold and silver jewelry becoming the pinnacle of traditional Chinese jewelry.
B. The beauty of form: highlighting the decorative function
The gold and silver interlocking buckles of the Ming Dynasty pursue a balanced beauty of bilateral symmetry in their overall structure, but in their shapes they show the characteristics of exquisite elegance and rich variation. These Ming Dynasty buckles have various form themes, such as "蝶采花 (Butterflies and Flower)", "云托月 (Clouds and Moon)", "双银锭 (Double Silver Ingots)" and so on, and the typical themes of the Ming Dynasty gold and silver interlocking buckles' forms are presented in Table I. All the interlocking buckles show the formal beauty of a high integration of shape and structure, which conceals the functional components of the interlocking buckles and highlights their decorative effect to the greatest extent.
The concealment of the functional components of the gold and silver interlocking buckles has two aspects: the concealment of the open-close component and of the keepers. Among all the shape designs, the most typical example is the design of the "蝶采花 (Butterflies and Flower)" theme, and the following is a detailed analysis of the shape design of this theme. The first aspect is the concealment of the open-close component, which is composed of a button loop and a button. In the gold and silver interlocking buckles of the "蝶采花 (Butterflies and Flower)" theme, the button loop and the button make up a complete flower in bloom when they are closed; and when they are opened, the button becomes the center of the flower, and the button loop becomes the petals. The second aspect is the concealment of the keepers. The two keepers of the gold and silver interlocking buckle become two lifelike butterflies flying around the "flower", and the holes on the keepers used for threading also become patterns on the wings of the butterflies. The shape of this theme is in line with people's cognition of natural things, but it also breaks the laws of the natural world. The design activities of the artisans of the Ming Dynasty made the forms of the natural world appear in ornaments, which fully demonstrated their great powers of association and imagination. In addition, the shape of the "Butterflies and Flower" theme also appeared in other gold and silver ornaments of the Ming Dynasty, which shows that the Ming people loved it very much, and also proves how exquisite the design of this theme is.
The concept behind the form design of the Ming Dynasty gold and silver interlocking buckles is to greatly enhance their decorative nature while retaining the opening and closing function of buckles. It is with the support of this design concept that the gold and silver interlocking buckle remained one of the favorite fashion accessories of Ming Dynasty women.
III. CULTURAL CONNOTATION OF GOLD AND SILVER INTERLOCKING BUCKLE IN MING DYNASTY

A. Intuitive expression of the social aesthetics concept in the Ming Dynasty
Through the Ming Dynasty gold and silver interlocking buckles, we can see that people in the Ming Dynasty valued real life and loved auspicious patterns. Gold and silver ornaments inlaid with jewels had flowed from West Asia to China during the Sui and Tang Dynasties. At that time, the form of the ornaments still retained many exotic features. After the transition between Song and Yuan, the gold and silver ornaments of the Ming Dynasty preferred to express everyday things and carried a longing for a better life. For example, the "蝶采花 (Butterflies and Flower)" theme interlocking buckles of the Ming Dynasty used "butterflies" to refer to the "husband" and "flower" to refer to the "wife", implying a sweet and loving relationship between husband and wife; the "双银锭 (Double Silver Ingots)" theme interlocking buckles symbolized wealth through the images of silver ingots, expressing the Ming people's pursuit of a rich life; the "芙蓉捧寿 (Furong Flowers and Shou)" gold interlocking buckles, unearthed from Ding Ling, used the homonym "fushou (福寿)" of "fu (芙)" and "shou (寿)" to pray for their owner to live a long and healthy life; the patterns of sunflowers and lotuses on Ming interlocking buckles became symbols of "多子多孙 (many children and grandchildren)" because they produce many seeds. These patterns on the Ming interlocking buckles are all related to a beautiful daily life, which embodies the Ming people's concern with real life.
Also, through the Ming Dynasty gold and silver interlocking buckles, we can see that the Ming people adored gold, silver and jewel materials. The interlocking buckle is a functional accessory in clothing, which could be made from cheaper and more common fabrics, but precious materials such as gold, silver, jade and jewels were often used in Ming Dynasty interlocking buckles. For example, in the tomb of Zhu Youbin, the Ming Dynasty Prince of Yi, in Jiangxi, two large gold interlocking buckles of the "蝶采花 (Butterflies and Flower)" theme were discovered [4]. Each pair of them was inlaid with seventeen jewels: six on the button loop, one on the button, and ten on the two keepers, as shown in Fig. 4. In addition, in an ordinary person's tomb of the early Ming Dynasty, a pair of "蝶采葵花 (Butterflies and Sunflower)" theme silver buckles was also discovered [5]. From members of the royal family to ordinary people, all loved to use precious materials such as gold, silver, jade and jewels to make interlocking buckles, which is enough to show the Ming people's aesthetic concept of adoring gold, silver and jewels.
B. Indirect proof of strong conspicuous consumption trends
With the prosperity and development of the commodity economy, the consumption concept of the Ming people gradually changed from "thrifty" in the early Ming Dynasty to "luxurious" in the middle and late Ming Dynasty. The number of gold and silver buckles appearing in tombs of the middle and late Ming Dynasty increased tens of times compared with the beginning of the dynasty, a result of the conspicuous consumption behavior of Ming Dynasty society.
The American economist Thorstein Veblen introduced the term "conspicuous consumption" in The Theory of the Leisure Class [6]; it refers to consumption activities that provide evidence of wealth and power in order to obtain or maintain respect and honor. He summed up the two main motivations for conspicuous consumption: one is invidious comparison, and the other is pecuniary emulation. Invidious comparison refers to strata with higher wealth levels striving to differentiate themselves from strata with lower wealth levels through conspicuous consumption. The aristocracy of the Ming Dynasty used gold, silver, jewels and other very expensive materials, rather than cheaper fabrics, to make a great number of interlocking buckles. This is conspicuous consumption motivated by invidious comparison, and its purpose was to prove their high wealth level, highlight their identity and status, and maintain their image of belonging to a higher class. For example, 71 pairs of gold and silver interlocking buckles were unearthed in the coffins of the two empresses of the Wanli Emperor [7], and 105 pairs of gold and silver interlocking buckles were unearthed in the tomb of the Ming Dynasty Princess of Yi [8]. These buckles were only the tip of the iceberg of their gold and silver funerary objects, and the burial of such a large number of gold and silver vessels truly proves the supreme power and high wealth level of the royal family. Conspicuous consumption also appeared in the lower strata of the Ming Dynasty, motivated by pecuniary emulation. Pecuniary emulation refers to strata with lower wealth levels trying to imitate strata with higher wealth levels through conspicuous consumption in order to be considered one of them. As described in the fourteenth chapter of Jin Ping Mei Ci Hua [9], on Pan Jinlian's birthday: "只见潘金莲上穿了沉香色潞绸雁衔芦花样对衿袄儿,白绫竖领,妆花眉子,溜金蜂赶菊钮扣儿…… (Pan Jinlian was wearing a saffron-colored jacket of Lu-chou silk that opened down the middle and was decorated with a motif of wild geese holding bulrushes in their mouths. It had a stiff-standing white satin collar with purfled edging and gilt buttons that depicted honeybees rifling chrysanthemum blossoms...)" This is conspicuous consumption in the lower classes motivated by pecuniary emulation. In the traditional hierarchical concept of "scholar, farmer, artisan and merchant", as the wife of a merchant, Pan Jinlian's social status should have been low. But she implied the great wealth of her family by wearing gilded interlocking buckles in imitation of upper-class ladies, which aroused the envy of the other women. And it is the other women's envious gaze that could change Pan Jinlian's identity from a lowly merchant's wife to an upper-class lady.
After the middle Ming Dynasty, an extravagant ethos began to prevail, and the phenomenon of overstepping one's rank in women's clothing also became more and more common. All of the above proves that the conspicuous consumption trend was very strong in Ming society.
C. The material medium to practice the concept of covering the body
The stand collar with gold and silver interlocking buckles was one of the most distinctive and popular clothing structures in Ming Dynasty women's clothing. Fig. 5 shows a portrait of a Ming Dynasty woman wearing a stand-up collar with gold and silver interlocking buckles. Women in the Tang Dynasty loved to wear tube tops, which demonstrated the unrestrained thinking of Tang women about exposing their bodies. In the Ming Dynasty, the stand collar with gold and silver interlocking buckles was commonly used in women's clothing, showing the conservative concept of Ming women of covering their bodies. Women in the Ming Dynasty tended to cover their bodies partly because of the strengthening of the "贞节观 (chastity view)", and partly because of climate changes during the Ming Dynasty. And it was the gold and silver interlocking buckles that acted as the material medium for Ming Dynasty women to practice the concept of covering the body. The expression of the "贞节观 (chastity view)" in clothing was to hide the woman's body as much as possible with clothing. During the Southern Song Dynasty, when Zhu Xi was the magistrate of Zhangzhou, Fujian, he stipulated: "良家妇女出门,需用蓝夏布一幅围罩头和颈项,以避免妇女抛头露面 (when gentlewomen went out, they needed a piece of blue ramie cloth to mantle their heads and necks, to prevent the women from showing their faces in public)" [10]. In the Yuan Dynasty, Mrs. Ma in the Jie Fu Ma Shi Zhuan died of ulcers on her breast, just because she insisted that, as Mr. Yang's widow, she would rather die than be seen by men. It can be seen that the idea of women covering their bodies to avoid being seen by men had taken root in people's hearts because of the instruction in chastity for women that continued for hundreds of years during the Song and Yuan Dynasties. In the Ming Dynasty, the concept of women keeping their chastity became even more prevalent with the support of the law, and the idea of covering women's skin naturally continued. Therefore, the stand collar with gold and silver interlocking buckles could remain in Ming women's clothes.
In addition to the chastity view, climatic changes also prompted women in the Ming Dynasty to cover their bodies. During the Ming Dynasty, the average temperature in China was low. Using the structure of a stand collar with interlocking buckles made traditional Chinese clothes wrap the body more tightly and increased the warmth retention of the clothing. Zhu Kezhen studied the temperature changes in ancient China according to the growth of vegetation and the freezing records of rivers, lakes and seas. His research showed that the annual average temperature during the Tang Dynasty was the highest in the past 1700 years, and that the annual average temperature during the Ming Dynasty was often two to three degrees lower than during the Tang Dynasty [11]. These research results coincide exactly with the phenomenon that Ming Dynasty clothes wrapped the female body more tightly than Tang Dynasty clothes, in order to increase the warmth retention of the clothes.
The strengthening of the chastity concept and the changes in climate and temperature together pushed Ming Dynasty women's clothing toward covering the body. And the structure of the stand collar with interlocking buckles has lasted for several hundred years, enduring to become an important element of Chinese-style garments today.
IV. CONCLUSION
As a unique accessory that appeared in Ming Dynasty women's clothing, the gold and silver interlocking buckle not only decorated the life of Ming Dynasty women, but also embodied the design aesthetics of Ming Dynasty utensils and witnessed the cultural changes of Ming Dynasty society. In terms of structural design, the gold and silver interlocking buckle of the Ming Dynasty adjusted the open-close structure of the fabric braid buckle to suit the new material properties and improve its applicability. In terms of form design, the functional components of the gold and silver interlocking buckle were concealed within elegant shapes to enhance its decorative nature. In a word, its overall design reflects the aesthetic concept of "exquisite and applicable". Moreover, the popularity of gold and silver interlocking buckles in the Ming Dynasty became an intuitive manifestation of Ming society's aesthetic concern with real life, its love of auspicious patterns, and its advocacy of gold and silver jewelry, and provided indirect proof of the strong conspicuous consumption trends in Ming society. At the same time, it also acted as the material medium for Ming Dynasty women to practice the concept of covering the body. | 2020-09-16T22:52:11.404Z | 2020-09-07T00:00:00.000 | {
"year": 2020,
"sha1": "0713dcaeb177889afbf2992ce1645817610ede6f",
"oa_license": "CCBYNC",
"oa_url": "https://www.atlantis-press.com/article/125944399.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0713dcaeb177889afbf2992ce1645817610ede6f",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Art"
]
} |
18337248 | pes2o/s2orc | v3-fos-license | Resonant Slepton Production
We consider the production of resonant sleptons via R-parity violation, followed by gauge decays to a charged lepton and a neutralino, which then decays via R-parity violation. This gives a signature of two like-sign charged leptons. We find a background at Run II of 0.14 ± 0.13 events with an integrated luminosity of 2 fb$^{-1}$. This enables us to probe R-parity violating couplings of $2 \times 10^{-3}$ for a slepton mass of 100 GeV, and slepton masses up to 300 GeV for R-parity violating couplings of $10^{-2}$.
INTRODUCTION
Sleptons can be produced on resonance at the Tevatron via the R-parity violating ($\not\!\!R_p$) term $L_i Q_j \bar{D}_k$ in the superpotential. The slepton can then decay again via the $\not\!\!R_p$ Yukawa coupling. This has been considered elsewhere [1][2][3]. It can also decay via the standard gauge decays of the MSSM [1], which we consider here. There are two possible gauge decays of the charged slepton, i.e. $\tilde{\ell}^- \to \tilde{\chi}^0 \ell^-$ and $\tilde{\ell}^- \to \tilde{\chi}^- \nu_\ell$. We focus on the decay of the slepton to a neutralino and a charged lepton. The neutralino in turn will then decay via the same $\not\!\!R_p$ operator $L_i Q_j \bar{D}_k$, e.g. $\tilde{\chi}^0 \to \ell_i^- u_j \bar{d}_k$. Since the neutralino is a Majorana fermion, it can also decay to the charge-conjugate final states with equal probability. In spirit, this is similar to the HERA process considered in [4]. The tree-level Feynman diagrams for the slepton production and the neutralino decay are shown in Fig. 1 and Fig. 2, respectively. Due to the Majorana nature of the neutralino, we have a signature of two like-sign charged leptons. In the following we shall consider only electrons or muons, i.e. we focus on the operators $L_e Q_j \bar{D}_k$ and $L_\mu Q_j \bar{D}_k$. We expect these leptons to have high transverse momentum, $p_T$, and to be well isolated, whereas the leptons from the Standard Model backgrounds have lower $p_T$ and are also poorly isolated. We therefore hope that this signature can be seen above the background if we apply isolation and $p_T$ cuts.
BACKGROUNDS
In the following we combine the backgrounds for both electrons and muons. The main backgrounds to this like-sign dilepton signature are as follows:

1. $b\bar{b}$ production followed by the production of at least one $B^0_{d,s}$ meson, which undergoes mixing. If the two $b$-quarks in the event decay semi-leptonically, this gives two like-sign charged leptons.
2. $t\bar{t}$ production followed by $t \to W^+ b \to e^+ \nu_e b$, and $\bar{t} \to W^- \bar{b} \to q\bar{q}\bar{b} \to q\bar{q} W^+ \bar{c} \to q\bar{q} e^+ \nu_e \bar{c}$.

3. Single top production (s and t channel) followed by semi-leptonic decays of the top and of the B-meson produced after hadronization.

4. Non-physics backgrounds from fake leptons and charge misidentification.

There are also backgrounds due to the production of weak boson pairs, i.e. WZ and ZZ, where at least one of the charged leptons is not detected [5]. These require a full simulation including the detector. We do not consider them here.
We use HERWIG 6.0 [6][7][8] to simulate these background processes. The program includes the computation of the supersymmetric spectrum and the MSSM decay branching ratios from the ISASUSY program [9]. Due to the high cross section for $b\bar{b}$ production, it was necessary to impose a parton-level cut of 20 GeV on the $p_T$ of the $b$ and $\bar{b}$ to enable us to simulate a sufficient number of events. In Fig. 3 we show the distribution of events (using the full Monte Carlo simulation) as a function of the (parton-level) $p_T$ of the bottom quark for two different values of the lepton $p_T$ cut. We did not simulate any events for which the $p_T$ of the bottom quark was below 20 GeV, since the cross section is too large. If we extrapolate using Figs. 3a, b to lower $b$-quark $p_T$, we can see that for a lepton $p_T$ cut of 20 GeV, Fig. 3b, our approximation should be good, i.e. we expect the area under the curve for $p_T(b) < 20$ GeV to be negligible. For $p_T(\ell) > 15$ GeV, Fig. 3a, we would still expect a significant number of events at $15~\mathrm{GeV} < p_T(b) < 20~\mathrm{GeV}$. Besides the parton-level cut, we forced the B-mesons to decay semi-leptonically. This means we neglect the production of leptons from the decay of charmed mesons, which should also be a good approximation, as we expect the leptons produced from these decays to be poorly isolated. Table 1 shows the backgrounds with a $p_T$ cut on the leptons of 20 GeV and an isolation cut of 5 GeV. We have used the leading-order cross section for the $b\bar{b}$ and single top backgrounds, and the next-to-leading-order cross section, with next-to-leading-log resummation, from [10] for the $t\bar{t}$ cross section. In both cases the error on the cross section is the effect of varying the scale between half and twice the hard scale, and the error on the number of events is the error in the cross section and the statistical error from the simulation added in quadrature. Realistically we cannot reduce these statistical errors due to the large number of events we would need to simulate. We have implemented the full hadronization using HERWIG 6.0.
With these cuts and using Poisson statistics, a 5σ fluctuation of the total background corresponds to 4 events with an integrated luminosity of 2 fb$^{-1}$. Hence we consider 4 signal events to be sufficient for a discovery of the new $\not\!\!R_p$ signal process.
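To make the counting argument explicit, here is a minimal sketch of this kind of Poisson calculation (not from the paper; it assumes the central background value alone, whereas how the ±0.13 uncertainty is folded into the quoted threshold is not specified in the text, so the exact event count at 5σ may differ):

```python
from scipy.stats import norm, poisson

b = 0.14  # expected background events for 2 fb^-1 (from the text)

for n in range(1, 7):
    p = poisson.sf(n - 1, b)   # sf(n-1) = P(N >= n) under background only
    z = norm.isf(p)            # one-sided Gaussian-equivalent significance
    print(f"n = {n}: p = {p:.2e}  ->  {z:.2f} sigma")
```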
SIGNAL
To simulate the signal and the effect of the cuts, we modified HERWIG 6.0 [6][7][8] to include the production process, the MSSM decay of the slepton, and the $\not\!\!R_p$ decay of the neutralino. The decay rate of the neutralino and its branching ratios were calculated in the code, and a matrix element for the neutralino decay [11,12] was implemented in the Monte Carlo simulation.
We use the program to estimate the acceptance of the signal process, i.e. the fraction of the like-sign dilepton events which pass the cuts, multiplied by the branching ratio to give a like-sign dilepton event. Fig. 6 shows the acceptance for two different SUGRA points, with an isolation cut on the leptons of 5 GeV and a cut $p_T(\ell) > 20$ GeV. As can be seen in Fig. 6b, the acceptance drops in two regions. For lower values of $M_0$, the slepton is not much heavier than the neutralino. The charged lepton from the decay of the slepton is then quite soft and gets rejected by the $p_T$ cut. For large values of $M_0$ the slepton is much heavier than the neutralino. The neutralino then gets a significant boost from the slepton decay. The neutralino decay products are folded forward in the direction of this boost, causing the event to be rejected by the lepton isolation cut.
To estimate the acceptance properly we need to run a scan of the SUGRA parameter space using the Monte Carlo event generator. This still remains to be done. To give some idea of what range of couplings and masses we may be able to probe, we instead assume an acceptance of 10% using the same cuts as before. We can then estimate the range of couplings which may be accessible.
As can be seen in Figs. 7 and 8, the production cross section for $\lambda'_{211} = 10^{-2}$ is sufficient to produce a signal which is more than 5σ above the background for large regions of the SUGRA parameter space. In some regions, where the neutralinos become Higgsino-like, the cross section drops. The cross section also drops as we approach the region where the neutralino is heavier than the slepton and the resonance becomes inaccessible.
We focused on the coupling $\lambda'_{211}$ because the experimental bound on $\lambda'_{111}$ from neutrinoless double beta decay is very strict [13,14]. The bound on $\lambda'_{111}$ weakens with the squark mass squared, and for squark masses above about 300 GeV (which we expect in the SUGRA scenario for the heavier slepton masses) $\lambda'_{111} \approx 10^{-2}$ is experimentally allowed, so our analysis applies to this case as well. $\lambda'_{211} \approx 10^{-2}$ is well within the present experimental bounds [14].
In these figures we see that we are sensitive to slepton masses up to 300 GeV for couplings of $10^{-2}$. The production cross section scales with the square of the coupling. For slepton masses around 100 GeV, just above the LEP limits, we can thus probe couplings down to about $2 \times 10^{-3}$.
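Since the cross section scales as the coupling squared, the coupling reach at fixed mass follows from rescaling a reference point to the 4-event threshold. A minimal sketch (the 0.5 pb reference cross section at $\lambda' = 10^{-2}$ is a hypothetical placeholder, not a number from the paper; only the 10% acceptance and 2 fb$^{-1}$ luminosity are taken from the text):

```python
import math

def min_coupling(sigma_ref_fb: float, lam_ref: float,
                 lumi_fb: float = 2.0, acceptance: float = 0.10,
                 n_required: float = 4.0) -> float:
    """Smallest coupling giving n_required events, using sigma ~ lambda'^2."""
    n_ref = sigma_ref_fb * lumi_fb * acceptance  # events at the reference coupling
    return lam_ref * math.sqrt(n_required / n_ref)

# Hypothetical reference point: 0.5 pb = 500 fb at lambda' = 1e-2.
print(min_coupling(sigma_ref_fb=500.0, lam_ref=1e-2))  # -> 2e-3
```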
CONCLUSION
We have performed an analysis of the physics background for like-sign dilepton production at Run II and find that, with an integrated luminosity of 2 fb$^{-1}$, a cut on the transverse momentum of the leptons of 20 GeV and an isolation cut of 5 GeV, the background is 0.14 ± 0.13 events. This means that 4 signal events would correspond to a 5σ discovery, although in a full experimental analysis the non-physics backgrounds must also be considered.
Using a full Monte Carlo simulation of the signal, including a calculation of the neutralino decay rate and its partial widths, and a matrix element in the simulation of the decay, we found that the acceptance for the signal varies but, for a reasonable range of parameter space, is 10% or greater.
When we then look at the cross section for the production of $\tilde{\chi}^0 \ell^+$, we find that we can probe $\not\!\!R_p$ couplings of $2 \times 10^{-3}$ for a slepton mass of 100 GeV, slepton masses up to 300 GeV for $\not\!\!R_p$ couplings of $10^{-2}$, and higher masses if the coupling is larger. | 2014-10-01T00:00:00.000Z | 1999-03-20T00:00:00.000 | {
"year": 1999,
"sha1": "1600af9c72016c7e3476d02edb6d0f61bfd74011",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1600af9c72016c7e3476d02edb6d0f61bfd74011",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
11370123 | pes2o/s2orc | v3-fos-license | Design of super-elastic biodegradable scaffolds with longitudinally oriented microchannels and optimization of the channel size for Schwann cell migration
Abstract We newly designed super-elastic biodegradable scaffolds with longitudinally oriented microchannels for the repair and regeneration of peripheral nerve defects. Four-armed poly(ε-caprolactone-co-D,L-lactide)s (P(CL-co-DLLA)s) were synthesized by ring-opening copolymerization of CL and DLLA from terminal hydroxyl groups of pentaerythritol, and acryloyl chloride was then reacted with the ends of the chains. The end-functionalized P(CL-co-DLLA) was crosslinked in a cylindrical mold in the presence of longitudinally oriented silica fibers as the templates, which were later dissolved by hydrofluoric acid. The elastic moduli of the crosslinked P(CL-co-DLLA)s were controlled between 10⁻¹ and 10² MPa at 37 °C, depending on the composition. The scaffolds could be elongated to 700% of their original size without fracture or damage ('super-elasticity'). Scanning electron microscopy images revealed that well-defined and highly aligned multiple channels consistent with the mold design were produced in the scaffolds. Owing to their elastic nature, the microchannels in the scaffolds did not collapse when they were bent to 90°. To evaluate the effect of the channel diameter on Schwann cell migration, microchannels were also fabricated in transparent poly(dimethylsiloxane), allowing observation of cell migration. The migration speed increased with channel size, but the Young's modulus of the scaffold decreased as the channel diameter increased. These findings may serve as the basis for designing tissue-engineering scaffolds for nerve regeneration and investigating the effects of the geometrical and dimensional properties on axonal outgrowth.
Introduction
Peripheral nerve injuries caused by trauma and iatrogenic injury are a worldwide problem and can result in significant, life-long disability [1]. Although nerves can regenerate on their own if injuries are small, effective nerve regeneration and functional recovery subsequent to a larger injury are still a clinical challenge. The standard treatment of peripheral nerve injury involves direct end-to-end surgery of the damaged nerve ends or the use of an autologous nerve graft [2,3]. Autologous nerve graft transplantation is a feasible treatment for injuries across gaps greater than a few millimeters, but it is limited by donor-site morbidity and insufficient donor tissue, impairing complete functional recovery [4,5]. Thus, bioengineering strategies for the peripheral nervous system are focused on alternatives to the nerve graft. Tissue engineering has introduced innovative approaches to promote and guide peripheral nerve regeneration by using biodegradable materials [6][7][8]. Synthetic materials are attractive because their chemical and physical properties, such as degradation rate, porosity and mechanical strength, can be optimized for a particular application [9]. A number of synthetic materials have been explored for use in aiding nerve regeneration [4,6,10]. They are mainly based on polyesters, such as poly(glycolide) (PGA), poly(L-lactide) (PLLA), poly(L-lactide-co-glycolide) (PLGA), poly(ε-caprolactone) (PCL) and poly(3-hydroxybutyrate) (PHB), because of their availability, biodegradability and approval by the Food and Drug Administration (FDA). Toba et al [11] designed a bioabsorbable PGA tube filled with collagen sponge (PGA-collagen tube) as a nerve connective guide for nerve regeneration. The PGA-collagen conduit supported nerve repair and functional recovery after grafting into an 80 mm nerve gap in a dog [12,13]. They also developed a nerve guide tube created by braiding together PLLA and PGA for the repair of long nerve defects. By introducing the slowly decomposing PLLA, better regeneration was achieved compared to that of PLGA alone [14]. On the other hand, nerve guides composed of a semicrystalline copolymer of ε-caprolactone and L-lactide (P(CL-co-LLA)) were found to degrade slowly, leaving significant amounts of biomaterial around the regenerated nerve after 2 years and causing a chronic foreign body reaction with formation of scar tissue [15][16][17]. To overcome this problem, copolyesters of ε-caprolactone and D,L-lactide (DLLA, replacing LLA) were developed to obtain a more amorphous scaffold with controllable degradation properties [18,19]. Nerve guides constructed of P(CL-co-DLLA) possessed favorable properties for bridging a 10 mm gap in the sciatic nerve of the rat [20,21], and resulted in better functional recovery compared to autologous nerve grafts [22,23]. Despite the plethora of nerve prostheses developed, however, few approaches have entered clinical practice [24,25].
In the case of peripheral nerve regeneration and tissue engineering, evidence obtained from both in vitro and in vivo studies indicates that Schwann cells (SCs) are crucial and play both structural and functional roles [26]. When a peripheral nerve is damaged, SCs alter their behavior to become involved in Wallerian degeneration and Büngner bands [27][28][29][30]. In Wallerian degeneration, SCs grow in ordered columns along the endoneurial tube, creating a Büngner band that protects and preserves the endoneurial channel. SCs also elaborate neurotrophic substances that enhance regrowth, in conjunction with macrophages, after peripheral nerve injury [6]. Additionally, SCs express integrin surface proteins that interact with surface proteins at the growth cone of regenerating axons. When SCs were pre-seeded into nerve guidance channels, the injured peripheral nerves regenerated at a faster rate and over longer distances than without pre-seeding [31,32]. From these perspectives, appropriate synthetic materials for artificial conduits must (1) be readily formed into a conduit with the desired dimensions to support SC migration, (2) be pliable and easy to handle and suture, yet maintain their shape and resist collapse during implantation, and (3) protect the regenerating axons in the lumen from the environment, yet be porous and semipermeable enough to allow the diffusion of biomolecules such as growth factors.
To design promising artificial conduits, here we describe a novel technique for preparing 'super-elastic' scaffolds, which are composed of biodegradable P(CL-co-DLLA)s with longitudinal microchannels and have a channel geometry optimized for the migration of SCs. We define the term 'super-elastic' as follows: (1) the elastic modulus is lower than 5 MPa, and (2) the maximum strain is higher than 500%. First, end-functionalized four-armed P(CL-co-DLLA) macromonomers with various CL/DLLA compositions were synthesized according to the previously reported protocol (scheme 1). The macromonomers were then crosslinked in a cylindrical mold containing uniformly spaced silica capillary fibers. The silica fibers were later dissolved with hydrofluoric acid (HF). The thermal and mechanical properties were characterized by differential scanning calorimetry (DSC) and tensile tests, respectively. The channels produced within the scaffolds were observed by scanning electron microscopy (SEM). To evaluate the effect of the channel diameters on SC migration, microchannels were also fabricated in transparent poly(dimethylsiloxane) (PDMS). SCs were plated onto poly-L-lysine-coated cell culture dishes covered with PDMS microchannels, and the migration distances were evaluated.
Materials
ε-Caprolactone was purchased from Tokyo Kasei (Tokyo, Japan) and purified by distillation over calcium hydride, which was purchased from Wako Pure Chemical Industries (Tokyo, Japan), under reduced pressure. Pentaerythritol and acryloyl chloride were also purchased from Tokyo Kasei and used as received. DLLA and LLA were kindly supplied by the Musashino Chemical Laboratory (Tokyo, Japan) and recrystallized twice from ethyl acetate before use. Triethylamine was purchased from Wako Pure Chemical Industries, Ltd, and dehydrated by distillation over potassium hydroxide. Tin octanoate, HF solution (46%) and other chemicals were also purchased from Wako Pure Chemical Industries, Ltd. Benzoyl peroxide (BPO) was purchased from Sigma (St Louis, Missouri, US) and used as received. PDMS prepolymer was purchased from Dow Corning Corporation (Midland, Michigan, US). Negative photoresist SU8-50 was purchased from Microchem Corporation (Massachusetts, US). (Scheme 1. Preparation schemes of pliable biodegradable tubes with microchannels. Four-armed P(CL-co-DLLA)s are synthesized by ring-opening copolymerization of CL and DLLA from terminal hydroxyl groups of pentaerythritol. Acryloyl chloride is then reacted with the ends of the chains. The end-functionalized P(CL-co-DLLA) is crosslinked in a cylindrical mold in the presence of longitudinally oriented silica fibers as the templates, which are later dissolved with HF. SEM images of the silica fibers (a) and the crosslinked P(CL-co-DLLA) before (b) and after etching (c).)
Synthesis of four-armed macromonomers
Four-armed copolymers with different CL and DLLA compositions were synthesized by ring-opening copolymerization from terminal hydroxyl groups of pentaerythritol using tin octanoate as a catalyst, according to our previous reports [18,[33][34][35]. Acryloyl chloride was then reacted with the ends of the branched chains. A four-armed PLLA macromonomer was also synthesized using the same protocol. The structures and the molecular weights were estimated by proton nuclear magnetic resonance (¹H NMR) spectroscopy (JEOL, Tokyo, Japan) and gel permeation chromatography (GPC, JASCO International, Tokyo, Japan).
Preparation of crosslinked materials with microchannel structures
Silica capillary fibers with various diameters (8, 150, 350 and 660 µm) were inserted in a glass tube of 1.5 mm inner diameter, and a xylene solution containing the four-armed macromonomers (40 wt%) and BPO (1.5 wt%) was injected. The macromonomer solution was then cured at 80 °C for 2 h. After the reaction, the crosslinked samples were removed from the glass tube and immersed in HF solution at room temperature for 24 h to dissolve the silica templates. Films for the tensile tests were prepared by crosslinking the macromonomers between glass slides with a 0.2 mm thick Teflon spacer.
Characterizations
The thermal properties of the macromonomers before and after crosslinking were measured by DSC (DSC6100, Seiko Instruments, Chiba, Japan). The measurements were conducted from 0 to 120 °C at a heating rate of 5 °C min⁻¹. The mechanical properties of the crosslinked films were characterized with a tensile tester (EZ-S 500N, Shimadzu, Kyoto, Japan) equipped with a heating chamber (Chromato chamber M-600FN, TAITEC, Saitama, Japan). The tensile tests were carried out at an elongation rate of 10 mm min⁻¹ at various temperatures, and the elastic modulus was calculated from the initial slope of the stress-strain curve. The morphologies of the channels produced in the crosslinked tube were observed by SEM (JCM-5000, JEOL).
Fabrication of PDMS microchannels
To fabricate microchannels, PDMS prepolymer was cast against the patterned silicon master and cured at 60 °C for 3 h [36,37]. PDMS prepolymer was prepared by mixing PDMS base with a curing agent in a 10:1 ratio by weight and degassing the mixture under vacuum. The patterned silicon master was fabricated by photolithography. Briefly, a silicon wafer was spin-coated with SU8-50 and baked at 95 °C for 1 h. The photoresist was exposed to UV light (SUSS MicroTec MA6) for 15 s through transparent masks. After exposure, the masters were baked at 95 °C for 10 min and developed with SU-8 developer (Microchem). Patterned masters were passivated by 10 min exposure to silane under vacuum.
Migration assay of Schwann cells using PDMS channels
Primary rat Schwann cells were isolated from sciatic nerves of 4- to 5-day-old Wistar rats and cultured in Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal bovine serum (FBS). The following day, 10 µM cytosine arabinoside (AraC) was added to the medium for 48 h to eliminate dividing fibroblasts. The cells were then grown in DMEM containing 3% FBS with 3 µM forskolin and 20 ng ml⁻¹ neuregulin. PDMS microchannels were attached to 6-well dishes coated with poly-L-lysine. The medium was added to the dish, and air in the gutters on the surface of the PDMS microchannels was aspirated. Schwann cells were plated at a density of 1 × 10⁴ cells cm⁻². The migration length of Schwann cells into the gutters was observed daily for 12 days.
Preparation and characterization of P(CL-co-DLLA) scaffolds
PCL is a semicrystalline solid and an important class of biodegradable polymer. The mobility of its chains and its crystallinity markedly change at the melting temperature (T_m, see figure 1(a)). Crosslinked PCL is relatively elastic above T_m [38,39]; however, the T_m value (∼60 °C) is too high for biological applications. We have previously reported novel techniques to control the T_m of PCL by tailoring the number of branched chains [39,40] or incorporating non-crystalline segments [18,[33][34][35]. In this study, the T_m was adjusted below the human body temperature by copolymerizing CL with DLLA. Four-armed P(CL-co-DLLA) copolymers with various CL/DLLA ratios were synthesized by ring-opening polymerization from pentaerythritol. The obtained copolymers were then reacted with acryloyl chloride to introduce vinyl groups at the chain ends. The copolymer properties are summarized in table S1. All the copolymers possessed relatively narrow molecular weight distributions (Mw/Mn = 1.32-1.57), as determined by GPC. The DLLA contents of the copolymers were determined by ¹H NMR as 0, 9, 19, 29 and 39 mol% DLLA when the feed concentrations were 0, 10, 20, 30 and 40 mol%, respectively. These results indicate that copolymers with the desired branch number, molecular weights and CL/DLLA compositions have been obtained. Figure 1(b) and table S1 show the effect of DLLA content on the parameters of crosslinked P(CL-co-DLLA). Both the T_m and the melting enthalpy (ΔH, table S1 in the supporting information) decrease with the DLLA content. The T_m of the copolymer with 30 mol% DLLA (70/30), for example, is close to the body temperature (∼39 °C), whereas that of the original PCL homopolymer (100/0) is 59 °C. An even lower T_m (∼29 °C) was achieved for the 60/40 sample. These results indicate that incorporation of amorphous DLLA units prevents crystallization of PCL [41,42]. All samples were chemically crosslinked and therefore retained their original shape even above T_m. Thus the crosslinked PCL copolymers with T_m near or below 37 °C can be used as an elastic biodegradable scaffold in the human body. Figure 2 shows the temperature dependence of Young's modulus for the crosslinked copolymers. Young's modulus was determined from the initial slope of the stress-strain curve recorded during the tensile test (a numerical sketch of this extraction is given below). The Young's moduli of crosslinked PLLA, which is one of the most promising FDA-approved materials for nerve tissue engineering [43][44][45], varied between 650 and 830 MPa. These values were independent of temperature in the range 20-45 °C because PLLA has a higher T_m (around 143 °C). The crosslinked P(CL-co-DLLA) samples, on the other hand, show significantly lower values compared with PLLA. Their Young's moduli markedly decreased as temperature increased. The transition temperatures agree well with the T_m values obtained from the DSC measurements (as shown in figure 1(b)). For the 60/40 sample, Young's moduli of 11.0 ± 0.5 MPa, 370 ± 60 kPa and 80 ± 30 kPa were measured at 20, 30 and 40 °C, respectively; that is, this sample is relatively durable (>10 MPa) at room temperature and elastic (<0.1 MPa) at body temperature.
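A minimal sketch of that slope extraction (not from the paper; stress is assumed in MPa, strain as a dimensionless fraction, and the 2% strain fitting window is an illustrative choice):

```python
import numpy as np

def youngs_modulus(strain: np.ndarray, stress_mpa: np.ndarray,
                   max_strain: float = 0.02) -> float:
    """Young's modulus (MPa) from the initial linear part of a tensile curve."""
    mask = strain <= max_strain
    slope, _ = np.polyfit(strain[mask], stress_mpa[mask], 1)  # linear fit
    return slope

# Synthetic example: an elastomer with a 0.37 MPa modulus plus slight noise.
strain = np.linspace(0, 0.05, 100)
stress = 0.37 * strain + np.random.normal(0, 1e-4, strain.size)
print(f"E = {youngs_modulus(strain, stress) * 1000:.0f} kPa")  # ~370 kPa
```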
Fabrication and characterization of microchannel structures in the scaffolds
It has been widely accepted that not only biocompatibility, biodegradability and mechanical properties but also the three-dimensional architectures such as porosity and pore interconnectivity in the scaffolds are important for tissue engineering applications. In particular, scaffolds with structural features similar to neural structures can be more effective in the reconstruction process. Valmikinathan et al [46] reported that tubular structures enhanced the Schwann cell attachment and proliferation compared with non-tubular structures. Many other reports have also suggested that in a scaffold with microtubular channel architecture, regenerating axons can extend more efficiently through open longitudinal than randomly oriented channels [47][48][49][50]. Those oriented channels are usually created by inserting a needle or wire in a polymer scaffold. After the shape is stabilized, the needle or wire is removed to form the channels. Wang et al [51] developed a molding technique to produce multichannel scaffolds using acupuncture needles as mandrels. Huang et al [52] created longitudinal channels in chitosan scaffolds using nickel-copper wires. Another way to create longitudinal channels is to create a conduit from one polymer with longitudinally embedded fibers from another polymer, and then selectively dissolve the fibers to form longitudinal channels. Flynn et al [53] developed a method of creating longitudinal channels within hydrogels using PCL fibers as the template.
In this study, we fabricated multiple channels in P(CL-co-DLLA) scaffolds using a silica fiber templating technique, as shown in scheme 1. This fabrication technique is simple and reproducible. It allows dimensional control, because silica capillary fibers have a well-defined structure with a wide range of diameters and can be easily dissolved in an HF solution. Silica capillary fibers were inserted in a glass tube and the P(CL-co-DLLA) solution was injected. After crosslinking, the fibers were removed by dissolution, resulting in longitudinal fiber-free channels uniformly distributed in the scaffold. Figure 3(a) shows SEM images of microchannels produced in the 70/30 P(CL-co-DLLA) using silica templates. Average diameters of the individual channels were measured from the SEM images (figure 3(b)). Templates 8, 150, 350 and 660 µm in diameter created channels with diameters of 8.3 ± 0.6, 130.0 ± 18.0, 338.3 ± 7.6 and 605.0 ± 5.0 µm, respectively. Because the scaffolds shrank during drying and crystallization after the crosslinking reaction, the final channel diameters were slightly smaller than those of the templates. Identical channel shapes were observed in cross-sectional images at different positions (data not shown). These results reveal that the silica templating technique yielded well-defined, longitudinally oriented, fiber-free channels in P(CL-co-DLLA) scaffolds.
Next, the pliability of the crosslinked scaffold was evaluated. Bending tests were performed on cylindrical tubes with 350 µm inner channels to measure the deformation of the tubes in the bent regions. Optical micrographs of bent P(CL-co-DLLA) (70/30) and PLLA are shown in figure 4(a). The samples were bent to 90° at 37 °C and sliced at the bending point. Cross-sectional images revealed that the channel structure in the P(CL-co-DLLA) tube did not collapse, and that the shape and size were consistent before and after bending (figure 4(b), left). However, the channel in the PLLA tube was significantly deformed by bending the tube (figure 4(b), right). The deformation of the inner channel was also observed by a replica molding technique. PDMS prepolymer solution was injected into the channel and the tube was bent to 90° at 37 °C. After the PDMS was cured, the PDMS replica was removed from the channel. Figure 4(c) shows the cross-sectional images of the PDMS replicas at the bending point. The deformations of the replicas were almost identical to those of the corresponding channels observed in figure 4(b). The different pliability of P(CL-co-DLLA) and PLLA correlates with the elastic modulus of each sample. As shown in figure 2, the Young's modulus of the 70/30 polymer was several hundred times smaller than that of PLLA at 37 °C. Moreover, P(CL-co-DLLA) has a larger elastic deformation range, within which the object returns to its original shape, as rubber does. The 70/30 sample recovered its shape after elongation up to 700%, whereas PLLA broke at 50% strain, which is in the irreversible plastic deformation range for PLLA. These results clearly indicate that the crosslinked P(CL-co-DLLA) has great potential as a new class of pliable scaffold, particularly for regeneration of soft tissues.
The effect of channel diameter on the mechanical properties of the P(CL-co-DLLA) tube was also evaluated by tensile tests at 37 °C. To eliminate the effect of porosity, the total porosity of the scaffold was fixed at approximately 38% by varying the number of channels; for example, for templates with diameters of 660, 350 and 150 µm, 2, 7 and 39 channels were created in the scaffold, respectively (see the sketch after this paragraph). Figure 5 compares the Young's modulus and strain at break for P(CL-co-DLLA)s (70/30) containing channels of various diameters. At equal porosity, the Young's modulus decreased from 1.6 MPa to 300 kPa as the diameter of the individual channels increased from 150 to 660 µm. The 150 and 350 µm scaffolds, however, withstood up to 1 N mm⁻² of stress, whereas the 660 µm scaffold broke at 0.41 N mm⁻². These results may be attributed to the uniformity of the channel distributions, i.e. smaller channels were distributed more uniformly in the scaffold. In addition, larger pores may cause cracks at lower stress, leading to an unstable fracture. Wen et al [54] reported that the Young's modulus of porous magnesium scaffolds decreased with increasing pore size, where the porosity was approximately 45%. Miyoshi et al [55] also reported that the mechanical properties of porous aluminum were related not only to the porosity but also to the pore structure. These observations indicate that higher aspect ratios of the wall thickness against the channel edge length result in better energy absorption under applied stress. Therefore, the scaffold with 660 µm channels was the most brittle in this study. Most importantly, these results indicate that the mechanical properties of P(CL-co-DLLA) scaffolds can be controlled not only by T_m but also by the channel structure. From our results, the 350 µm scaffold is the most elastic yet reliable material among the samples tested in this study.
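The fixed-porosity comparison implies a simple relation between channel diameter and channel count: for a scaffold cross-section of diameter D and target porosity φ, the number of channels of diameter d is n ≈ φ(D/d)². A minimal sketch (treating the channels as ideal cylinders and taking the 1.5 mm scaffold diameter from the glass-tube mold described in the methods):

```python
def n_channels(scaffold_diam_um: float, channel_diam_um: float,
               porosity: float = 0.38) -> float:
    """Number of cylindrical channels giving the target porosity."""
    return porosity * (scaffold_diam_um / channel_diam_um) ** 2

for d in (660, 350, 150):
    print(f"{d} um -> {n_channels(1500, d):.1f} channels")
# -> roughly 2, 7 and 38-39, matching the channel counts reported above
```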
Effect of channel size on Schwann cells migration
Several studies reported that conduits with longitudinal channels demonstrated better regeneration across long peripheral nerve gaps than hollow conduits in vivo [56,57]. Since longitudinal channels approximate the microarchitecture of native peripheral nerves, the luminal surfaces can promote the adherence and migration of SCs, whose presence is known to play a key role in nerve regeneration. However, there have been few reports on the effect of channel size on SC migration in vitro, and the optimal pore size for SC migration is not fully understood. The optimal pore size usually depends on the cell type. Endothelial cells, for example, show favorable attachment to pores in the range of 20-80 µm, whereas osteoblasts require pores larger than 100 µm for bone formation [58,59]. A series of PCL copolymers showed good cell compatibility with contact angles in the range of 50-70° depending on the composition [33,34,40]. However, SCs hardly attach to surfaces without coating with poly-L-lysine or laminin. Accordingly, we performed an in vitro migration assay for SCs using poly-L-lysine-coated TCPS surfaces covered with microchannels, for a better understanding of the effect of pore size on SC migration. The migration assay was carried out in a two-dimensional system instead of a three-dimensional scaffold to eliminate the effect of curvature (figure 6(a)). The channels were fabricated in PDMS instead of P(CL-co-DLLA) because the transparency of PDMS enables direct observation of cell migration. A cell culture surface coated with poly-L-lysine was covered with PDMS containing gutters of various widths and heights. Figure 6(b) shows the effect of channel width on cell migration distance. The channels were 80 µm tall and 30, 70, 150 and 350 µm wide. More SCs migrated away from the edge in wider channels than in narrower ones. Likewise, the migration distance of SCs after 12 days was much longer in the channel with an internal width of 350 µm than in the 30 µm one. This was expected because the number of cells would increase with the channel cross-section. In other words, when the migration distance in each channel is normalized to the cross-sectional area, the migration speed may be independent of channel width [60]. Therefore, the effect of channel height on cell migration distance was also observed, using PDMS channels 350 µm wide and 80, 200 or 400 µm tall (figure 6(c)). Faster migration was observed in the taller channels at constant cross-sectional area. These results indicate that channels with larger diameters can provide not only a larger surface area for cell adherence, but also a more efficient supply of oxygen and nutrients, which is important to maintain cell viability.
From the reported results, we have developed a concept of scaffold design for peripheral nerve regeneration, as shown in figure 7. Although the surface area available for cell adhesion dramatically increases as the channel size decreases (solid blue line in figure 7), no significant migration of SCs was observed for the channel with a width of 30 µm and a height of 80 µm (see figure 6(b)). This is in part due to the decrease in volume of the individual channels, which is important for supplying oxygen and nutrients to the cells (solid red line). This trend agrees with the general notion that when cells adhere and grow under a limited supply of oxygen/nutrients, their viability should be reduced. In this study, we used PDMS because of its transparency and suitability for microfabrication. PDMS is also known as the rubbery polymer most permeable to oxygen. This can be explained by its chain flexibility, rotational mobility, large free volume and low glass transition temperature [61]. Because the crosslinked P(CL-co-DLLA) scaffolds exhibited characteristics similar to PDMS in terms of flexibility, crystallinity and hydrophobicity [35,39,62], we expect that the trend observed in the PDMS channels can be applied to the crosslinked P(CL-co-DLLA) scaffolds. We also demonstrated that the mechanical properties of the scaffold can be controlled by selective sizing of the channel dimensions. The scaffold with the larger channel diameter broke at lower stress (dashed black line). From our results, 350 µm is the optimal channel size, which approximates the microarchitecture of native peripheral nerves. Additionally, the mechanical tests revealed excellent elasticity and pliability of the crosslinked P(CL-co-DLLA) scaffolds at 37 °C, and the channel structure was maintained when the scaffolds were deformed. These results may help in designing scaffolds for peripheral nerve regeneration.
Conclusions
We prepared super-elastic biodegradable scaffolds with longitudinal microchannels by crosslinking end-functionalized four-armed P(CL-co-DLLA)s. The elastic modulus was controlled (<1 MPa) by adjusting T_m to the body temperature. Well-defined microchannels consistent with the mold design were produced in the scaffolds using a silica fiber templating technique. The produced microchannels did not collapse upon bending, owing to the elastic nature of the scaffolds. The effect of individual channel diameter on the mechanical properties was also evaluated. Scaffolds with wider channels (approximately 600 µm) broke at lower stress. On the other hand, SC migration was faster in wider channels because they provided more oxygen and nutrients to the cells. From these results, 350 µm is the optimal channel width, which also approximates the microarchitecture of native peripheral nerves. | 2017-09-01T03:16:52.373Z | 2012-11-23T00:00:00.000 | {
"year": 2012,
"sha1": "45c88eac1dc652ae7de320b4c5747fa6d4ce6437",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1088/1468-6996/13/6/064207",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "4ccadd0a355595f1b1e58d227cbe5815c3ae1c8f",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
49557031 | pes2o/s2orc | v3-fos-license | Dynamic Voltage Conditioner: A New Concept for Smart Low-Voltage Distribution Systems
Power Quality (PQ) improvement at the distribution level is an increasing concern in modern electrical power systems. One of the main problems in low voltage (LV) networks is keeping the load voltage close to its nominal value. Usually this problem is solved by smart distribution transformers, hybrid transformers and solid-state transformers, but dynamic voltage conditioners (DVC) can also be an innovative and cost-effective solution. The paper introduces a new control method for a single-phase DVC system able to compensate such long-duration voltage drifts. For these events it is mandatory to avoid active power exchange, so the controller is designed to work with nonactive power only. The operation limits for quadrature voltage injection control are formulated, and a reference voltage update procedure is proposed to guarantee continuous operation. The DVC performance under mains voltage and load variations is examined. The proposed solution is validated with a simulation study and experimental laboratory tests. Simulation and experimental results are illustrated to show the prototype device's performance.
I. INTRODUCTION
Power quality (PQ) in modern power systems is a highly demanding concern in both medium voltage (MV) and low voltage (LV) networks, for all industrial, commercial, and domestic users [1].
Different PQ problems have been reported and categorized [2]. Among them, rms voltage deviations, which may be caused by different phenomena in power systems, are most often reported as important issues. In particular, the recent fast development and high grid penetration of renewable energies have made it more difficult and challenging to comply with the rms voltage standards [3]. Indeed, European standards require distribution system operators (DSOs) to provide LV users with voltage within the standard range [4]; so, voltage regulation within the standard range is becoming more and more important for DSOs, especially within future modern and smart grid systems [5]. Therefore, it is economically and technically rational for DSOs to investigate flexible and advanced PQ conditioners for LV distribution networks.
Nowadays, in order to deal with long-duration voltage drifts, DSOs can adopt smart distribution transformers, hybrid transformers and solid-state transformers, which are among a strongly developing group of voltage conditioners [6], [7], while ac electronic compensator devices, such as the static var compensator (STATCOM) [8], the dynamic voltage restorer (DVR) [9], the unified power quality conditioner (UPQC) [10] and the open unified power quality conditioner (Open UPQC) [11], are rarely adopted because they are too expensive or have limited functionality. So an economic active voltage conditioner that fully covers long-term events and is able to operate continuously, regulating the voltage at the PCC and protecting a group of downstream end users from several voltage disturbances, can be very interesting for a DSO.
A device that can perfectly satisfy modern power system requirements is the Dynamic Voltage Conditioner (DVC). Indeed, its mission is regulation and not only restoration: it can operate against short- and fast-term voltage disturbances, as a typical DVR does [12], and it can also operate against long-term voltage disturbances in the range of ±10% of the nominal value.
Several topologies and control algorithms for DVRs have been presented and analyzed in the literature [13], [14], such as the topology proposed by Babaei et al. in [15]. In-phase voltage injection (with active power) is the easiest method to compensate voltage variations, but it needs a large storage system even for short-term events [16] and it cannot be applied to the compensation of long-duration voltage drifts. To reduce the energy storage, energy-optimized control methods have been introduced [17]. However, to compensate longer voltage variations, such as voltage variations within ±10% of the nominal value and voltage fluctuations in LV networks, quadrature (with respect to the line current) voltage injection is needed (with nonactive power).
In the literature, the proposed control methods are always for three-phase systems and are based on three-phase measurements, synchronous rotating frame (SRF) calculations and d-q transformation, as reported in [12] and [18].
However, most domestic, commercial, and some industrial loads are single-phase; so, at the LV distribution level and with mostly single-phase users such as domestic ones, a cost-effective and practical PQ solution can be a single-phase device without an expensive storage system. In practice, this configuration gives the DSO the possibility to move all problematic loads to the specific phase of the feeder equipped with the single-phase device, reducing installation costs [19]. The single-phase configuration has attracted immense interest recently, because it is more convenient for mostly single-phase final users and because it is always possible to build a three-phase system by linking three single-phase units together.
Therefore, an active voltage conditioner able to cover long-, short- and fast-term events can perfectly satisfy modern power system requirements, such as the DVC proposed by the authors in their previous work through a simulation study [20].
This paper presents a single-phase DVC as an economic voltage controller for LV distribution smart grid systems, which makes the proposal an appreciated solution for DSOs [21]. Coupling transformer and device sizing are evaluated. A new single-phase, fast-calculation-based controller for the DVC system is proposed, and the device operation principle based on the nonactive compensation method is presented. For the first time in the literature, the single-phase DVC operation limits of the quadrature voltage injection method are formulated. To guarantee device stability, also for the first time in the literature, a new PCC reference voltage update procedure is suggested in order to update V_PCC ref if the system goes outside its operation limits. Even if the device has the capability to support short- and fast-term voltage variations for a few cycles with active power coming from the DC bus capacitor storage system, only the DVC long-term operation limits are analyzed in detail here; sag/swell events are not covered by this work, because several solutions have already been presented, as in [12] and [13], and they can be implemented and added to the DVC functions. MATLAB-based simulation and laboratory experimental results are reported to validate the presented solution.
The rest of the paper is organized as follows: Section II explains the operation principle and limits. The proposed DVC controller and its inverter controller are presented in Section III. Simulation and laboratory-based experimental results in the grid voltage range from 0.9 to 1.1 p.u. are reported in Section IV, and finally discussions and conclusions are pointed out in Section V.
II. DVC OPERATION PRINCIPLE
The hardware configuration of the DVC is exactly the same as that of a DVR; only its control logic is updated by this article, to add several important new functionalities and to enable continuous operation within a smart grid system, in particular the compensation of long-duration voltage drifts.
A. DVC-Injected Control Voltage
The hardware configuration of the proposed single-phase DVC is shown in Fig. 1. The system consists of a full-bridge converter with a capacitor bank as the DC bus. The converter is connected in series to the line by means of a coupling transformer. The system is equipped with a bypass switch in order to bypass the device in case of any fault of the DVC, and also to protect the DVC inverter and other components against possible damage originating from the LV network side. In Fig. 1, V_s stands for the grid voltage. The DVC is meant to keep the PCC voltage (V_PCC) at the set value (V_PCC ref) by injecting a proper voltage (V_x) in series to the line. So, at any instant, the KVL relation (1) needs to be satisfied:

$$\bar{V}_{PCC} = \bar{V}_s + \bar{V}_x \qquad (1)$$

For the proposed DVC, V_x has to be perpendicular to the line current I_L. So, the device can work with nonactive power only, without absorbing active power from the grid. Fig. 2 shows the system working principle in steady-state conditions with an inductive load, for both the under voltage (V_s2) and over voltage (V_s1) events.
From Fig. 2, using trigonometric relations in the right triangle OAB, the injected voltage magnitude can be calculated as (2):

$$V_{xi} = V_{PCC}\sin\gamma - V_{si}\sin\theta_i \qquad (2)$$

In (2), i can be either 1 or 2 depending on the compensation state, γ is the phase difference between the PCC voltage and the line current, θ_i is the phase difference between the network voltage and the line current, and V_xi is the calculated injected voltage magnitude; since V_x is perpendicular to I_L, the in-phase components coincide, i.e. V_si cos θ_i = V_PCC cos γ. As can be noticed from Fig. 2 and (2), the formula gives negative and positive values for V_xi in the different compensation scenarios. Equation (2) gives the magnitude of the DVC-injected voltage. This value should be injected perpendicularly to the line current; so, the line current phase angle should be shifted by 90° to get the right injected voltage phase angle.
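A minimal numerical sketch of this calculation (not from the paper): phasors are represented as complex numbers with the line current on the real axis, and V_si sin θ_i is expanded as √(V_si² − V_PCC² cos²γ) using the in-phase equality above.

```python
import cmath, math

def injected_voltage(v_s: float, v_pcc_ref: float, gamma_rad: float) -> float:
    """Quadrature injection magnitude V_x per (2); negative for over voltage."""
    radicand = v_s**2 - (v_pcc_ref * math.cos(gamma_rad))**2
    if radicand < 0:
        raise ValueError("V_PCC ref not reachable with nonactive power only")
    return v_pcc_ref * math.sin(gamma_rad) - math.sqrt(radicand)

# Example: 230 V reference, 0.8 power factor (inductive), 10% under voltage.
gamma = math.acos(0.8)
v_x = injected_voltage(0.9 * 230, 230, gamma)
print(f"V_x = {v_x:+.1f} V")                 # ~ +43.2 V (boost)
v_pcc = 230 * cmath.exp(1j * gamma)          # PCC phasor, I_L on the real axis
print(abs(v_pcc - 1j * v_x))                 # ~ 207 V = 0.9 * 230, checks (1)
```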
B. Operation Limits
The proposed DVC, depending on the device nominal voltage (|V_x,max|), the load power angle (γ), and the requested reference voltage, has specific operation limits, analyzed below for the two compensation cases.

1) Case 1, Over Voltage: From Fig. 3, the triangle OAC or OAC′ can be used to evaluate the upper limit in the two different load conditions. The maximum value of over voltage that can be compensated with nonactive power only is proportional to the segment OC or OC′. Equation (3) represents the maximum grid voltage that can be compensated with nonactive power only while obtaining the nominal V_PCC voltage, for the first load condition:

$$V_{s,max} = \sqrt{V_{PCC}^2\cos^2\gamma + \left(V_{PCC}\sin\gamma + V_{x,max}\right)^2} \qquad (3)$$

According to (3), the over voltage compensation range characteristics can be plotted versus the inverter rating voltage (V_x,max) and the variation of γ, as shown in Fig. 4.
It can be noted that increasing either V_x,max or γ increases the over voltage compensation range. V_x,max is a design parameter: it is fixed during the design procedure and cannot be changed. Instead, γ depends on the load and varies continuously during operation. This characteristic function is useful during the hardware design procedure to set the inverter rated voltage, and also to design the control method. For the first purpose, by analyzing the feeder over voltage history and the load power factor profile, and knowing the over voltage compensation range of interest, it is possible to use these data to properly design the DVC inverter hardware. For the second purpose, the limits are used inside the control method in order to guarantee safe and continuous operation of the device, as described in the following. During operation, if V_s is more than the maximum value, the DVC can inject active power for a few periods only; after that, if the voltage variation is a long event, |V_PCC ref| has to be updated to the limit value achievable by the DVC, obtained by solving (3) with respect to V_PCC:

$$V_{PCC}^{ref} = -V_{x,max}\sin\gamma + \sqrt{V_s^2 - V_{x,max}^2\cos^2\gamma} \qquad (4)$$
During operation, if V s is more than maximum value, the DVC can injects active power for few periods only so after that, 2) Case 2, Under Voltage: In this case, DVC has two possible limits and it is necessary to evaluate both of those and finally decide on the right one. a) Limit Due to V x,max : From Fig. 3, the inverter nominal voltage (V x,max ), imposes limits on under voltage compensation range that can be evaluated by the triangle OAD or OAD . The minimum value of under voltage that can be compensated with nonactive power only, is proportional to the segment OD or OD . Equation (5) represents the minimum grid voltage due to V x,max that can be compensated with nonactive power for the first load condition Fig. 3, the load power angle (γ), imposes limits on compensation ranges that can be evaluated by the triangle OAB or OAB . The minimum value of under voltage that can be compensated with nonactive power only, is proportional to the segment OB or OB . Equation (6) represents the minimum grid voltage due to γ that can be compensated with nonactive power for the first load condition Depends on the system condition, either (5) or (6) has to be considered as control limit for Case 2, so if V x,max > V PCC · sin(γ) the V s,min has to be evaluated by (6) else by (5). When V x,max = V PCC · sin(γ), (5) and (6) give the same results.
As in the over voltage case, considering (5) and (6), the characteristics of the under voltage compensation range can be plotted with respect to the DVC inverter rating voltage and the load power angle, as shown in Fig. 5.
This information is used during the hardware design as well as the control method design procedure.
During operation, if V_s is less than the minimum value, the DVC can inject active power for a few periods only; after that, if the voltage variation is a long event, |V_PCC ref| has to be updated to the limit value that can be achieved by the DVC. To find the new reference value, the reverse calculation of (5) or (6) has to be performed with respect to V_PCC ref. Therefore, there are two different limits and it is necessary to evaluate the one corresponding to the actual condition.
a-update) Limit due to V_x,max: If the limit is due to V_x,max (case a), solving (5) with respect to V_PCC ref leads to a second-order quadratic equation with two possible solutions. Again in this case the negative value should be ignored, and the positive one, (7), is the sought solution:

$$V_{PCC}^{ref} = V_{x,max}\sin\gamma + \sqrt{V_s^2 - V_{x,max}^2\cos^2\gamma} \qquad (7)$$

b-update) Limit due to the power angle (γ): In this case, the new achievable reference V_PCC ref can be found as

$$V_{PCC}^{ref} = \frac{V_s}{\cos\gamma} \qquad (8)$$

In this condition, when the minimum limit is reached, it is necessary to evaluate whether V_x,max > V_PCC · sin(γ): if so, V_PCC ref should be updated by (8), otherwise by (7). When V_x,max = V_PCC · sin(γ), (7) and (8) give the same result.
In order to guarantee intrinsically stable and continuous operation of the DVC, the V_PCC ref update procedure has to be implemented inside the control system.
C. V_PCC ref Update
Considering these limits, the complete flowchart to update V_PCC ref for the V_x evaluation is shown in Fig. 6. The flowchart in Fig. 6 is implemented following the same reasoning used to derive the limits. First, the measured V_s is compared with V_PCC in order to determine which case (Case 1 or Case 2) the system is in; then, by checking the corresponding equation, it is evaluated whether the system has exceeded its limits or not. If the system exceeds the corresponding limit, V_PCC ref needs to be updated; otherwise, the system is inside the limits and can continue operating with the nominal or set V_PCC ref. The only special point is when the system is in Case 2, the under voltage condition, where it is important to understand which limit should be considered, the limit due to V_x,max or the limit due to γ. To do so, V_x,max is compared with the instantaneous value of V_PCC · sin(γ) in order to choose the right limit (due to V_x,max or due to γ). If necessary, the V_PCC ref value is updated and the new value is used in (2) to find the V_x magnitude for the v_x(t) calculation, as reported in the next section; a sketch of this decision logic is given below.
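A minimal Python sketch of this decision logic (not from the paper; it simply chains the limit checks and update formulas (3)-(8) derived above):

```python
import math

def update_v_pcc_ref(v_s: float, v_pcc_ref: float, gamma: float,
                     v_x_max: float) -> float:
    """Return the achievable V_PCC reference per the Fig. 6 flowchart."""
    cg, sg = math.cos(gamma), math.sin(gamma)
    if v_s > v_pcc_ref:                                   # Case 1: over voltage
        v_s_max = math.hypot(v_pcc_ref * cg, v_pcc_ref * sg + v_x_max)      # (3)
        if v_s > v_s_max:                                 # limit exceeded
            return -v_x_max * sg + math.sqrt(v_s**2 - (v_x_max * cg)**2)    # (4)
    else:                                                 # Case 2: under voltage
        if v_x_max > v_pcc_ref * sg:                      # limit due to gamma
            if v_s < v_pcc_ref * cg:                      # (6) violated
                return v_s / cg                                             # (8)
        else:                                             # limit due to V_x,max
            v_s_min = math.hypot(v_pcc_ref * cg, v_pcc_ref * sg - v_x_max)  # (5)
            if v_s < v_s_min:
                return v_x_max * sg + math.sqrt(v_s**2 - (v_x_max * cg)**2) # (7)
    return v_pcc_ref                                      # inside the limits

# Deep sag beyond the nonactive-power limit: the reference is lowered.
print(update_v_pcc_ref(v_s=180, v_pcc_ref=230,
                       gamma=math.acos(0.8), v_x_max=60))  # ~209.5 V
```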
The situations shown by dashed lines in Fig. 6 are the conditions in which the system is inside its compensation range and V_PCC ref does not need to be updated. So, depending on the system operating condition, either the original V_PCC ref or the updated one is used as V_PCC ref to generate v_x(t) for the inverter voltage controller. Implementing this strategy, the system can continue to operate even if the load power angle and grid voltage are not enough to guarantee the set V_PCC ref: instead, the reference value is updated, the DVC does its best, and the system avoids any instability or failure.

III. PROPOSED DVC CONTROLLER

The DVC inverter is driven by a double loop controller with outer voltage control and inner current loop [17]. The double loop control method has been found more suitable for the proposed DVC, due to its capability to manage wide load current variations and avoid current distortion during short-, fast- and long-term voltage variations [22]. The proposed controller needs an instantaneous voltage reference to be followed by the DVC inverter. Therefore, a sinusoidal reference signal needs to be generated for the inverter voltage controller. In order to generate this reference signal, it is important to take into account the system limits, as explained in the Fig. 6 flowchart. With the output of Fig. 6 (V_PCC ref), it is possible to evaluate V_x cal by (2). To obtain better control of the PCC voltage, an extra PI controller (PI(V_PCC)) is also used, and its output is added to the value calculated by (2), as shown in Fig. 7. The result is taken as V_x to generate the reference voltage, as reported in (9):

$$v_x(t)^* = V_x \sin\left(\omega_{I_L} t \pm \frac{\pi}{2}\right) \qquad (9)$$

The injected voltage has to be perpendicular to the line current, whose angular frequency (ω_IL) is extracted from the I_L measurement using a PLL system. The evaluated I_L phase needs to be shifted by 90°, which is implemented by adding ±π/2 to the I_L phase angle. The ±π/2 depends on the nature of the load (if inductive, π/2 is subtracted from the line current phase, while if the load is capacitive, π/2 is added); so the positive and negative signs in (9) are for capacitive and inductive loads, respectively.
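A minimal sketch of the reference generation in (9) (not from the paper; the PLL is abstracted as a known line-current phase angle, and the extra PI(V_PCC) trimming term is omitted):

```python
import math

def v_x_reference(v_x_mag: float, theta_i_l: float, inductive: bool) -> float:
    """Instantaneous v_x(t)* per (9): V_x shifted +/-90 deg from the current."""
    shift = -math.pi / 2 if inductive else math.pi / 2
    return v_x_mag * math.sin(theta_i_l + shift)

# One 50 Hz cycle sampled at 10 kHz for an inductive load, V_x = 43 V.
f, fs = 50.0, 10_000.0
samples = [v_x_reference(43.0, 2 * math.pi * f * n / fs, inductive=True)
           for n in range(int(fs / f))]
print(min(samples), max(samples))  # ~ -43 ... +43 V quadrature reference
```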
During operation, the DVC has some losses that need to be compensated. In real working conditions, this can be obtained by shifting the voltage injection phase by an angle δ. In this way, the DC bus voltage can be kept roughly constant around its set value. It should be noted that the DC bus voltage control capability of the DVC is a function of the load current. This controller is highlighted by the dashed line in Fig. 7. The DC bus PI controller is designed to be much slower than the main controllers, in order to avoid any effect on the mains voltage compensation performance.
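A minimal sketch of such a slow DC-bus loop is given below. The gains, update rate, and the sign convention for δ are illustrative placeholders (not the Table I values); the idea is only that a small active-power component, obtained by tilting the injection phase, covers the converter losses.

```python
class SlowPI:
    """Discrete PI controller (illustrative gains, not the paper's)."""
    def __init__(self, kp, ki, ts):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.integ = 0.0

    def step(self, error):
        self.integ += self.ki * self.ts * error
        return self.kp * error + self.integ

vdc_set = 600.0                               # DC bus set point [V], from the text
dc_pi = SlowPI(kp=1e-4, ki=1e-3, ts=0.02)     # deliberately slow: one update per cycle

def delta_correction(vdc_measured):
    # Bus below the set point -> positive error -> tilt the injection phase
    # by delta so the DVC absorbs a little active power from the line
    # (sign convention assumed for illustration).
    return dc_pi.step(vdc_set - vdc_measured)
```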
When V_PCC_ref is inside the standard limits, or it can be obtained with nonactive power injection, the DVC control strategy can use the generated voltage [v_x(t)*] as the reference voltage for the inverter; otherwise, any other DVR control strategy can be used [12]-[17]. For the proposed DVC control system, the obtained reference voltage [v_x(t)*] is fed to the voltage source inverter controller, as shown in Fig. 8. The inverter controller is meant to reproduce the reference voltage v_x(t)* at the inverter ac-side terminal. The inverter should operate as a voltage generator, and several works have suggested using a PWM-based voltage controller. However, since the DVC is a series-connected device like a typical DVR, as analyzed in [22], during transients the system can disturb the line current. In a distribution system, the load can change significantly and in an uncontrolled manner, and this can worsen the DVC operation. Thus, a double loop controller with an outer voltage controller and an inner current controller has been recommended for such systems [22]. Therefore, a double loop controller has been used as the inverter controller, where the outer voltage controller is a PI controller, while the inner current controller is a model-based current controller (MBC), adopted as explained in [23]. The inverter double loop controller is illustrated in Fig. 8. Three different PI controllers are used inside the control system. Since the system model is quite complex and nonlinear, the Ziegler-Nichols method has been used to design the PI controllers [24]. The designed gain values of the three PI controllers are reported in Table I.
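For reference, the classic closed-loop Ziegler-Nichols rule for a PI controller cited here works as follows. The ultimate gain K_u and oscillation period T_u are found experimentally by raising a proportional-only gain until sustained oscillation; the values below are only illustrative, not the Table I gains.

```python
def zn_pi(k_u, t_u):
    """Classic closed-loop Ziegler-Nichols tuning for a PI controller."""
    k_p = 0.45 * k_u      # standard Z-N PI rule
    t_i = t_u / 1.2       # integral time
    k_i = k_p / t_i
    return k_p, k_i

# Example with placeholder K_u = 10 and T_u = 20 ms:
print(zn_pi(k_u=10.0, t_u=0.02))   # -> (4.5, 270.0)
```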
A. System Design Consideration
In [25], several design aspects of a DVR are evaluated, so that paper can be used as a good reference to design the DVC series-connected device. Before following [25], two important correlated aspects need to be defined to correctly size the DVC: the first is the presence of a fast protection system (such as a fast static switch bypass system or a fault current limiting system [26], [27]), and the second is the coupling transformer turn ratio. In order to illustrate the working principle of the DVC only, in the current work the system is designed according to the full power of the line (50 kVA), so the fast protection system is not included and, to limit oversizing of the inverter IGBT switches (400 A/1200 V), the coupling transformer is designed with a turn ratio k equal to 1.5 (a detailed analysis of these systems' functionalities and the choice of the turn ratio will be covered in future publications). Once the coupling transformer ratio k is selected, it is straightforward to design the inverter DC bus voltage, its switching inductance (1 mH/300 A), and the low-pass filters (100 μF/240 V) for the inverter and line side, according to [25] and [28].
In particular, the inverter DC bus capacitance is designed according to the energy required to support the full load, in case of fast-term voltage variations (sag/swell events), for about 20 cycles with a maximum DC voltage variation of about 200 V. With these considerations, the DC bus is designed with a 74.8 mF/1000 V capacitor bank and is controlled at about 600 V; however, during fast-term voltage variations (sag/swell) it is allowed to drop to a minimum of 400 V or rise to 800 V.
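A quick consistency check of these figures, assuming a 50 Hz grid (so 20 cycles is 0.4 s): the usable energy of the bank between the 600 V set point and the 400 V sag floor is about 7.5 kJ, corresponding to roughly 18.7 kW of average injected power over the ride-through window.

```python
# Energy check for the DC bus sizing quoted above (50 Hz grid assumed).
C, v1, v2 = 74.8e-3, 600.0, 400.0
delta_e = 0.5 * C * (v1**2 - v2**2)       # usable stored energy [J]
ride_through = 20 / 50.0                   # 20 cycles at 50 Hz = 0.4 s
print(delta_e, delta_e / ride_through)     # ~7480 J, ~18.7 kW average
```

Note that, as a series device, the DVC only injects the missing portion of the voltage during a sag, so the sustainable injected power, not the full 50 kVA line rating, is the relevant figure here.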
It is worth mentioning that the proposed DVC, when it works inside its operation limits, is able to regulate the voltage at the PCC to its nominal/set value using mainly nonactive power only; but this does not mean that the device is useless outside its operation limits. Actually, outside its operation limits it is able to deal with short- and fast-term voltage variations as a same-sized DVR would.
The results presented in this work are focused on long-term voltage drift rather than short- and fast-term voltage variations. Indeed, voltage compensation in the range from 0.9 to 1.1 p.u. is very important, and often not considered in the literature, mainly for DSOs (they can manage their network power profile [29] and improve the voltage profile when the network voltage touches the minimum standard limit [19], [30]).
The DVC configuration shown in Fig. 1 is simulated in MATLAB, and a laboratory prototype has been realized. The simulation and experimental prototype parameters are reported in Table II. The MATLAB-based simulation is performed with a discrete fixed-step solver, and in the experimental prototype a Texas Instruments TMS320F28335 microcontroller is used to implement the control logic. The realized DVC experimental setup is shown in Fig. 9. The device is the series unit of an Open UPQC [19]. In order to verify the proposed device's performance, several scenarios are reported in the following.
B. Operation Limits and V_ref Update
Section II has shown that the over-voltage compensation limit is mostly a function of the DVC inverter voltage rating, while the under-voltage compensation limit is mostly a function of the load power factor. Considering that under voltage is more probable than over voltage, the under-voltage event is reported here, as the most relevant case, for the V_PCC_ref update. In case of an under-voltage event, if V_x,max is close to the nominal network voltage and the load power factor is low, (6) is always valid to find the minimum under-voltage compensation limit V_s,min. But if the grid side voltage V_s and/or the load power factor cos(γ) decrease, the DVC is no longer able to compensate the voltage drop with nonactive power only. So, the reference value has to be updated to the maximum achievable PCC reference value V_PCC_ref, which can be found from (8), in order to avoid DVC instability. To show the V_PCC_ref updating procedure of the DVC, the following simulation is reported.
The simulation is carried out with a constant under voltage at the grid side and a load power factor equal to 0.9 until t = 2 s (P = 8000 W and Q = 3875 Var); in this context, the update procedure has to act immediately. Considering an inverter maximum voltage V_x,max equal to 200 V, V_x,max is greater than V_PCC · sin(γ), so (6), rather than (5), is valid to find the under-voltage operation limit. With cos(γ) = 0.9 and using (6), the minimum grid side voltage is equal to 207 V; so, until t = 2 s, the DVC is able to compensate a 10% voltage drop with the nonactive power compensation method, as shown in Fig. 10, where V_s is equal to 207 V and V_PCC_ref = 230 V.
From t = 2 s until the end of the simulation, the load is given a reactive power step change (the load becomes P = 8000 W and Q = 2630 Var), so the load power factor increases to 0.95 and a new V_s,min has to be calculated from (6): replacing cos(γ) = 0.95, it becomes 218.5 V, as shown in Fig. 10(a). With these new values, the DVC is not able to compensate a 10% voltage drop with nonactive power only, but only 5.5%. So, the reference value is updated to the maximum achievable PCC reference value, which can be found from (8): the new reference value is V_PCC_ref = 217 V, as presented in Fig. 10(b). Fig. 10(c) shows the load-side power factor variation.
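These values can be checked with the limit forms inferred earlier (V_s,min = V_PCC_ref · cos γ, and the updated reference V_s / cos γ); the snippet below reproduces the reported numbers.

```python
# Reproducing the Section B numbers with the inferred limit equations.
v_pcc_ref, v_s = 230.0, 207.0
for pf in (0.90, 0.95):
    print(pf, v_pcc_ref * pf, v_s / pf)
# pf = 0.90 -> V_s,min = 207.0 V: the 10% drop is exactly compensable.
# pf = 0.95 -> V_s,min = 218.5 V; updated V_PCC_ref ~ 217.9 V
#              (the paper reports 217 V).
```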
C. Load Variation
Several load variation simulation results are addressed in [20]; here, experimental results are reported to verify device operation under load variation. The test under investigation is an over-voltage test in which the grid side voltage is constant and equal to 220 V and the PCC voltage reference is set to 215 V (to make sure that the DVC operates inside its working limits during the test). So, as before, the voltage control has to be instantaneous because the device works inside its limits. Fig. 11 shows the recorded DVC response when adding a resistive 1400 W load to an initial load of about 1200 VA (P = 800 W and Q = 850 Var), and when removing it again. Fig. 11(a)-(c) show the DVC response to adding the 1400 W load at about t = 0.085 s. Fig. 11(a) shows the PCC voltage, which experiences a very small and negligible disturbance. Fig. 11(b) shows the DVC-injected voltage which, due to the small difference between V_s and the PCC reference voltage throughout the experiment, has a small magnitude (around 10 V rms), even when the added load causes a phase and magnitude change in it. Fig. 11(c) shows the load current, which undergoes a step change after the load is added without any over- or undershoot, thanks to the implemented double-loop controller and especially the MBC as the DVC current controller. Fig. 11(d)-(f) show the DVC response to removing the 1400 W load at about t = 0.07 s; the DVC thus starts from a constant load of P = 2200 W and Q = 850 Var. Fig. 11(d) shows the V_PCC profile during this transient. As can be noticed, removing the load has almost no effect on the PCC voltage profile. The DVC-injected voltage V_x sees a small phase and magnitude variation, as shown in Fig. 11(e). As in the previous case, the injected voltage during the experiment is about 10 V rms, because the grid side voltage and the PCC set value are close to each other. Finally, the load current variation is illustrated in Fig. 11(f): as expected, it experiences a phase shift during the transient but, again thanks to the adopted controller, there are no rapid changes in its magnitude.
D. Over and Under Voltage Compensation Performance
Here, experimental results are reported to verify the device's voltage compensation in the range from 0.9 to 1.1 p.u. The voltage variations are created by means of a Variac, with the DVC device connected at its output. By moving the Variac output, it is possible to create about a 10% voltage variation. The measured grid voltage during the test was about 218 V; so, in the over-voltage case, the Variac output voltage and V_PCC_ref were set to 90% of the input value, while in the under-voltage case, the Variac output voltage and V_PCC_ref were set to 100% of the input value. Fig. 12 shows the experimental results in the over-voltage case. The device was working in steady state and, at about t = 0.22 s, the Variac output was increased to 100% over 6-7 periods. As can be noticed from Fig. 12(a), which shows the instantaneous voltages V_PCC and V_s, the voltages are equal before the event; once the over voltage is triggered, the DVC keeps the PCC voltage constant by the nonactive power compensation strategy. Due to the adopted strategy, there is only a small phase displacement between V_PCC and V_s when the V_x magnitude changes; however, it is not possible to detect this effect in Fig. 12(a). Fig. 12(b) depicts the V_x instantaneous voltage: when the event starts, its magnitude and phase change to compensate the over voltage. Fig. 12(c) shows the V_PCC and V_s rms values before and during the over-voltage event. During the event, the DVC keeps the voltage at the PCC unaffected by means of the injected voltage, which is shown in Fig. 12(d).
The results in Fig. 12(c) show that, before the over-voltage event, the DVC injects about 3 V, which serves to compensate the DVC losses and keep its DC bus voltage constant at the set value. During the over-voltage event, the DVC injects about 25 V to compensate the event. It is worth mentioning that, when the over-voltage event starts, the inverter DC bus voltage increases slightly; once the DC bus voltage controller starts to regulate, it slowly restores the DC bus voltage to the set value. In the experiment, it took about 10 s to restore the DC bus voltage to its set value. Fig. 13 shows the experimental results in the under-voltage case. The device was working in steady state and, at about t = 0.2 s, the Variac output was decreased to 90% over 6-7 periods. Fig. 13(a) shows the instantaneous voltages V_PCC and V_s. Before t = 0.2 s, V_PCC and V_s matched each other completely; when the voltage drop is triggered, the controller is able to restore V_PCC, so that, apart from a minor variation, the PCC voltage is kept free of any distortion. As in the previous case, a small phase displacement between V_PCC and V_s is present, even if it is not possible to detect this effect in Fig. 13(a). Fig. 13(b) illustrates the V_x instantaneous voltage: when the event begins, its magnitude and phase change to compensate the under voltage. Fig. 13(c) and (d) show the experimentally recorded V_PCC, V_s, and V_x rms voltage values for this event. Similar to the previous case, when the PCC and grid side voltages are equal, the DVC injects about 3 V in order to compensate the system losses and keep the DC bus voltage regulated; after the under-voltage event, the DVC injects about 30 V to compensate the voltage drop at the grid side and regulate the voltage at the PCC by means of the nonactive power compensation strategy. When the event happens, the inverter DC bus voltage changes; but, contrary to the previous case, the DC bus voltage decreases slightly during the voltage drop. Once the DC bus voltage controller starts to regulate, it slowly restores the DC bus voltage to the set value; it takes about 10 s to reach the set value.
V. DISCUSSIONS AND CONCLUSIONS
A new device concept, which goes beyond typical DVR functionalities, has been presented. The proposed device, named DVC, is an active voltage conditioner able to cover both short- and fast-term events, as a typical DVR, and long-term events (in the grid voltage range from 0.9 to 1.1 p.u.). So it can fully satisfy modern power system DSO requirements. In particular, the paper presents only the control strategy adopted during steady-state conditions (long-term events) for a single-phase DVC. Indeed, the steady-state condition is not reported in the literature, and the single-phase configuration seems to be the most economic solution for smart grid LV distribution systems. The device controller, introduced here for the first time, has been designed to operate with nonactive power during steady-state conditions. To guarantee continuous DVC operation, the paper describes a control method to generate the DVC reference voltage considering its limits. Moreover, the single-phase design decreases the device's initial cost and is also more compatible with LV distribution and its mostly single-phase domestic loads.
The designed control method is verified by MATLAB-based simulation and a laboratory experimental testbed. The results show that the device has good performance and can effectively improve the PQ level of the distribution Smart Grid network where it is installed (mainly in the grid voltage range from 0.9 to 1.1 p.u.). This is essential for today's modern networks because the proposed DVC gives the system operator the flexibility to move all problematic single-phase loads onto a specific phase (where the DVC is installed).
Even if the paper analyzed a single-phase system, all the theoretical analysis of the device limits can be extended to three-phase systems, and this will be addressed in future works. It should be noted that this solution, since it injects the compensation voltage in quadrature with the line current, creates a phase shift on the installed phase voltage, and this can impose voltage unbalance issues on supplied three-phase loads. Therefore, this device can be used effectively in LV distribution networks with single-phase loads only.
"year": 2018,
"sha1": "cd8062cb00bb13009caedcf522e87a3757982a69",
"oa_license": "CCBY",
"oa_url": "https://re.public.polimi.it/bitstream/11311/1040083/2/11311-1040083_Faranda.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "fa46e434c86a6c460d9b648813df3b6a1b53fad6",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Trans-Himalaya Connectivity: Comprehensive Approach to Resolve China-India Political Conundrum
China and India are both ancient civilizations. Even before the establishment of formal diplomatic relations in 1950, the two countries had a long history of educational, cultural-spiritual, and trade exchanges. But from the mid-1950s, the two countries did not enjoy good bilateral relations, a deterioration that culminated in the war of 1962. Rapprochement began again in the mid-1980s. The border issue, ideological-political models, and approaches to global orientation are issues on which the two countries hold different views, and these sometimes produce severe problems in bilateral relations. The June 15 incident in Ladakh created a severe confrontation and a new regional geopolitical imbalance. The five-point consensus reached in Moscow helped to reduce the prevailing tension. This study finds that only regular cooperation and open dialogue can provide a fruitful platform to create a trustful environment and search for a peaceful settlement of the dispute. Harmony between these two countries will support the peace and prosperity of the Trans-Himalaya region. This research is qualitative and relies on secondary data. It takes a comprehensive approach as a theoretical framework to analyze the issues. The researcher has taken care to test the validity of the data and its analysis. As per the research requirements, this paper is exploratory in format.
Strong trade and finance links, which will soon cross 100 billion dollars, have been proof of growing interdependence and cooperation. Both sides have undoubtedly done profound work to reduce tension and continue relations peacefully (Zhao 2000). Thoughtful political engagement and reality-based dealings grounded in each side's own national interest have created space for tangible cooperation. Interestingly, for some years security and military cooperation, exchanges, and joint exercises have also been taking place, which is said to further enhance cooperation and mitigate misunderstanding. No doubt China and India have distinct political systems and practices. Through mechanisms like BRICS (a community of Brazil, Russia, India, China, and South Africa), both countries have sent a strong message that they have many issues on which to work together and can respect each other's political systems, practices, and models. India's soft-balancing strategy could work nicely in dealing with China if pragmatically implemented (Paul 2018).
The war of 1962, India's open support of the Dalai Lama particularly since 1959, anti-China activities using Indian soil, and the border dispute are the major factors creating hurdles to open-hearted China-India relations. Certainly, both countries are trying hard to mitigate tensions and build trust, but in reality, unless they address the political issues, actual harmony is hard to establish. Understanding this reality, both leaderships have been focusing on gradually building understanding through multiple efforts and, through established mechanisms, they try hard to resolve politically contentious issues. Many regional experts opine that India should stop listening to the provocations of America-led Western powers; it should listen to the voice of its own people and to national requirements in maintaining bilateral relations with neighboring countries. China and India need to resolve their differences through their own mechanisms, setting aside Western concerns (Wang 1998). At a critical moment created by COVID-19, China and India engaged in a face-off on June 15, which took the lives of 20 Indian soldiers and injured dozens on both sides. It created a new conflict situation in the region and attracted the concern of the world's major powers. On the sidelines of the SCO Summit on September 10, the Foreign Ministers of China and India, Wang Yi and Subrahmanyam Jaishankar, reached a 'Five-Point' agreement to reduce the existing tension on the Ladakh border. Both focused on continued dialogue and negotiation to resolve the disputed issues.
The paper's findings and conclusion are that both governments should take a tangible cooperative approach and seek ways to resolve the border dispute through continuous dialogue and discussion. This supports wiping out the distrust and misunderstanding that are still hurdles to bilateral engagement. They should also further advance the level of exchanges and collaboration based on common areas of mutual interest. Both governments can also work closely at the regional and global levels to play a mediating role in conflicts, crises, and confrontations (Yuan 2008), because the developing world listens to their voices and the developed world will be compelled into fruitful negotiation. By implementing the above-mentioned recommendations, India and China could uplift their partnership to a new level of broad cooperation and engagement.
Objective of the Research:
The major objective of this study is to analyze the multifaceted issues in China-India relations and to suggest rational recommendations for resolving contentious issues and moving toward broad, inclusive cooperation based on mutual trust and understanding.
Theoretical Framework and Research Methodologies
The comprehensive approach of international relations is used to analyze overall China-India relations. This approach has been used widely in the international relations discipline since the 1970s. The United Nations is one of the principal agencies using this approach to describe international behavior, events, and activities. The purpose of such an approach is cooperation among nations when reasonable, and integration of capabilities when possible, to develop both a shared vision of strategic objectives and an end state, requiring mutual awareness of the risks, threats, and actions of participants (Jasper and Moreland 2015). This approach supports judging, analyzing, and acknowledging the patterns of relations between actors in both positive and negative circumstances. "A Comprehensive approach is taken to mean the employment of unified principles in planning and conducting with all relevant actors in an increasingly complex environment" (Ibid.). This approach is widely accepted, and new practices based on it are expected to generate new ideas and patterns in the related area. It supports analysis and gives theoretical grounding for international actors' behavior (Jasper and Moreland 2014).
In this paper, the comprehensive approach is used to analyze, judge, and evaluate the overall bilateral relations between China and India, particularly since 1950. Both countries have faced numerous positive and negative circumstances in their bilateral relations. The approach provides the theoretical foundation to analyze the situation and guide the study toward its conclusions. Theoretical clarity and grounding help to systematically generalize even complex and diverse issues and give direction for further research work. This research is qualitative in nature and descriptive in form. Secondary data are used, with care taken to ensure their validity and authenticity.
Post WWII to 1962: Sino-India Relations
The People's Republic of China (PRC) was founded in 1949; this was a major political event in Asia, in particular the victory of communist principles in the world's most populous nation. Similarly, the other giant of Asia, India, gained independence from British imperialism in 1947. Critics said that though India had gained independence from the British, its functions, attitudes, and administrative systems still clung to the British imperial modality (Sharma 2019). India was the first non-socialist nation to recognize the PRC, on 30 December 1949. Even though the two countries have different political foundations, social democracy and communist socialism share some commonalities, particularly on social, economic, and justice issues. China appreciated India's role in recommending the PRC for the permanent seat on the UN Security Council (Zhao 2000). Chinese President and Chairman of the Communist Party of China Mao Zedong and India's Prime Minister Jawaharlal Nehru both emerged from long political struggles, so initially they recognized in each other how the two countries could cooperate and move forward, harnessing the relationship. China and India signed significant bilateral agreements in 1954 which formulated the "five principles of co-existence", also known as Panchsheel in Hindi (Zhang 2010). In the course of increasing bilateral relations, Chinese Premier Zhou Enlai visited India in 1954; similarly, Nehru visited Beijing in the same year and talked with senior Chinese leaders about extending friendship and finding common ground for cooperation at the international level (Wang 1998). Substantially, India and China enjoyed cordial relations until the outbreak of the 1962 war. This does not mean that they had no problems at the bilateral level: China had not accepted the borderline demarcated by the British imperialists, and this issue is still in limbo. India, meanwhile, sticks to the McMahon Line drawn by the British Indian administration in 1914, which the Chinese side has long protested and does not recognize.
China claims that around 90,000 square kilometers of territory have been under India's control since the British unilaterally imposed the McMahon Line. On the other side, India blames China for taking the Aksai Chin territory, which is said to be in a strategically significant location. Zhou and Nehru tried to find common ground to resolve the border issue but failed to reach a solid conclusion because of their different approaches. So, particularly from the end of the 1950s, Sino-Indian relations turned toward distrust and, at some points, rigidity. India's decision to increase military checkpoints in the border areas, and its start of playing the Tibet card, infuriated Beijing. Tibet's strategically sensitive location also pressured China to counter India's expansionist approach (Yuan 2008). Nehru's administration gave asylum to the Dalai Lama and thousands of his disciples and supported anti-China activities, which angered the Chinese leadership and pressured it to take tangible measures. Beijing decided to 'teach India a lesson', which culminated in the border war of October 1962 (Ibid.). The war lasted just a month, and China secured victory over India. In the military sphere, it is said that the 1962 war revealed the real shortcomings and weaknesses of the Indian army and administration. The failure of the Indian army to tackle China humiliated and demoralized the Indian leadership (Dixit 2003); they had misjudged the real strength of the People's Liberation Army.
There was little dialogue in the 1970s between the leaderships of China and India, though there were some efforts to calm relations. While China-Soviet relations were at their weakest point from the mid-1960s, India and the Soviet Union enjoyed strong friendship and partnership, and the Soviet Union provided huge support for India's development. On the other side, China and Pakistan became strong friends and started multiple forms of bilateral cooperation and exchange. Notably, the rest of the South Asian neighbors did not take sides in the Sino-Indian war; instead, they maintained neutral positions and asked both parties to resolve the issue. The role of the governments of Nepal and Sri Lanka in pressing China and India to end the war immediately was widely appreciated. History shows that, due to the inherent dispute, it has been hard to advance a true sense of rapprochement in India-China bilateral relations (Panda and Atmaja 2019).
China and India restarted dialogue at the end of the 1970s with exchanges of visits. Governmental and non-governmental visits slowly helped to calm the heat of war and move toward peaceful means of discussing and resolving the pertinent issues. Positively, in 1981 the foreign ministers of both countries met and discussed normalizing relations. With increasing dialogue, Indian Prime Minister Rajiv Gandhi's state visit to China in December 1988 became a historic event in normalizing relations, and both sides agreed to seek mutually acceptable solutions to the border issues (Ranganathan 1998). With this high-level visit of the Indian Prime Minister, a new dimension in bilateral relations began. The two countries initiated multiple efforts to enhance cooperation, even though there has still been no substantial change in the territorial claims of either side.
Uncertainty and Rapprochement
The 1962 war created deep distrust and a containment approach between India and China, on top of substantial political, ideological, and practical differences. At the peak of the Cold War, these two highly populous countries entered their own cold war, each aiming to negate the other, until rapprochement began in the mid-1980s. The political leaderships of the two countries had started informal discussion and dialogue in the mid-1970s with the aim of normalizing the situation. Both realized that they could cooperate on common issues by putting aside political differences, which could be addressed through continuing negotiation and effort. China tested a nuclear weapon in 1964 and India did so in 1974. When India, under a policy of secrecy, conducted its second round of nuclear tests in 1998, China objected to India's motives and its breach of international laws, norms, and consensus.
China's entry into the World Trade Organization (WTO) in 2001 paved the way for further enhancement of its cooperative approach toward the world. With it, China has become more connected to and interdependent with the world community. While India is still not recognized even as a sub-regional power in South Asia, China has come into a position to tackle American hegemony and contribute to a balanced world order. India-China relations from 1950 to 1990 had multiple dimensions, with sometimes deep confrontation and sometimes the triumphalism of the slogan Hindi-Chini Bhai Bhai (Indians and Chinese are brothers). Since 1950, these two countries had shown a strong commitment to fight imperialism globally. Indian Prime Minister Jawaharlal Nehru once said, "our two peoples' common interests in their struggle against imperialism outweigh by far all the differences between our two countries. We have a major responsibility for Sino-Indian friendship, Asian-African solidarity, and Asian peace" (Government of India 1962).
Many experts point to the Bandung Conference of 1955, where Nehru took time to introduce Chinese Premier Zhou Enlai to other leaders of developing nations (Miller 2007). The Bandung Conference fostered solidarity between developing nations and formed a commitment to fight against imperialism and the hegemonic postures of the great powers. But unfortunately, the Sino-Indian rivalry began just three years after the Bandung Conference. India showed a more arrogant and destructive approach toward the sovereignty and integrity of China, trying to play a hampering role on the issue of China's Tibet by provoking and supporting the separatist Dalai Lama.
From the end of the 1950s, India had taken a more aggressive policy on the border issue. In November 1961, India established military posts north of existing Chinese positions and also cut off Chinese supply lines. This strategy was maintained even at the time of truce after the 1962 war. When the situation reached a critical phase, China and India entered a full-fledged war in October 1962. An angry Nehru asked the United States for assistance, even though bilateral relations between the two countries had not been fruitful due to India's involvement in the non-alignment movement and certain campaigns against the United States. Then-American President John F. Kennedy responded quickly by sending an aircraft carrier to the Bay of Bengal. But, mindful of its internal situation and global responsibility, China declared a ceasefire unilaterally. The war lasted 31 days, and India was severely defeated by China.
Acknowledging the international situation, the United States and China agreed to build rapprochement by fostering mutual understanding and cooperation. This supported China in gaining UN membership and the permanent seat on the Security Council. When Deng Xiaoping became paramount leader in 1978, China launched the reform and opening-up policy, aiming to enhance deep and inclusive cooperation with the rest of the world. Since then, a politically stable China with a solid economic and social plan has undergone deep structural reform and achieved outstanding development and prosperity. In South Asia, China maintained closer relations with Pakistan; with increasing cooperation, the two countries signed an agreement on nuclear cooperation in 1976 (Tellis 2004).
Deng's foreign policy was very pragmatic and objective. He maintained the principle of "hide brightness, nourish obscurity", emphasizing domestic issues and detachment from regional and international conflict and war. During the short period of the Janata Party-led government, in 1979, then Foreign Minister Atal Bihari Vajpayee paid a historic visit to China. But after a governmental change in New Delhi, confrontation between the two countries persisted, with some border skirmishes. Experts say that comprehensive China-India rapprochement began with Indian Prime Minister Rajiv Gandhi's historic visit to China in 1988. At that meeting, both leaders agreed to find consensus and understanding on common issues and to cordially continue dialogue and negotiation to resolve the contentious issues. Since then, the normalization process has been positive, and the two countries, despite differences on multiple issues, have been fostering cooperation in multiple fields. Exchanges of visits and cooperation on regional and global issues by both sides have also increased exponentially. Their strong cooperation in the trade and finance sector is historic, with huge global impacts. Fostering cooperation helps to reduce misunderstanding and expand the level of trust and collaboration (Luo 2018). Though understanding had been building since the mid-1980s, there were skirmishes and clashes of interest on multiple occasions. India illegally supported the seventeenth Karmapa's escape from China's Tibet in 2000, which showed the malign intention of the Indian administration even while rapprochement was moving in a positive direction. Both countries celebrated the "India-China Friendship Year" in 2006 with exchanges of high-level visits and many cultural events and programs. The Nathu La trading pass on the Sino-Indian border in Sikkim, closed after the 1962 war, was reopened. Importantly, the two countries held their first-ever joint military exercise in December 2007, and their leaderships reaffirmed a 'shared vision for the 21st century' in 2008. The June 15 incident dealt a big blow to their relations. Both countries blamed each other for escalating the conflict. After that, India took many actions to detach from China, like banning Chinese apps, stopping Chinese investments, and canceling ongoing projects. This has increased the fear that, if the situation goes down the wrong track, it could lead to a massive regional conflict that would create destructive instability and prolonged confrontation. Indian professor Meghnad Desai suggested Indian Prime Minister Modi learn from the mistakes of Nehru and resolve the dispute with China: "Nehru ordered the Army to throw the Chinese out. India was humiliated. No NAM nation came to help India; only the US and Israel did. India forgave him. Modi can learn from Nehru what not to do" (Desai 2020). Now too, without provoking the situation further, both countries should focus on tangible dialogue to address it.
With the dawn of the 21st century, though some specific political and boundary issues remain to be resolved, the two countries made commitments to foster cooperation and understanding in the remaining areas. The border issue is still a contested issue they have to resolve. Fortunately, on many occasions both leaders have vowed to resolve issues through comprehensive dialogue and negotiation. Sometimes, however, India has crossed the boundary of trust and consensus and created hurdles in bilateral relations. China claims that the eastern part of Tibet, which India treats as its Arunachal state, should come under Chinese sovereignty. So when high-level Indian leadership visits 'Arunachal state', China protests and questions the real intention of the Indian government regarding China's Tibet affairs.
Since the time of Prime Minister Nehru, India's neighbor policy has been unproductive. India's neighbors other than China, like Pakistan, Nepal, Bangladesh, and Sri Lanka, have on multiple occasions raised questions about India's hegemonic attitude and the undiplomatic activities of Indian officials. Much research also shows that India is the most unpopular country among neighboring peoples in South Asia. Yet India still seeks a dominant role in the region, even while lacking trust and practical credibility among its neighbors.
China-India Engagement in Regional Affairs
Though China has been trying to keep India as a friendly neighbor, the Indian approach toward China looks more arrogant, treating China as a regional strategic and security challenge. In reality, the Indian establishment has still failed to grasp China's real economic, political, and security strength. China, by its overall strength, is a global power, even though it has not declared itself as such. The reason is that China does not want to play a geopolitical, zero-sum, confrontational game at the global level. Published foreign affairs documents have emphasized China's interest in contributing to the peace, harmony, development, good governance, and inclusiveness of the world (Zhao 2000).
The two countries realized that the border issue could not be resolved in one discussion. Former Chinese Premier Zhou Enlai, at the time of his India visit, said that, due to the nature of the problem, it would take a long time to settle the boundary issue; but he added that continuous dialogue and work through established mechanisms could help resolve the issue properly. Since the end of the 1990s, both countries have advanced the border region's infrastructure by installing high-level technologies. Both sides work strictly to detect incursions from the other side, a situation that sometimes leads to border skirmishes (Wang 1998). The Donald Trump administration's Indo-Pacific strategy is directly aimed at containing the rise and influence of China in the Indo-Pacific region. Though India has not formally supported the US-initiated Indo-Pacific strategy, it is not hard to see that, on the issue of containing China, the Indian establishment would support any initiative of the US administration unless it hindered India's own security interests. To show global responsibility, China still needs to expand its role and contribution to the global community in line with its strength and capability (Mazarr et al. 2018).
The American National Security Strategy of 2017 and Defense Strategy of 2018 formally recognized China as a threat to overall US national security and interests. They stated that, ideologically, culturally, and in terms of the scale of its development, China is going to reduce American influence and establish a new global order friendlier to itself. But Chinese policymakers reiterate that China just wants to contribute to world peace, stability, development, and harmony. As China's positive and comprehensive partnerships in South Asia have increased, Western powers, particularly the US, have taken this seriously, and over the last decade have been proposing multiple projects and programs to counter Chinese support. Indian policymakers fear that China will never accept India as a rising power in Asia (Kulkarni 2017). The debate over accepting or refusing the multimillion-dollar American Millennium Challenge Corporation (MCC) project in Nepal is one example of how the US deploys projects to contain Chinese support and investment in the South Asian region.
China-India bilateral trade will soon cross 100 billion dollars. China's trade and other forms of cooperation with the rest of the countries of South Asia have also been mounting for some years. On one side, India's cooperation and trade with China are rising to new heights; on the other, Indian experts and policymakers, directly and indirectly, raise serious questions about China's increasingly constructive partnership with the rest of the South Asian countries (Zhang 2006). South Asian neighbors think that India always creates obstacles to the peace and stability of their countries. History also shows that India's big-brother attitude and British-era style of diplomacy have led to the failure of its neighborhood diplomacy, and studies show that anti-India sentiment is very high in its neighboring countries.
South Asia is home to the largest number of poor people in the world. Deep structural reforms are required for the socioeconomic transformation of the region. The great powers have always attached importance to the geopolitical and geostrategic significance of the region. Recognizing its development potential, China has for some years given the region immense priority. From his first tenure, President Xi Jinping has given policy priority to neighboring countries, as demonstrated by the programs of the Belt and Road Initiative (BRI). Due to its narrow attitude, regional experts opine, India is morally and practically losing ground in the politics of South Asia (Sharma 2019a). The rest of the South Asian countries are not ready to accept India's traditional, hegemonic behavior. As a rising economic power of the region, India needs to foster partnerships with neighboring countries based on equality, respect, and balance. Only then could it lead the region toward mutual economic benefits and eventually contribute to the peace and stability of the region (Ibid.).
It is not hard to see that India is anxious about China's increasing partnership with its neighboring countries. But unfortunately, India does not evaluate why there is such a positive attitude toward China in the rest of South Asia and a correspondingly negative sentiment toward India. Indian policymakers sometimes talk about regional integration, but they fail to understand the hurdles their own nation has created for the peace and stability of the region (Meng 2017). Without finding the reason, it is futile to blame China for creating positive space in South Asia. Some regional experts say that South Asia can become a strategic challenge for India, where other great powers have a strong presence and Indian interests are checked and forced to accept reality. China's increasing hard and soft power presence in the rest of the South Asian countries is not only received positively; multiple layers of exchange and collaboration are also flourishing, based on mutual trust and win-win cooperation.
Given China's growing engagement and cooperation in South Asia in recent years, it can be assumed that India is not taking this easily and feels its influence in the region declining. The major problem of the Indian establishment in dealing with its South Asian neighbors is that it has not been even minimally successful in gaining positive views from the people of the region. It is instead depicted as a malign and interfering party that does not want peace, stability, and development in the region. Regional experts opine that, due to a lack of strategic culture and diplomatic character, Indian diplomats are seen as the bad guys in the region. The same work could be done positively, but Indian officials carry it out with noise, disturbance, and an encroaching style, so people easily see what the Indian establishment is doing in the internal matters of their country.
The containment policy of the Western powers, particularly America, has actually begun a geopolitical game in the South Asia region. The Indo-Pacific Strategy (IPS) is an open document about America's strategy to stop China's engagement in the Indo-Pacific region. Some Indian experts say that, in the long term, Western encroachment in the region aimed at stopping China will be more dangerous for India than China's developmental support and cooperation. They also suggest that Indian policymakers reject Western influence; India should start a constructive dialogue with China and its South Asian neighbors, and provide kind support with zero interference to those neighbors (Raghavan 2018). But in practice, the Indian establishment does not accept this suggestion and seems to continue the same outdated, interfering neighbor policy.
India's interfering approach toward the neighboring countries undoubtedly creates an opening for other powers' interests and builds the ground for them, though this is not unique to this region. There are many reasons for South Asians to be positive toward China, but the first and foremost reason is China's non-interfering and supportive posture. China never involves itself in government changes or creates hurdles in the region (Lan 2008). Based on the understanding and requirements of the respective country, China has put forward cooperation. Experts say that if India recognizes its long-standing diplomatic mistakes in dealing with its neighbors and wants to change course, it should learn from China's neighbor policy and its diplomatic strategic culture and practice. Without correcting its own policy, it is futile to blame others for enhancing their presence in the region. Due to the changing world scenario and geopolitical dynamism, South Asia is becoming a new ground of great power interest; this is a sign of the changing context, course, and behavior of the regional actors.
Is China a friend or foe of India?
With the China-India face-off of June 15, many questions have been raised from many sides about the durability of relations between them. Looking back through history, official documents and the public statements of Indian politicians and officials have not denoted China as an adversary or enemy. The two largest developing countries of the world, where China is the 2nd and India the 5th economic power in terms of Gross Domestic Product (GDP), have multiple differences in political, ideological, economic, and social models and practices. Since the China-India war of 1962, Indian policymakers have shown a less than positive attitude, even though the rapprochement since the mid-1980s has linked the two countries through multiple and higher levels of cooperation and exchange. Outwardly, however, Indian policymakers have not used negative statements to criticize or blame China. The Moscow meeting of the two countries' Foreign Ministers produced an important Five-Point agreement to mitigate confrontation in the border areas; both emphasized solid dialogue when the nations find themselves in a confrontational situation. At the critical time of the COVID-19 pandemic, China and India need to cooperate, since both have large populations and the socio-economic impact could be high. Any provocation in the border areas could carry a high cost at this critical moment. Experts say that only tangible communication can create a stable environment in the border areas (Sharma 2020).
India thinks that it is a competitor of China, but China does not accept this. Experts say that India needs to reform and do a great deal before it can stand as a competitor of China; America can call China a competitor, but India cannot. In practice, China does not like the term 'competitor'; rather, it prefers 'comprehensive or strategic partner'. India is facing many political, social, and economic problems, while on the other side China is highly organized, stable, and more purposeful. Similarly, China has very strong and dynamic public and private institutions to formulate and implement policies and programs and to mobilize resources. These are the most important reasons why China leaves India behind in rapid development and institutional reform.
China and India are both nuclear powers, though military experts opine that China can only be compared with America and Russia in terms of modern military equipment. China is also far ahead in cyber warfare technologies and security. Acknowledging China's growing military strength, India has also been buying billions of dollars' worth of advanced military equipment from Russia and America. With a solid economic plan, India could narrow the economic gap with China (Sen 2010). China has been assuring India that its rise is not directed against the neighboring countries, but rather supports peace, development, and harmony. China's peaceful policy is also demonstrated by its effort to resolve the Doklam issue of 2017 between China and India. Experts say that President Xi Jinping showed more maturity and political morality than Indian Prime Minister Narendra Modi in sorting out the Doklam issue peacefully (Qinglong 2017).
Because of China's strong global presence and the deepening China-India economic, cultural, and people-to-people cooperation, many experts on China-India affairs now suggest to Indian policymakers that dialogue and continuous communication are the effective means to resolve the issues and reap the benefits of cooperation. For some years, China and India have held joint military exercises, which are expected to help build trust and reduce confrontation over the border issues; these exercises could also help mitigate misperception. Furthermore, if India builds harmony with China by rejecting Western provocation, China could help foster a deeper rapprochement between Pakistan and India. If that happens, it will support sustainable peace, stability, and development of the entire region. Positively, the Chinese and Indian leaderships agreed to set up a direct hotline service to build understanding and continue the dialogue (Shukla 2010).
Regional experts opine that India should stop seeing China as a foe. This outdated mentality will not produce positive consequences for India in the long run. Instead, India's outdated neighbor policy brings India's neighbors and China even closer together. India cannot stop China from fostering deep cooperation in the region; China's comprehensive engagement there will further support India's socio-economic change and promote inclusive integration (Zhang and Zhang 2006). The process is already beginning. Chinese companies have funneled billions of dollars into the region. Except for India and Bhutan, most of the countries of the region have signed agreements under the China-led Belt and Road Initiative (BRI). Not only this, China and these countries have started multiple levels of inclusive partnership and are reaping benefits in the spirit of win-win cooperation. But India is still protesting the BRI, claiming that the BRI-based China-Pakistan Economic Corridor (CPEC) project infringes on its sovereignty. China, however, says that CPEC has no intention of interfering in India's affairs.
Apart from certain political issues, there are no points of confrontation where China and India collide. Both are strong markets for each other and strong voices of the developing world in international forums. Similarly, there are many regional and international mechanisms where the two countries cooperate with others on common issues. India is the second-largest member of the China-led Asian Infrastructure Investment Bank (AIIB). Both countries cooperate in BRICS (the community of Brazil, Russia, India, China, and South Africa) and the Shanghai Cooperation Organization (SCO) (Sharma 2019b). Agriculture, climate change, terrorism, taxes, property rights, and patents are among the issues on which China and India work closely at the international level. Due to the combined strong voice of China and India, the World Trade Organization (WTO) has on many occasions changed policies to support the concerns of the underdeveloped and developing world.
Increasing cumulative engagement between China and India has many positive aspects at the regional and global levels. The West does not want China-India rapprochement. History sometimes shows that the Indian establishment has been provoked by Western powers to act against the genuine concerns and interests of China. Unfortunately, due to a lack of strategic culture and long-term diplomatic vision, India does not understand that Western influence in South Asia hampers it more than China's cooperative approach does. Besides political interference, the West tries hard to impose cultural imperialism on the rest. In the long run, this would be more destructive for host countries and can damage their history and cultural strength. Although there are many political differences between China and India, they have many cultural similarities and common historical aspects (Zhang and Jianxue 2007). With dialogue and negotiation, these two countries can sort out the challenges and move toward the common goals of peace, prosperity, and harmony.
Increasing Interdependence and Mitigating Stalemate
For some years, China and India have been enjoying more cooperation and exchange than at any time in history. Bilateral trade will soon cross 100 billion dollars. China is now the second and India the fifth largest economy in the world by GDP. As rising countries with the largest populations, both are becoming strategically important to each other in terms of trade, finance, technology, capital flow, and knowledge sharing; this matters in the present context and even more for positive outcomes in the future. The problem for India is that, due to the lack of an effective neighbor policy, its neighbors still do not recognize India as a regional power, or as any kind of responsible regional power. The failure of Indian policymakers, and of the establishment broadly, to create a minimum level of trust among neighboring countries is responsible for this.
If India wants an important place in regional and global affairs, the support of China is vital. An attitude of rivalry or unhealthy competition will not help India expand its role. If India, with the support of Western forces, tries to intercept China, China will certainly face the situation squarely, and the diplomatic cost to India could be high and hard to recover from. Competition with cooperation is, as policy experts say, the only formula for expanding China-India multidimensional cooperation and gaining benefits according to national capacity, requirements, and interests. The ball is actually in India's court as to which path to follow to move comprehensively toward long-term benefits (Sharma 2019a).
China shows a comparatively peaceful approach in both the regional and global spheres. Its official proclamations of the peaceful rise and peaceful development policies have been working comprehensively to build trust globally (Zhang 2020). Even with a country like Japan, whose political relations with China have long not been positive, China has succeeded in expanding deep cooperation and building understanding on many levels. Here the most important factor is whether a country succeeds in forging common ground of understanding with other countries. If it succeeds, its policy will work; if not, it has to change course through self-correction and judgment. China's recent history shows that it has been successful in expanding foreign relations according to its capabilities and national requirements.
India needs deeper structural reform in the economy and in the military and defense sectors if it wants to reduce the power gap with China (Sawhney and Wahab 2017). India is a member of several organizations in which China is a leading actor, like the Shanghai Cooperation Organization (SCO), the Asian Infrastructure Investment Bank (AIIB), the New Development Bank (NDB), and so on. Some Indian experts opine that India could join the China-led Belt and Road Initiative (BRI) were it not for the China-Pakistan Economic Corridor (CPEC) program. India has accused CPEC of violating its sovereignty. For its part, China reiterates that there is no intention to violate India's sovereignty and that, in CPEC, China and Pakistan are dealing on the basis of their own national priorities. There is a long-standing dispute between India and Pakistan over Kashmir; since 1947, the two countries have engaged in direct and indirect war and confrontation over the territory.
More engagement between China and India would help them mitigate disputes and foster multiple levels of cooperation and exchange to the benefit of both countries. To resolve the border dispute, the two countries have established joint mechanisms to study and recommend tangible solutions (Wang 1998). As both countries have reiterated many times, concrete negotiation is the only way to address the dispute and build more harmonious relations. At the Wuhan informal summit in 2018, Chinese President Xi Jinping proposed a "Two Plus One" formula under which China and India would jointly invest in and expand cooperation with the rest of the South Asian countries. China has always sought to play a constructive role in regional integration (Chinese Ministry of Foreign Affairs 2018). Indian Prime Minister Narendra Modi reportedly gave neither a positive nor a negative signal on that proposal. The Wuhan informal summit nonetheless helped build understanding between India and China (Indian Ministry of Foreign Affairs 2018). A similar initiative, the second informal summit between the two leaders, took place in October 2019 in Mamallapuram, an ancient city in India's Tamil Nadu state. "The purpose of the summit, as described by Chinese Ambassador to India Luo Zhaohui, was for the two leaders 'to have a free exchange of views without fixed topics. They will talk about major issues, they will have a free atmosphere with each other which is a very good format for discussion'" (Thakker 2019).
The Chinese side maintains that the "Two Plus One" proposal was put forward in a benign spirit of cooperation. It further clarifies that this is a long-term policy to create understanding and mutual trust between China and India and to attract interested countries to join related cooperative fields. Looking at the deeper spirit of the proposal, it is not hard to see that China wants solid cooperation with India to foster cooperation across the region. There is no better mechanism for reducing misperception than dialogue and continued discussion (Mohan 2012). Once China and India begin cooperating on a common project, they can build up understanding and enter a new era of cooperation that mitigates confrontation, dispute, and distrust. If the Indian establishment can acknowledge that its neighborhood policy has failed and that China's proposal offers space to sow trust, it should not delay in acting on the win-win spirit of the "Two Plus One" proposal.
Given the increasing national power and global engagement of both China and India, experts suggest long-term comprehensive cooperation and partnership, accompanied where appropriate by healthy competition. During President Xi's visit to India last October, the two countries agreed to elevate their friendship into a comprehensive strategic partnership (Thakker 2019). Both countries have clearly acknowledged the boundaries, significance, and challenges of their relationship, which depends on their national priorities, regional geopolitics, and the equations among the global great powers. Western powers are provoking India to take hard-line options in dealing with China, but the pragmatic choice is to maintain harmonious and cooperative relations with China for India's long-term benefit.
Major Recommendations and Conclusion
With the new confrontation of June 15, misunderstanding and distrust have increased after a long period of peaceful engagement. Even though they have politically contentious bilateral issues, the two rising powers, China and India, have been working together on many common issues of regional and global concern. The five-point consensus reached in Moscow between the two countries' foreign ministers has created a positive environment for dialogue and negotiation. Arms control, terrorism, climate change, the demands of the developing world, agriculture, information technology, and peace and security are among the major issues on which the two countries cooperate in international forums. India is a member of several regional and global organizations initiated by China. India need not fear that rapprochement with China will harm its future development (Yang 2010). India is a big market for Chinese products, and India's software industry finds one of its largest markets in China. With the dawn of the 21st century, cooperation between the two countries has thus reached new heights of deep engagement and exchange. By banning Chinese apps and trying to block Chinese companies, India gains nothing; it only loses massive investment opportunities and a big Chinese market. Experts on China-India affairs now ask whether, despite the deep cooperation between the two countries, the relationship is credible and genuinely based on mutual trust. This is a tough question. The challenge before the leadership of both countries is how to maintain trustful relations while reducing conflict and misperceptions. Chinese experts sometimes charge that India is provoked by Western powers to act against China, a charge borne out when senior American officials openly voiced their views against China and urged India to take assertive action to contain legitimate Chinese interests. The West wants division, not cooperation, between India and China, to prevent China from becoming a global actor. The Chinese side consistently maintains that its military strength is not targeted at any country and that it believes in the peaceful settlement of disputes (State Council Information Office 2015).
This study concludes that if China and India become genuinely serious about expanding their relations on the basis of trust and credibility, they can achieve a great deal and contribute to regional and global peace, stability, and prosperity. According to a World Bank report, by 2050 China and India will be the first- and second-largest economies, respectively. Given their frequently stated interest in a balanced new world order, cooperation between China and India will be essential in that regard. The present world order led by America is marked by imbalance, injustice, win-lose approaches, and a top-down paradigm of political engagement. To challenge this imbalance and establish a new world order, China and India should foster bilateral relations fully based on trust and understanding, thoroughly reducing misperceptions and stalemates.
Regular high-level political dialogue, multiple levels of negotiation through the established joint mechanisms, strategic cooperation on bilateral relations, and cultural and people-to-people exchanges are among the activities that need to be accelerated to reduce future confrontation and increase benefits grounded in mutual understanding and trust. No doubt, large misperceptions still persist between the two countries. This study finds that only regular cooperation and open dialogue can provide a fruitful platform for creating a trustful environment and seeking a peaceful settlement of the dispute. The present mistrust can only be addressed through regular dialogue and a pragmatic trust-building process on both sides.
"year": 2021,
"sha1": "176078f8c52ab0cdd386a65279240f9eacd8c378",
"oa_license": "CCBY",
"oa_url": "https://al-kindipublisher.com/index.php/jhsss/article/download/1967/1678",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "5d8a4faaa4de0d76506483ab07b5d447364611e8",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
CO~($J = 1-0$) Observations of a Filamentary Molecular Cloud in the Galactic Region Centered at $l = 150\arcdeg, b = 3.5\arcdeg$
We present large-field (4.25~$\times$~3.75 deg$^2$) mapping observations toward the Galactic region centered at $l = 150\arcdeg, b = 3.5\arcdeg$ in the $J = 1-0$ emission line of CO isotopologues ($^{12}$CO, $^{13}$CO, and C$^{18}$O), using the 13.7 m millimeter-wavelength telescope of the Purple Mountain Observatory. Based on the $^{13}$CO observations, we reveal a filamentary cloud in the Local Arm at a velocity range of $-$0.5 to 6.5~km~s$^{-1}$. This molecular cloud contains 1 main filament and 11 sub-filaments, showing the so-called "ridge-nest" structure. The main filament and three sub-filaments are also detected in the C$^{18}$O line. The velocity structures of most identified filaments display continuous distribution with slight velocity gradients. The measured median excitation temperature, line width, length, width, and linear mass of the filaments are $\sim$9.28~K, 0.85~km~s$^{-1}$, 7.30~pc, 0.79~pc, and 17.92~$M_\sun$~pc$^{-1}$, respectively, assuming a distance of 400~pc. We find that the four filaments detected in the C$^{18}$O line are thermally supercritical, and two of them are in the virialized state and thus tend to be gravitationally bound. We identify in total 146 $^{13}$CO clumps in the cloud; about 77$\%$ of the clumps are distributed along the filaments. About 56$\%$ of the virialized clumps are found to be associated with the supercritical filaments. Three young stellar object (YSO) candidates are also identified in the supercritical filaments, based on the complementary infrared (IR) data. These results indicate that the supercritical filaments, especially the virialized filaments, may contain star-forming activities.
INTRODUCTION
Filamentary structures are frequently seen in nearby star-forming regions (SFRs), such as the Orion cloud (Bally et al. 1987; Chini et al. 1997), the Taurus cloud (Abergel et al. 1994; Mizuno et al. 1995), the Ophiuchus cloud (Loren 1989), and the Perseus cloud (Hatchell et al. 2005), and have also been revealed in the distant infrared dark clouds (IRDCs; e.g., Peretto & Fuller 2009) and cold interstellar medium (ISM; e.g., André et al. 2010; Men'shchikov et al. 2010; Molinari et al. 2010). The ubiquity of filamentary structures suggests that they persist for a large fraction of the lifetime of clouds and can therefore provide important information about the origin of SFRs (Myers 2009). Numerous studies indicate that filamentary structures represent a key step in the process of star formation, connecting the compression of molecular gas with the fragmentation into prestellar cores (see a recent review by André et al. 2014, and references therein). However, the nature of filamentary structures is still unclear (Myers 2009; André et al. 2014), and their spatial organization also remains controversial.
Filaments are typically elongated structures, with the lengths ranging from a few parsecs in the nearby molecular clouds to several tens of parsecs in some IRDCs (Beuther et al. 2011;Jackson et al. 2010). Arzoumanian et al. (2011) found that filaments in the Gould Belt have a narrow distribution of width with a median value of 0.10 ± 0.03 pc. With the length and width of filaments, André et al. (2014) defined a filament as an elongated structure with an aspect ratio larger than ∼5 -10 that is significantly overdense compared to its surrounding ISM. They also pointed out that filaments are generally linear over their length and appear to be co-linear in the direction of the longer extents of their host clouds.
For the spatial organization of filaments, Tachihara et al. (2002) suggested the so-called "head-tail" structure, in which the "head" is the central region where cluster formation takes place and the "tail" is the filamentary extension of the central region. Myers (2009) noticed that some "heads" have more than one associated "tail" in the deeper and higher angular resolution observations, and presented the "hub-filament" structure. In this model, the "hub" is the central body of low aspect ratio and high column density, while the "filaments" are the associated features of higher aspect ratio and lower column density. Hill et al. (2011) found that the filaments in the Vela C molecular cloud complex appear to be more uni-directional within "ridges" in the inner, higher column density area and more variedly directional within "nests" in the outer, lower column density area. The magnetic field appears to be perpendicular to the "ridges" and chaotic in the "nests" (Kusune et al. 2016). In the Taurus molecular cloud, on the other hand, Palmeirim et al. (2013) found that the dense, star-forming filament B211 is surrounded by a large number of low-density sub-filaments (so-called "striations") oriented roughly perpendicular to the main filament, along with a magnetic field running parallel to the "striations" and perpendicular to the main filament. Moreover, as pointed out by Palmeirim et al. (2013) and Kusune et al. (2016), the magnetic field plays a significant role in shaping the morphology of the filaments in both the "ridge-nest" and "filament-striation" structures.
It is important to search for more filamentary structures and to study their properties, as well as the star-forming activities therein, in detail, in order to achieve a better understanding of the nature of filaments. The Columbia-CfA 12 CO (J = 1 − 0) line survey toward the Galactic Plane (Dame et al. 1987, 2001) found a large elongated molecular cloud in the Galactic region centered at l = 150°, b = 3.5° (referred to as the "G150 region" hereafter). This elongated cloud is located to the west of a giant molecular cloud (GMC), and it covers an area within 148° ≤ l ≤ 153° and 1° ≤ b ≤ 6° (see Figure 1). A few supernova remnants (SNRs) identified by radio observations are also located in this area, such as G149.5+3.2 and G150.8+3.8 (Gerbrandt et al. 2014), and G150.3+4.5 (Gao & Han 2014). Three velocity components can be roughly resolved in the longitude-velocity map of this elongated cloud (see Figure 2), with two GMCs located in the east and west. Nevertheless, there has been no detailed study of this elongated cloud thus far, and the properties of the molecular gas in the cloud remain unknown.
In this paper, we present new CO (J = 1−0) observations toward the G150 region in the 12 CO, 13 CO, and C 18 O lines, using the 13.7 m telescope of the Purple Mountain Observatory (PMO), as part of the Milky Way Imaging Scroll Painting (MWISP) project for investigating the molecular gas along the northern Galactic Plane. Based on the high-resolution CO multi-line observations (the angular resolution of the MWISP survey is ∼50″, while that of the Columbia-CfA survey toward the second Galactic quadrant is 0.25°), we resolve the elongated G150 molecular gas into a large filamentary cloud, and we study the properties of this filamentary cloud in detail in this work. In Section 2, we describe the CO line observations and data reduction. Observational results are presented in Section 3 and discussed in Section 4. The main conclusions of this work are summarized in Section 5.
CO Data
Our observations of the G150 region were conducted with the 13.7 m millimeter-wavelength telescope of PMO, located in Delingha, China, from 2013 September to 2014 November. The whole observed region covered an area within 147.75° ≤ l ≤ 152° and 1.5° ≤ b ≤ 5.25°, and was observed in 12 CO (J = 1 − 0), 13 CO (J = 1 − 0), and C 18 O (J = 1 − 0) simultaneously with the 9-beam Superconducting Spectroscopic Array Receiver (Shan et al. 2012). Using the on-the-fly (OTF) observing mode, the telescope scanned the sky along both the longitude and latitude directions at a constant rate of 50″ s −1 , and the receiver recorded at an interval of 0.3 s.
The half power beam width of the telescope is about 52″ at 115.2 GHz and 50″ at 110.2 GHz. The antenna temperature (T_A*) is calibrated to the main beam temperature (T_MB) using T_MB = T_A*/η_MB, with a main beam efficiency (η_MB) of 44% for 12 CO and 48% for 13 CO and C 18 O. The typical system temperature during the observations was 270 K for 12 CO and 180 K for 13 CO and C 18 O.
After deriving the original OTF data, we checked and removed the bad channels and standing waves in the spectra. Then we regridded the checked data to 30 ′′ × 30 ′′ pixels and converted them into the standard FITS files by the GILDAS software package (Guilloteau & Lucas 2000). Using the Interactive Data Language software package with astronomy library, we mosaicked these FITS files together. In the resulting data cube, the rms noise level was 0.48 K for 12 CO at a velocity resolution of 0.16 km s −1 , and 0.28 K for 13 CO and C 18 O at a velocity resolution of 0.17 km s −1 .
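As a rough illustration of this reduction step (not code from the paper), the following Python sketch estimates a per-pixel rms noise map from presumed line-free edge channels of a mosaicked cube; the file name and the choice of channels are hypothetical.

```python
import numpy as np
from astropy.io import fits

# Hypothetical mosaicked 13CO cube with axes (channel, y, x).
cube = fits.getdata("G150_13CO_cube.fits")

# Assume the first 30 channels are free of line emission;
# their standard deviation gives a per-pixel rms map in K.
rms_map = np.nanstd(cube[:30, :, :], axis=0)
print("median rms: %.2f K" % np.nanmedian(rms_map))  # ~0.28 K expected for 13CO
```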
Overview of the G150 Region
As shown in Figure 1 and Figure 2, the area of our CO observations of the G150 region covers the main part of the elongated molecular cloud found by the Columbia-CfA survey. Figure 3 presents our longitude-velocity map of the G150 region. Three velocity components are clearly resolved, with ranges of −10.5 to −5 km s −1 (first velocity component), −5 to −1.5 km s −1 (second velocity component), and −0.5 to 6.5 km s −1 (third velocity component), respectively. The first component is distributed from 149.5° to 152° in the direction of longitude, while the second component is distributed from 148° to 149.5°. Compared with the former two components, the third component, which is distributed from 148° to 152°, is more extended in the direction of longitude and has higher 12 CO intensity. The spatial distribution of these three velocity components in the 12 CO emission is shown in Figure 4. The first component (blue area) is mainly distributed in the northeast of the G150 region, while the second component (green area) lies in the southwest. The third component (red area) extends from northeast to southwest, covering a much larger area and presenting the shape of a large filament.
According to the results of Dame et al. (2001), the velocity range of the Local Arm in the direction of G150 is roughly from −15 to 10 km s −1 , which means these velocity components should be located in the Local Arm. We further calculate the distances of these components to confirm this idea. For the former two components, we derive heliocentric distances of 340 pc (first component) and 60 pc (second component) using the spatial-kinematic method based on the galactic parameters of model A5 in Reid et al. (2014). For the last component, we adopt a method based on near-infrared photometry (see Section 4.1) to derive a heliocentric distance of 400 pc. Then, according to the characteristics of spiral arms in the Milky Way (Table 2 of Reid et al. 2014) and the galactocentric radius of the Sun (8.34 ± 0.16 kpc in Reid et al. 2014), we find that these three components are indeed located in the Local Arm.
As shown in Figure 2, the first and second components seem to be connected to the East GMC and West GMC, respectively, while the third component looks like a "bridge" between the East GMC and West GMC. Together with the spatial distribution shown in Figure 4, this suggests that these components may be different layers belonging to different GMCs in the Local Arm. Regarding the formation and dynamical interaction of these components, one speculation is that the components and GMCs once formed a single, larger GMC and have separated from each other through the internal motions of that larger GMC (e.g., rotation, expansion). Another possibility is that the two GMCs were originally distinct inhomogeneous clouds, and the three components, including the filamentary structures, were generated in the shocked layer during the collision of these two inhomogeneous GMCs, as suggested by isothermal MHD simulations (Inoue & Fukui 2013). However, the resolution of the Columbia-CfA survey is not good enough to reveal the details of these clouds in Figure 2, and our observations have not yet covered the entire region shown in Figure 1 and Figure 2. A future large-scale CO survey with high resolution will help us develop a better understanding of the physical relation between these components and the GMCs. In this work, we regard these three velocity components as different gas layers in the Local Arm.
Based on the CO observations, we investigate the basic physical properties of these three velocity components. In addition to the 12 CO emission, we also calculate the 13 CO and C 18 O emission of these components, which are presented in Figure 5. The CO spectra show that the 12 CO and 13 CO emission of the third component are much stronger than those of the former two components, and the C 18 O emission is more significant in the third component.
Assuming local thermodynamic equilibrium (LTE), we can calculate the excitation temperature of these three components using

T ex = T 0 / ln{1 + T 0 /[T MB + T 0 /(exp(T 0 /T bg ) − 1)]}.    (1)

Here, we use the radiation temperature of 12 CO to calculate the excitation temperature of the clouds. In the equation, T 0 = hν/k B is the intrinsic temperature of 12 CO, where h is the Planck constant and k B is the Boltzmann constant; T MB is the main beam temperature, and T bg is the background temperature, with a value of 2.7 K. For 12 CO, the optical depth τ ν ≫ 1, which means 1 − e −τν ≈ 1. Under these conditions, we derive the excitation temperature maps and present them in Figure 6.
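To make Eq. (1) concrete, here is a minimal Python sketch of the excitation-temperature calculation from the 12 CO peak main-beam temperature; the constant 5.53 K is T 0 for the 12 CO (J = 1−0) line, and the input map is assumed.

```python
import numpy as np

def excitation_temperature(t_peak_12co):
    """LTE excitation temperature (K) from the 12CO(1-0) peak T_MB, per Eq. (1)."""
    T0 = 5.53                              # h*nu/k_B for 12CO(1-0), in K
    J_bg = T0 / (np.exp(T0 / 2.7) - 1.0)   # background term at T_bg = 2.7 K
    return T0 / np.log(1.0 + T0 / (t_peak_12co + J_bg))

# A 5 K peak temperature gives T_ex ~ 8.3 K.
print(excitation_temperature(np.array([5.0])))
```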
The former two components have similar excitation temperatures with the same mean value of ∼6.2 K, while the mean excitation temperature of the third one is ∼7.4 K. Figure 7 shows the H 2 column density maps of these three components traced by 12 CO, 13 CO, and C 18 O from top to bottom. For 12 CO, the H 2 column density can be directly derived from its integrated intensity by using the X factor (CO-to-H 2 conversion factor), where the value of X is 1.8×10 20 cm −2 K −1 km −1 s (Dame et al. 2001). The 13 CO column density is calculated from the integrated 13 CO intensity under the LTE assumption, where T 0,13CO = hν 13CO /k B is the intrinsic temperature of 13 CO and T ex is the excitation temperature calculated from 12 CO. To derive the H 2 column density traced by 13 CO, we multiply the 13 CO column density by the abundance ratio N H2 /N 13CO with a value of 7×10 5 (Frerking et al. 1982). The H 2 column density traced by C 18 O is derived in almost the same way, from the C 18 O column density and the abundance ratio N H2 /N C18O of 7×10 6 (Castets & Langer 1995). We notice that the C 18 O emission of the second component has a signal-to-noise ratio smaller than three, which means we have not detected effective C 18 O signals in the second component; the H 2 column density traced by C 18 O is therefore not available for it. With the integrated intensity of the CO emission as the weight, we also calculate the mean values of the H 2 column density, which are marked on each panel in Figure 7. The third component has a much greater column density than the other two components.
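The conversions from line intensity to H 2 column density described above can be sketched in Python as follows; w12 is an assumed 12 CO integrated-intensity map (K km s −1 ), and n_iso an assumed 13 CO or C 18 O column-density map already computed under LTE.

```python
import numpy as np

X_CO = 1.8e20   # cm^-2 (K km s^-1)^-1, Dame et al. (2001)
R13 = 7.0e5     # N(H2)/N(13CO), Frerking et al. (1982)
R18 = 7.0e6     # N(H2)/N(C18O), Castets & Langer (1995)

def nh2_from_12co(w12):
    """H2 column density (cm^-2) from the 12CO integrated intensity map."""
    return X_CO * np.asarray(w12)

def nh2_from_isotopologue(n_iso, ratio):
    """Scale a 13CO or C18O column density map by the adopted abundance ratio."""
    return ratio * np.asarray(n_iso)
```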
In the following sections, we mainly focus on the third velocity component (−0.5 to 6.5 km s −1 ), which is a filamentary molecular cloud with greater intensity and column density in the 13 CO and C 18 O emission, and we study the properties of this filamentary cloud. Figure 8 shows the three-color image of the third velocity component. The three CO isotopologues present different views of this filamentary molecular cloud: the 12 CO emission (blue area) outlines the general structure of the cloud, the 13 CO emission (green area) reveals the skeleton of the cloud, and the C 18 O emission (red area) appears only at the brightest parts of the 13 CO emission. This is mainly because 12 CO represents the diffuse gas with low density of ∼10 2 cm −3 , while 13 CO traces the denser intermediate gas with medium density of ∼10 3 cm −3 and C 18 O traces the densest inner gas with high density of ∼10 4 cm −3 .
Filamentary Molecular Cloud
According to the distribution of 13 CO emission, we can clearly see that there is more than one filamentary structure within this molecular cloud. In order to identify these filaments, we adopt the visual inspection method consisting of the following steps in this work. First, we check the 13 CO integrated intensity of this molecular cloud and search for the elongated structures with intensities greater than three times the rms noise level. Second, we calculate the length and width of each derived elongated structure to check whether the aspect ratio is larger than three or not, and regard the qualified ones as filament candidates. Third, we inspect the 13 CO channel maps to make sure these candidates are coherent structures distributed in the adjacent channels rather than different velocity components along the line of sight, overlapping with each other to mimic the shape of the filament. After applying these steps, we finally identify 12 filaments, which are presented in Figure 9, Figure 10, and Figure 11. Each filament is named after the first letter of "filament" and Arabic numerals from "1" to "12". The length and width of identified filaments are listed in Table 1.
As pointed out by Myers (2009) and Hill et al. (2011), one of the most important criteria for distinguishing between the main filament and sub-filaments is the difference in column density, which can be calculated from the integrated intensity. As shown in Figure 9, F1 is the main filament, with higher 13 CO intensity compared with the other filaments. With lower integrated intensity, F2, F3, F4, and F5 are located on the northeastern side of F1, and F6, F7, F8, and F9 are located on the southwestern side. These eight filaments in the surrounding area of F1 can be considered sub-filaments. In the eastern area, F10, F11, and F12 are distributed approximately parallel to F1. Their integrated intensity is not as high as that of F1, so we may also consider them sub-filaments. Figure 10 and Figure 11 present the velocity distributions of the identified filaments. Each filament only appears in adjacent channels (Figure 10) and does not have a large velocity gradient in itself (Figure 11), indicating that it is a self-consistent structure rather than different components overlapping along the line of sight. In Figure 10, F2, F3, F6, F7, F8, F9, and F1 are mainly distributed in almost the same channels from [1.5, 2.5] to [3.5, 4.5], while F4 and F5 are mainly distributed in [3.5, 4.5] and [4.5, 5.5]. In Figure 11, F2, F3, F6, F8, and F1 have similar velocity components from 2 to 4 km s −1 , while F4 and F5 (with velocities mainly from 3.5 to 5 km s −1 ) and F7 and F9 (with velocities mainly from 1.5 to 3 km s −1 ) differ slightly from F1. Considering the spatial distribution of these eight filaments (F2 to F9) and F1 presented in Figure 9, it is reasonable to believe that they are associated with F1. Away from the main filament, F10 (∼2.5 km s −1 ), F11 (∼1.5 km s −1 ), and F12 (∼1.5 km s −1 ) have different velocities from F1. These three filaments may not be associated with F1.
The spatial organization of the identified filaments is more similar to the "ridge-nest" structure (Hill et al. 2011) than to the "hub-filament" (Myers 2009) and "filament-striation" (Palmeirim et al. 2013) structures. The "hub-filament" model requires the central body, the "hub", to be of low aspect ratio, while in our case the filament in the central area is more like a "ridge" with a high aspect ratio. We also have not found faint "striations" roughly perpendicular to the main filament, which are the main features of the "filament-striation" model. In addition, the main filament in the inner area is uni-directional with higher integrated intensity and the sub-filaments in the outer area point in various directions with lower integrated intensity, which are the characteristics of the "ridge-nest" structure. Figure 12 is the integrated intensity map of the C 18 O emission. Compared with the other two isotopologues, C 18 O traces the densest parts of this filamentary cloud. Only the main filament F1 and the sub-filaments F2, F3, and F6 are identified in C 18 O. The C 18 O distributions of these filaments appear discontinuous along their lengths. In addition, we notice an isolated clump with strong emission to the east of F1, which also appears at the brightest part of the 12 CO and 13 CO emission.
Properties of Filaments
We present the CO spectra of all the identified filaments in Figure 13. Each filament's spectra are averaged within the area surrounding it where the intensity of the 13 CO emission is more than three times the rms noise level. The 13 CO emission is stronger in F1, F2, and F6, and decreases to below 2 K in the other sub-filaments. In F10, F11, and F12, the sub-filaments away from F1, the 13 CO emission is only about 1 K. The C 18 O emission is clearly detected in F1 and F2. For F3 and F6, in which we detected C 18 O emission in the integrated intensity map (Figure 12), the C 18 O emission in the spectra is too weak to be identified, mainly because the area of emission is small compared with the area over which the spectra are averaged. For the other sub-filaments, the C 18 O emission is unidentifiable.
In Figure 14, we show the position-velocity plots of the filaments in the 13 CO emission. These position-velocity plots are extracted along the solid white lines marked in Figure 9, with the directions indicated by the white arrows. The main filament and most sub-filaments display continuous structures without significant curvature along the position axes and do not have other evident components along the velocity axes. The plot of F3 presents a twisted velocity structure with a velocity gradient from its southeast (4 km s −1 ) to its northwest (2 km s −1 ) (referring to the directions of filaments in Figure 9). F6 also has a velocity gradient, from 3 to 0.5 km s −1 , occurring at its southern part. Different from the other filaments, F9, F10, and F11 all have bright 13 CO emission at their two ends, with velocity gradients from northwest (3.5 km s −1 ) to southeast (1.5 km s −1 ) for F9, from southeast (2.5 km s −1 ) to northwest (4 km s −1 ) for F10, and from south (1 km s −1 ) to north (3 km s −1 ) for F11, so their 13 CO spectra shown in Figure 13 have two peaks. Consistent with its spectra in Figure 13, F12 has another component along the velocity axis, but this component is too weak in the 13 CO emission to be identified as a new filament.
The C 18 O position-velocity plots of the filaments are presented in Figure 15. These plots are extracted along the dotted arrowed lines shown in Figure 12. Only the four filaments detected in the C 18 O emission are presented here. Representing the densest parts of the filamentary structures, the C 18 O velocity structures of these four filaments all display discontinuous features along the position axes. Similar to its 13 CO velocity structure in Figure 14, F6 shows a velocity gradient of 3.5 to 2 km s −1 from north to south (referring to the directions of filaments in Figure 12), while no evident velocity gradient is revealed in the other three filaments.
In Section 3.1, we have derived the excitation temperature of filamentary structures (right panel in Figure 6). We notice that F2, F3, and the northern part of F1 have relatively high excitation temperatures (T ex > 12 K). The 12 CO gas distribution shows that there is a cleft between F1-F2 and F10-F12, and the main filament F1 has an asymmetric distribution with a diffuse extension at the northern part and sharp cut-off at the cleft. Moreover, the areas with high excitation temperature are located at the edge of the cleft. These morphological features suggest that some external effects, such as UV irradiation, propagation of shock waves, or dynamical interaction between filaments, may be the plausible causes of the enhancement of excitation temperature in F1, F2, and F3. We extract the mean excitation temperature of each filament according to the distribution of filaments in Figure 9, and list the results in Table 1.
We have also derived the H 2 column density maps of filamentary structures (right panels in Figure 7). Overlaid with the positions of identified filaments, we present the new column density maps in Figure 16. Similar to the distributions of filaments in the integrated intensity maps (Figure 9 and Figure 12), the new column density maps in Figure 16 show that the main filament F1 resembles the shape of a "ridge" with high column density in the inner area, while the subfilaments around or away from it form the "nest" with lower column densities and various directions, no matter whether they are shown in 13 CO (left panel) or in C 18 O (right panel).
With the H 2 column density, we can calculate the LTE mass of the filaments by

M LTE = μ m H Σ N H2 δS,

where μ is the mean molecular weight with a value of 2.83, m H is the mass of the hydrogen atom, and the sum runs over the pixels (of area δS each) within the area S of CO emission. We adopt a distance of 400 pc, which will be discussed in Section 4.1, when calculating the area. We also measure the length of each filament and derive the linear masses, which are listed in Table 1.
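A minimal sketch of this mass computation, assuming a column-density map on 30″ pixels and the 400 pc distance adopted in Section 4.1; the constants are standard values rather than numbers taken from the paper.

```python
import numpy as np

M_H = 1.6735575e-24   # hydrogen atom mass, g
MU = 2.83             # mean molecular weight per H2
PC_CM = 3.0857e18     # cm per parsec
MSUN_G = 1.989e33     # g per solar mass

def lte_mass_msun(n_h2_map, pixel_arcsec=30.0, distance_pc=400.0):
    """LTE mass (Msun) from an H2 column density map (cm^-2)."""
    pix_cm = distance_pc * PC_CM * np.radians(pixel_arcsec / 3600.0)
    return MU * M_H * np.nansum(n_h2_map) * pix_cm**2 / MSUN_G
```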
In addition, we calculate the averaged line width of each filament in the following way. First, we calculate the line width of each pixel in the area around each filament where the integrated intensity of the 13 CO emission is greater than three times the rms noise level. Then, we use the integrated intensity of these pixels as the weight to calculate the averaged line width of each area, and thus the line width of each filament. The result is listed in Table 1.
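The intensity-weighted averaging described above can be written compactly; in this sketch, dv_map and w13_map are assumed per-pixel line-width and integrated-intensity maps, and mask selects the pixels above three times the rms noise level.

```python
import numpy as np

def weighted_linewidth(dv_map, w13_map, mask):
    """Integrated-intensity-weighted mean 13CO line width over a filament mask."""
    w = np.where(mask, w13_map, np.nan)
    return np.nansum(dv_map * w) / np.nansum(w)
```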
We further calculate the radial density profiles of the filaments based on the H 2 column density traced by the 13 CO emission (shown in the left panel of Figure 16). Similar to Arzoumanian et al. (2011) and Palmeirim et al. (2013), we first determine the tangential direction of each pixel along the position of each filament shown in Figure 16. Then, for each pixel, we derive one column density profile perpendicular to the tangential direction of the pixel. Finally, we average the profiles of all pixels along each filament and derive the mean radial density profile. The results are presented in Figure 17. The profiles of most filaments have Gaussian-like shapes in their inner parts, while the outer parts of the profiles reflect the distributions of the structures surrounding the filaments. We apply Gaussian fittings to the inner parts of the profiles, which are shown by the dashed red curves in Figure 17.
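The Gaussian fit to the inner part of an averaged radial profile might look like the following sketch; r and profile are assumed arrays of radial offset (pc) and mean column density, and the inner cutoff and initial guesses are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(r, amp, sigma, offset):
    """Gaussian plus constant background."""
    return amp * np.exp(-r**2 / (2.0 * sigma**2)) + offset

# Fit only the inner Gaussian-like portion of the profile.
inner = np.abs(r) < 1.0   # pc; illustrative cutoff
popt, _ = curve_fit(gauss, r[inner], profile[inner],
                    p0=(profile.max(), 0.3, 0.0))
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * popt[1]   # filament width, pc
```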
According to the results of the Gaussian fittings, we also calculate the FWHM width of each filament (see Table 1). We note that the widths of our sample of filaments are much larger than those of the filaments identified in the Herschel Gould Belt survey (e.g., Arzoumanian et al. 2011; Palmeirim et al. 2013; Cox et al. 2016). This is mainly because the far-IR observations of Herschel trace a much denser ISM than the CO gas used as a tracer in our observations. Another reason is that we choose to fit the inner Gaussian-like portions rather than the inner flat portions (Arzoumanian et al. 2011; Palmeirim et al. 2013) of the profiles when calculating the widths of the filaments.
Based on the properties listed in Table 1, we derive the median excitation temperature, line width, length, width, and linear mass of filaments with the values of 9.28 K, 0.85 km s −1 , 7.30 pc, 0.79 pc, and 17.92 M ⊙ pc −1 , respectively.
Distance of Filaments
Distance is an essential parameter when calculating the area of emission and the length of filaments. In our observations, we have derived the Local Standard of Rest (LSR) velocity of the molecular gas. However, as mentioned by Xu et al. (2006), the kinematic distance calculated from LSR velocity is not accurate because of the large peculiar motions of gas material in the spiral arms of the Milky Way. Another method is to use the trigonometric parallax of the maser to estimate the distance, which is adopted by the Bar and Spiral Structure Legacy (BeSSeL) Survey (Reid et al. 2014) to measure the distances of high-mass SFRs in the Milky Way. However, according to their catalog, they have not derived the distance of any maser source in the G150 region.
In this work, the distance to the identified filaments is estimated based on the 3D extinction map of Green et al. (2015). With 5-band grizy Pan-STARRS 1 photometry and 3-band 2MASS JHK s photometry of stars embedded in the dust, they trace the extinction on 7′ scales out to distances of several kiloparsecs, by simultaneously inferring stellar distance, stellar type, and dust reddening along the line of sight. We select six regions with high intensity integrated from −0.5 to 6.5 km s −1 (see Figure 9). The regions are centered at the Galactic coordinates of (150.5, 4.0), (150.3, 3.9), (149.7, 3.5), (151.4, 3.9), (151.1, 4.4), and (149.2, 3.0), respectively, each with the same radius of 0.25°. In Figure 18, we show the median cumulative reddening in each distance modulus (DM) bin of all the selected regions from Green et al. (2015). We notice one rapid increase in the extinction centered at DM ∼ 8 (∼400 pc), which could be due to the dust reddening in the filamentary molecular cloud. Therefore, we take 400 pc as the distance of the filamentary molecular cloud.
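Converting the distance modulus at which the reddening jump occurs into a distance follows directly from DM = 5 log10(d/10 pc); a one-line check, for illustration:

```python
def dm_to_distance_pc(dm):
    """Distance (pc) from a distance modulus."""
    return 10.0 ** (dm / 5.0 + 1.0)

print(dm_to_distance_pc(8.0))   # ~398 pc, rounded to 400 pc in the text
```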
Comparison with DisPerSE Algorithm
The filaments presented in this work are all identified by the visual inspection method (see Section 3.2). To test our identification, we present the result from the Discrete Persistent Structure Extractor (DisPerSE) algorithm for comparison. DisPerSE is a coherent multi-scale approach to identifying all kinds of astrophysical structures, especially filamentary structures, in the large-scale matter distribution of the universe (Sousbie 2011). It originated from the analysis of the cosmic web and its filamentary network, and has been extended to apply to 3D simulated data and to X-ray data observed by the Suzaku satellite. As illustrated by Sousbie (2011), the implementation of this method is based on two theories: Morse theory, which makes it possible to apply topological principles to astrophysical data, and persistence theory, which helps deal with the intrinsic uncertainty and Poisson noise in the data set.
In the practical process, we first define a persistence level with the value of three times the rms noise level, and the algorithm works over the Delaunay tessellation of the 13 CO data to obtain a discrete density field by the Delaunay Tessellation Field Estimator technique (Schaap & van de Weygaert 2000;van de Weygaert & Schaap 2009). Then the algorithm computes discrete Morse complexes from the density field and extracts the filamentary structures. The result is shown in Figure 19. The solid white lines mark the filaments identified by DisPerSE, with the backgrounds of the integrated intensity map of the 13 CO emission (left panel) and the velocity distribution map of the 13 CO emission (right panel), and the dashed red lines are the filaments identified by visual inspection.
In the left panel of Figure 19, the result of DisPerSE is quite consistent with ours, except for the filaments F5 and F8. DisPerSE tends to extract filaments as a set of connected segments, while we prefer to consider the spatial continuity of filaments during the identification. In the right panel, the connections between some segments made by DisPerSE seem too arbitrary, ignoring the velocity gradients between the segments, while the filaments we identified are consistent with the velocity distribution (presented in Section 3.2). We conclude that DisPerSE can track the trails of most filamentary structures, but we have not found that it takes the velocity information into account when connecting the segments to form the shape of a filament.
Gravitational Stability of Filaments
Theoretical studies of self-gravitating cylinders predict that a cylinder has a maximum, critical linear mass above which it will radially collapse into a line (Jackson et al. 2010). As with cylinders, the critical linear mass of a filament determines its gravitational stability. If the mass of a filament is greater than the critical value, the filament is gravitationally unstable and will fragment into clumps along its length, which are expected to evolve into protostars. Indeed, young stellar objects (YSOs) are observed to be distributed along supercritical filaments in, e.g., Taurus (Goldsmith et al. 2008) and Aquila. On the contrary, with a mass lower than the critical value, a filament is gravitationally unbound, may be in an expanding state, and is even expected to disperse within the turbulent crossing time, which is ∼0.3 Myr for a typical subcritical filament (Arzoumanian et al. 2013), unless confined by external pressure (Fischera & Martin 2012).
The critical linear mass can be described as M crit = 2c s ²/G (Ostriker 1964) when the thermal pressure dominates over the turbulent pressure. In this formula, c s is the isothermal sound speed, which is ∼0.2 km s −1 when the gas temperature is 10 K (Arzoumanian et al. 2013), and G is the gravitational constant, given as 1/232 km² s −2 M ⊙ −1 pc (Solomon et al. 1987). Since the mean excitation temperature of our identified filaments is around 10 K, we estimate a critical linear mass of M crit ≈ 18.56 M ⊙ pc −1 . In the case of turbulence dominance, we use the virial linear mass M vir = 2σ tot ²/G (Fiege & Pudritz 2000) as the critical linear mass, where σ tot is the total velocity dispersion, which can be calculated as

σ tot = √(σ NT ² + k B T kin /(μ m H )).

In the calculation, μ is the mean molecular weight of the gas with a value of 2.83, and σ NT is the nonthermal velocity dispersion, obtained by subtracting, in quadrature, the thermal velocity dispersion (σ T,obs ) from the observed velocity dispersion (σ obs ), i.e., σ NT = √(σ obs ² − σ T,obs ²). We have derived the 13 CO line width (Δv) of the filaments, so the observed velocity dispersion is σ obs = Δv/√(8 ln 2). Furthermore, the thermal part of the observed velocity dispersion is σ T,obs = √(k B T kin /(μ obs m H )), where T kin is the kinetic temperature, taken to be equal to the excitation temperature, and μ obs is the molecular weight of the observed molecule, which is 29 for 13 CO.
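Putting these definitions together, a small Python sketch of the velocity-dispersion and virial-mass calculation; the constants are standard values, and the inputs (a 13 CO FWHM line width in km s −1 and an excitation temperature in K) are assumed.

```python
import numpy as np

K_B = 1.380649e-16    # Boltzmann constant, erg/K
M_H = 1.6735575e-24   # hydrogen atom mass, g
G = 1.0 / 232.0       # km^2 s^-2 Msun^-1 pc (Solomon et al. 1987)

def sigma_tot(dv_13co, t_ex, mu_obs=29.0, mu=2.83):
    """Total velocity dispersion (km/s) from the 13CO FWHM line width."""
    sigma_obs = dv_13co / np.sqrt(8.0 * np.log(2.0))
    sigma_t = np.sqrt(K_B * t_ex / (mu_obs * M_H)) / 1.0e5   # thermal, 13CO
    sigma_nt = np.sqrt(sigma_obs**2 - sigma_t**2)            # nonthermal part
    c_s = np.sqrt(K_B * t_ex / (mu * M_H)) / 1.0e5           # mean-gas sound speed
    return np.sqrt(sigma_nt**2 + c_s**2)

def virial_linear_mass(sig_tot):
    """Virial linear mass (Msun/pc), Fiege & Pudritz (2000)."""
    return 2.0 * sig_tot**2 / G

# Virial linear mass for the median line width and excitation temperature.
print(virial_linear_mass(sigma_tot(0.85, 9.3)))
```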
The left panel of Figure 20 shows the relationship between the total velocity dispersion (σ tot ) and the LTE linear mass (M/l) of our identified filaments. Only four filaments (F1, F2, F3, and F6) are identified as thermally supercritical, with linear masses significantly greater than the critical linear mass (M crit ≈ 18.56 M ⊙ pc −1 ), while most of the remaining filaments are thermally subcritical. We notice that the linear masses of F5 (17.92 M ⊙ pc −1 ) and F9 (18.80 M ⊙ pc −1 ) are close to the critical mass; considering the distance uncertainties mentioned in Section 4.1, it is hard to say whether these two filaments are thermally supercritical or subcritical. The four thermally supercritical filaments are exactly the ones detected in the C 18 O emission (see Section 3.2), indicating that they are much denser than the other filaments and more likely to be supercritical. All of the filaments, whether thermally subcritical or supercritical, have velocity dispersions more than 1.5 times the isothermal sound speed (c s ≈ 0.2 km s −1 ), indicating that turbulent motions may play an important role in the stability of the filaments. Neither the subcritical nor the supercritical filaments show a clear relationship between velocity dispersion and linear mass; the mean velocity dispersion of the subcritical filaments is ∼0.38 km s −1 , while that of the supercritical filaments is ∼0.42 km s −1 . Arzoumanian et al. (2013) suggested that thermally supercritical filaments tend to be virialized and gravitationally bound and, conversely, that thermally subcritical filaments are not virialized and are gravitationally unbound. The relationship between the virial parameter α vir = M vir /(M/l) and the linear mass (M/l) of the filaments is shown in the right panel of Figure 20. All the thermally subcritical filaments are far from the virialized state, with high virial parameters (α vir ≫ 2), and thus tend to be gravitationally unbound. The supercritical filaments F1 and F2 have virial parameters smaller than two, indicating that they are virialized and gravitationally bound, while F3 and F6 have virial parameters slightly larger than two, 2.24 and 2.16, respectively, close to the virialized state. Our result is quite similar to that of Arzoumanian et al. (2013). The relationship between the virial parameter and the linear mass of the subcritical filaments can be fitted as α vir ∝ (M/l) −1.30±0.5 , with an index comparable, within the uncertainties, to the value (−0.95 ± 0.12) in Arzoumanian et al. (2013).
Thus, in our sample, all of the thermally subcritical filaments are gravitationally unbound, and thus could be in an expanding state or a stable state when they are confined by external pressure. For the thermally supercritical filaments, two of them are gravitationally bound and expected to fragment into clumps that would evolve into protostars in the future, while the other two filaments are close to being virialized. The large turbulent motions (σ tot ≫ c s ) found in our sample of filaments may also support the view that filaments are formed by turbulent compression of interstellar gas (Padoan et al. 2001;Arzoumanian et al. 2013).
Clumps and YSOs along Filaments
We use the GaussClumps algorithm (Stutzki & Guesten 1990) in the CUPID package of the Starlink software to identify the clumps within the filamentary molecular cloud. The data we use are the 13 CO FITS cubes with a velocity range from −0.5 to 6.5 km s −1 . We set the parameter Thresh, which determines the minimum peak amplitude of clumps fitted by the algorithm, to three times the rms noise level, and the other parameters to their default values. The morphologies of the clumps are checked by visual inspection after running the GaussClumps algorithm. As a result, 146 clumps are identified in the 13 CO data. We measure the radius, line width, excitation temperature, LTE mass, and virial mass of each clump, which are listed in Table 2, with median values of 7.26×10 −2 pc, 0.35 km s −1 , 10.04 K, 0.49 M ⊙ , and 1.80 M ⊙ , respectively. We compare the virial mass (M Vir ) and LTE mass (M LTE ) of the clumps in Figure 21 and find that about 18 clumps (∼12%) are in virial equilibrium, with virial parameters smaller than two. The mass relationship can be fitted with a power law of M Vir ∝ M LTE 0.96±0.08 . This power-law index is slightly larger than the index of 0.75 obtained for CO clumps in the North American and Pelican Nebulae by Zhang et al. (2014), and close to the index of 0.97 obtained for CO clumps in the Gemini cloud by Li et al. (2015). André et al. (2010) and Men'shchikov et al. (2010) pointed out that there is a close correspondence between the spatial distribution of dense cores and the filamentary network, and we find the same correspondence in our sample of clumps and filaments. Figure 22 illustrates the positions of the identified clumps. Of the 146 CO clumps, 113 (∼77%) are located within the area around the filaments where the integrated intensity of the 13 CO emission is greater than three times the rms noise level. A similar result was presented by Henning et al. (2010), who found that about 75% of the detected pre- and protostellar cores are located within the filamentary IRDC G011.11−0.12. Könyves et al. (2015) also found that about 71%–78% of prestellar cores lie within a 0.1 pc width (Arzoumanian et al. 2011) of the filament footprints traced by the DisPerSE algorithm in the Aquila cloud; they further noticed that prestellar cores are preferentially found in thermally supercritical filaments, with a percentage of 66%–75%. In our sample, the percentage is ∼44% for the clumps associated with the thermally supercritical filaments (F1, F2, F3, and F6). Of the virialized CO clumps, 10 (56%) are found to be associated with the supercritical filaments F1, F2, and F6, and 7 (40%) are associated with the virialized filaments (F1 and F2). We also notice a small group of virialized clumps located in the subcritical filaments F7 and F8.
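The power-law fit between virial and LTE masses is a straight-line fit in log-log space; a sketch, assuming arrays m_lte and m_vir of clump masses in solar units:

```python
import numpy as np

# slope recovers the power-law index (~0.96 reported in the text)
slope, intercept = np.polyfit(np.log10(m_lte), np.log10(m_vir), 1)

alpha_vir = m_vir / m_lte
n_virialized = np.sum(alpha_vir < 2.0)   # clumps in virial equilibrium
```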
With the IR data from the 2MASS and WISE surveys, we also investigate YSO candidates in the area of the filamentary structures. YSOs can be classified into disk-bearing and diskless YSOs according to whether circumstellar disks exist or not. The IR emission excess created by dusty circumstellar disks makes the IR colors of disk-bearing YSOs different from those of diskless YSOs; diskless YSOs cannot be identified on the basis of their IR colors alone. So, in this work, we only investigate the disk-bearing YSOs, namely Class I and Class II objects, according to the YSO identification and classification scheme provided by Koenig & Leisawitz (2014). Following the scheme, we first remove star-forming galaxies and broad-line active galactic nuclei (AGNs) as extragalactic contaminants according to their locations in the WISE W 1−W 2 versus W 2−W 3 and W 1 versus W 1−W 3 color-color diagrams (see the detailed description in Koenig & Leisawitz 2014). Then we select the YSO candidates based on their locations in the WISE W 1−W 2 versus W 2−W 3 color-color diagram, which is shown in the left panel of Figure 23. With the combination of the 2MASS H and K s bands, we use the H−K s versus W 1−W 2 color-color diagram, shown in the right panel of Figure 23, to search for additional YSO candidates among previously unclassified objects. The IR photometric magnitudes and classifications of all the identified YSO candidates are listed in Table 3.
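Schematically, the color-based selection reduces to simple cuts in color-color space. The sketch below uses illustrative placeholder thresholds, not the actual Koenig & Leisawitz (2014) criteria, which also involve magnitude-dependent cuts and contaminant removal.

```python
import numpy as np

def classify_ysos(w1, w2, w3):
    """Toy WISE color-color YSO selection; cut values are placeholders only."""
    c12, c23 = w1 - w2, w2 - w3
    class_i = (c12 > 1.0) & (c23 > 2.0)               # reddest, most embedded
    class_ii = ~class_i & (c12 > 0.25) & (c23 > 1.0)  # disk-bearing, less red
    return class_i, class_ii
```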
In Figure 24, we present the spatial distribution of the identified YSOs overlaid on the integrated intensity maps of 13 CO. Most of the identified YSOs are not associated with the filaments. A group of YSOs is located in the southeast, far away from the filamentary structures. Another group of YSOs is located near the filaments F7 and F8 in the southwest. Two Class II objects are found to be located in the northern area of F2 and the southern area of F6, respectively. One Class I object is found in the central area of main filament F1.
In general, most of the virialized clumps and the associated YSO candidates are found in the supercritical filaments, especially the virialized filaments (F1 and F2), indicating potential star-forming activities in these filaments. For the subcritical filaments F7 and F8, the small groups of virialized clumps and YSO candidates found in them may also indicate star-forming activities in this area, while for the rest of the subcritical filaments, almost no virialized clumps or associated YSO candidates are found, suggesting that star-forming activities may not yet have occurred in these filaments.
SUMMARY
In this work, we present large-field mapping observations of the G150 region covering an area within 147.75° ≤ l ≤ 152° and 1.5° ≤ b ≤ 5.25° in the J = 1 − 0 emission line of CO isotopologues ( 12 CO, 13 CO, and C 18 O), using the PMO 13.7 m telescope. The CO gas spatial distribution and averaged spectra of the G150 region reveal three molecular gas layers with velocity ranges of −10.5 to −5 km s −1 , −5 to −1.5 km s −1 , and −0.5 to 6.5 km s −1 , respectively. We focus on the third component (−0.5 to 6.5 km s −1 ) in this work, which has significant 13 CO and C 18 O emission and shows conspicuous filamentary structures. The main results are listed below.
We identify 12 filaments (F1 to F12) in the 13 CO image. The main filament F1 and the sub-filaments F2 to F9 in its surrounding area are associated with each other, while the other sub-filaments F10 to F12 in the outer area are distributed approximately parallel to the main filament, together forming the so-called "ridge-nest" structure. The main filament in the "ridge" area is uni-directional with higher column density, and the sub-filaments in the "nest" area point in various directions with lower column density. We also identify four filaments (F1, F2, F3, and F6) in the C 18 O image.
We extract the velocity structure along the length of each filament in both 13 CO and C 18 O data. The main filament and most sub-filaments have continuous velocity structures with slight velocity gradients in 13 CO, and velocity structures in C 18 O display discontinuous features, only concentrating on the densest parts. We derive the radial density profiles of the filaments and find that most profiles present Gaussian-like shapes in the inner parts. Based on the CO observations, the measured median excitation temperature, line width, length, width, and linear mass of the filaments are 9.28 K, 0.85 km s −1 , 7.30 pc, 0.79 pc, and 17.92 M ⊙ pc −1 , respectively, assuming a distance of 400 pc.
After comparing the LTE linear masses with the critical linear mass (M crit ≈ 18.56 M ⊙ pc −1 ), only four filaments (F1, F2, F3, and F6), which are also the ones detected in the C 18 O image, are identified as thermally supercritical. We find that F1 and F2 have virial parameters smaller than two, suggesting that they are in virial equilibrium and may be undergoing fragmentation, while F3 and F6 are close to the virialized state. The virial parameters of the thermally subcritical filaments are much higher, indicating that they are not virialized and tend to be gravitationally unbound. We also find that all of the identified filaments have large turbulent motions, with velocity dispersions much greater than the isothermal sound speed (c s ≈ 0.2 km s −1 ).
We find in total 146 clumps in the 13 CO data. Approximately 77% of the clumps are associated with the filaments, and 56% of the virialized clumps are associated with the thermally supercritical filaments. Based on the complementary IR data, one Class I YSO and two Class II YSOs are found to be located in the supercritical filaments F1, F2, and F6, respectively. The existence of virialized clumps and associated YSO candidates suggests that the thermally supercritical filaments, especially the virialized filaments (F1 and F2), may already have star-forming activities.
We are grateful to all the members of the Milky Way Imaging Scroll Painting CO line survey group, especially the staff of Qinghai Radio Station of PMO at Delingha, for the support during the observations. We appreciate the anonymous referee for valuable comments and suggestions that helped to improve this paper. This work was supported by the National Natural Science Foundation of China (grants Nos. 11473069, 11233007, 11503086, and U1431231). This preprint was prepared with the AAS LaTeX macros v5.2.

Figure caption (three-color image).-The blue area represents the integrated intensity of the 12 CO emission with a velocity range of −0.5 to 6.5 km s −1 , the green area represents the integrated intensity of the 13 CO emission with a velocity range of 0 to 6.5 km s −1 , and the red area represents the integrated intensity of the C 18 O emission with a velocity range of 1.3 to 4.3 km s −1 .

Figure caption fragment.-The name of each filament is marked beside it.

Fig. 13.-CO spectra of the identified filaments. The blue spectrum shows the 12 CO emission, the orange spectrum shows the 13 CO emission multiplied by a factor of 1.5, and the red spectrum shows the C 18 O emission multiplied by a factor of three.

Fig. 14.-13 CO position-velocity plots of the identified filaments extracted along the solid arrowed lines shown in Figure 9, with widths of 15′ for F1, 10′ for F2, F3, and F6, and 8′ for the rest. The contours are overlaid from 10σ with an interval of 10σ for F1, F2, F3, and F6, and from 5σ with an interval of 5σ for the rest ("σ" is the rms noise level in each plot).

Fig. 15.-C 18 O position-velocity plots of the identified filaments extracted along the dotted arrowed lines shown in Figure 12, with widths of 12.5′ for F1 and 8′ for F2, F3, and F6. The contours are overlaid from 5σ with an interval of 2σ for F1 and F2, and from 3σ with an interval of 1σ for F3 and F6 ("σ" is the rms noise level in each plot).

Table 1 note.-Properties of the identified filaments, including excitation temperature (Column 2), line width in the 13 CO emission (Column 3), LTE mass traced by the 13 CO emission (Column 4), length in the 13 CO emission (Column 5), linear mass traced by the 13 CO emission (Column 6), and LTE mass traced by the C 18 O emission (Column 7).

Table 2 note.-Properties of the identified clumps. Column 1 is the clump name, which contains the Galactic longitude and latitude. Columns 2 to 5 are the LSR velocity, clump radius, excitation temperature, and line width. Columns 6 to 8 are the LTE mass, virial mass, and virial parameter.
"year": 2017,
"sha1": "77848d2f0fe36861d1f556aa0e5496f2db1061d0",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/aa6443/pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "77848d2f0fe36861d1f556aa0e5496f2db1061d0",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
220385027 | pes2o/s2orc | v3-fos-license | Myeloid sarcoma of the nasal cavity in a 15-month-old child
Abstract Introduction: Myeloid sarcoma (MS) is a rare tumor mass. It may occur at any extramedullary anatomic site but is uncommon in the sinonasal region. MS commonly presents concurrently with acute myeloid leukemia (AML), but it may predate AML by several months or years, in which case it is termed isolated MS. Patient concerns: We report a case of a 15-month-old child who presented with mouth breathing, bilateral rhinorrhea, palpebral edema, and proptosis. Routine blood tests were normal for the first few months. A computed tomography scan revealed a neoplasm in the nasal cavity. Diagnosis: The patient was definitively diagnosed with isolated MS of the nasal cavity through immunohistochemistry combined with clinical features and radiological investigations, and the MS subsequently progressed to AML, which was confirmed by a hematologist. Interventions: Endoscopic sinus surgery was performed to acquire specimens. After diagnosis, the patient was promptly treated with systemic chemotherapy. Outcomes: All symptoms gradually subsided, and the nasal cavity mass was no longer visible. No relapse occurred during follow-up. Conclusion: Sinonasal MS may be misdiagnosed and should be considered when symptoms persist and worsen. Prompt clinical examination is essential in cases of suspected MS. The diagnosis of MS depends on immunohistochemical investigation combined with clinical features and radiological findings. Early diagnosis and systemic chemotherapy are vital for achieving the best prognosis.
Introduction
Myeloid sarcoma (MS) is a tumor mass consisting of myeloid blasts at an anatomic site other than the bone marrow. [1] MS commonly originates in the lymph nodes, skin, and gastrointestinal tract but rarely in the sinonasal region. [2,3] MS may occur in isolation or, more commonly, in patients with acute myeloid leukemia (AML), myeloproliferative neoplasm, myelodysplastic syndrome, or myeloproliferative neoplasm/myelodysplastic syndrome. [4] The sinonasal area may be a predilection site of isolated MS. [5] Although MS was first described in 1853, the disease remains rare, which limits the literature defining its clinical features, diagnosis, treatment, and outcomes. Thus, case reports and retrospective series are essential for the study and management of MS. Here, we describe a rare case of MS of the nasal cavity in a 15-month-old child. Informed consent was obtained from the patient's guardian for publication of this case report.
Case report
A 15-month-old male child presented to the local ear, nose, and throat outpatient department with a 2-month history of bilateral rhinorrhea and mouth breathing and was diagnosed with rhinitis. After 3 months of drug therapy at the local hospital, the patient's symptoms did not improve but instead persisted and worsened. During that time, routine blood tests were normal. As a refractory case, the patient was then admitted to our hospital.
In addition to rhinorrhea and mouth breathing, the patient had bilateral palpebral edema and proptosis on admission. Nasal endoscopy revealed neoplasms in both nasal cavities. A computed tomography (CT) scan showed that the entire nasal cavity was occupied by a tissue mass (Fig. 1). The hemogram showed an increased leucocyte count (11.1 × 10⁹/L), including an elevated myeloblast percentage (24.0%), and a decreased neutrophil count
The postoperative hemogram showed remarkable leukocytosis (36.0 × 10⁹/L) as well as an elevated myeloblast percentage (57%). The patient was promptly transferred to the hematology department, where AML was confirmed by a hematologist; by then, 5 months had passed since the onset of MS. A week after the surgery, the patient received a conventional 5-phase chemotherapy regimen to treat the MS and AML. The first and second phases were induction chemotherapy consisting of daunorubicin for 3 days, cytarabine for 7 days, and etoposide for 5 days. The remaining phases were intensification chemotherapy: intensification I, cytarabine for 3 days and mitoxantrone for 2 days; intensification II, cytarabine for 3 days and homoharringtonine for 7 days; and intensification III, cytarabine for 3 days.
Three weeks after the initiation of chemotherapy, the symptoms of mouth breathing, rhinorrhea, palpebral edema, and proptosis gradually subsided, and a CT scan showed that the nasal cavity mass was no longer visible (Fig. 3). Five months later, at the completion of chemotherapy, the patient eventually achieved complete remission. No relapse occurred during 24 months of follow-up.
Discussion
Although MS represents an extramedullary proliferation and may occur at nearly any anatomic site, MS involvement in the sinonasal area is rare. Suzuki et al [5] summarized 10 sinonasal MS cases that had been reported between 2000 and 2018, including 9 adults and 1 child. Holsinger et al [6] found 2 pediatric sinonasal MS cases recorded between 1955 and 1999 at university-based, tertiary care referral centers in Houston.
MS commonly presents concurrently with AML, but it may predate AML by several months or years, in which case it is termed isolated MS, meaning that it can appear before clinical signs of hematological disease. [4,7] Klco et al [3] held that, regardless of whether the blood film or bone marrow biopsy is abnormal, the existence of MS is sufficient to establish a clinical diagnosis of AML, since bone marrow disease will develop in nearly all patients who originally present with isolated MS, with a mean interval of 10 months. Based on the definition of isolated MS, we infer that this patient initially presented with isolated MS and that AML did not occur until 5 months later. MS is an uncommon tumor mass affecting 2.9% to 9.1% of patients with AML, and isolated MS is a very rare entity with an incidence of 2 cases per million adults. [4] However, the sinonasal area may be a predilection site of isolated MS, accounting for 20% of sinonasal MS cases. [5] MS has a wide age distribution, with a predilection for the pediatric population. [3] Nevertheless, some retrospective series suggest that the main age range of onset of MS is connected to some extent with the tumor site. [8,9] Sinonasal MS is almost exclusively seen in adults, whereas orbital MS is more common in children. [5,10] Interestingly, their anatomical locations are relatively close. Accordingly, sinonasal MS in a child is exceedingly rare. To our knowledge, this report describes the youngest case of sinonasal MS.
Due to its rarity, the diagnosis of MS, especially isolated MS, can be highly challenging. A diagnosis of MS is based on a combination of clinical features, radiological investigations, and immunohistochemistry. [4] The clinical presentation depends on the size and localization of the tumor. Sinonasal MS generally causes non-specific symptoms such as nasal obstruction and headache, as well as tumor mass effects including facial swelling, proptosis, and visual disturbance. [5] Initially, our patient was misdiagnosed with rhinitis on account of limited non-specific symptoms and a normal blood film; then, because of aggravated nasal symptoms and additional palpebral edema and proptosis raising suspicion of a malignant neoplasm, nasal endoscopy and a CT scan were performed. CT scans enable evaluation of the size and location of the tumor and can distinguish a tumor from other lesions, such as hematomas or abscesses, but the definite diagnosis of MS depends on tumor biopsy and immunohistochemical staining. [11] For isolated MS, the blood film or bone marrow biopsy is normal, which, however, also results in a misdiagnosis rate of 25% to 47% for isolated MS. [4] Therefore, the diagnosis of isolated MS requires a high index of suspicion as well as significant clinical acumen. [4]

The treatment of MS includes systemic chemotherapy; local therapy involving surgery, radiotherapy, or a combination of both; and hematopoietic stem cell transplantation (HSCT). [11] Local therapy alone appears neither to delay the transformation from MS to AML nor to improve the prognosis, and is therefore not recommended. [4] Two patients with sinonasal MS who received local therapy alone died within a short time. [12,13] The role of HSCT has been highlighted since it improves the overall survival of patients with MS. [11] Suzuki et al reviewed 2 sinonasal MS patients who underwent HSCT, and both achieved satisfactory short-term survival. [5] Systemic chemotherapy should be commenced as soon as possible in all cases, including isolated MS, which will eventually transform to AML. [4] Chemotherapy can increase overall survival and delay or even prevent progression to AML in isolated MS. [4] Because MS is a systemic disease that responds to systemic treatment, chemotherapy can relieve the symptoms caused by the extramedullary lesion. Therefore, early diagnosis and chemotherapy are crucial to the prognosis of MS, especially isolated MS. Through chemotherapy, our patient achieved clinical and hematologic remission and had the longest survival among reported patients with sinonasal MS. [5]

Originally presenting with isolated MS and younger age may be positive prognostic factors. Several series have demonstrated that outcomes in children with isolated MS are better than in those with MS concurrent with or following AML. [4,11] Furthermore, Avni et al [14] showed that age less than 47.5 years was associated with a lower risk of death.
In conclusion, we report the youngest case of sinonasal MS, who achieved the longest survival among sinonasal MS patients. The patient initially presented with isolated MS, which did not progress to AML until 5 months later. MS, especially isolated MS, is frequently misdiagnosed. Prompt clinical examination is essential in cases of suspected MS. The definitive diagnosis of MS depends on immunohistochemical investigation combined with clinical features and radiological findings. Early diagnosis and systemic chemotherapy are vital for achieving the best prognosis. | 2020-07-02T10:09:01.933Z | 2020-07-02T00:00:00.000 | {
"year": 2020,
"sha1": "6d5fabe2c3479e1647747c4b149caf9b538e7952",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/md.0000000000021119",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e5e77f87b8ce079634c5eae61b120860571b4248",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255205403 | pes2o/s2orc | v3-fos-license | E as in Enigma: The Mysterious Role of the Voltage-Dependent Anion Channel Glutamate E73
The voltage-dependent anion channel (VDAC) is the main passageway for ions and metabolites over the outer mitochondrial membrane. It was associated with many physiological processes, including apoptosis and modulation of intracellular Ca2+ signaling. The protein is formed by a barrel of 19 beta-sheets with an N-terminal helix lining the inner pore. Despite its large diameter, the channel can change its selectivity for ions and metabolites based on its open state to regulate transport into and out of mitochondria. VDAC was shown to be regulated by a variety of cellular factors and molecular partners including proteins, lipids and ions. Although the physiological importance of many of these modulatory effects are well described, the binding sites for molecular partners are still largely unknown. The highly symmetrical and sleek structure of the channel makes predictions of functional moieties difficult. However, one residue repeatedly sticks out when reviewing VDAC literature. A glutamate at position 73 (E73) located on the outside of the channel facing the hydrophobic membrane environment was repeatedly proposed to be involved in channel regulation on multiple levels. Here, we review the distinct hypothesized roles of E73 and summarize the open questions around this mysterious residue.
Introduction
The voltage-dependent anion channel (VDAC) is a 32 kDa pore-forming protein in the outer mitochondrial membrane, composed of 283 to 294 amino acids depending on the isoform. It is the main passageway across the outer mitochondrial membrane and conducts ions and small metabolites into and out of mitochondria. Higher organisms express 3 isoforms, VDAC1-3, whose distinct roles are still under debate. While VDAC1, the best-researched isoform, has been described as a major regulator of apoptosis and of cellular energetics through ATP conduction, the contributions of VDAC2 and 3 to these processes are less clear. Both VDAC1 and VDAC2 have been described as regulators of mitochondrial Ca 2+ that interact with the Ca 2+ release channels of other cellular organelles, such as lysosomes [1] or the endoplasmic/sarcoplasmic reticulum [2,3]. VDAC3 was recently reported to have a major role in cellular ROS signaling [4].
Although high-resolution structures are so far only available for VDAC1 [5] and VDAC2 [6], it can be assumed that all three isoforms share high structural similarity. VDAC proteins are composed of 19 β-sheets connected by small linkers and arranged in an antiparallel orientation to form a large barrel in the membrane. The inside of the barrel is lined by an N-terminal α-helix (Figure 1). Because of their large pore diameter of approximately 1.5-2 nm, VDACs are often described as freely permeable to ions and small metabolites. However, accumulating evidence contradicts this idea and describes multiple modes of regulation of VDAC permeability [7,8]. These include subcellular localization, interaction with partner proteins and, most importantly, the gating of the channel.

Figure 1. Glutamate 73 is shown in red. A six-β-sheet region around E73 (purple) was suggested to show higher protein dynamics compared with the rest of the barrel and to interact with the opposite helix to control ion permeation. A glycine stretch previously suggested to mediate protein interaction is shown in magenta, and lysines coordinating ATP permeation are shown in green.
When inserted into planar lipid bilayers, VDAC resides in an anion-selective, 4 nS high-conductance state at potentials around 0 mV, shows vigorous gating behavior at test potentials around ±20-40 mV, and occupies several more cation-selective low-conductance states (around 2 nS) at potentials larger than +50 mV or lower than −50 mV [9][10][11]. Although it remains unclear whether the membrane potential across the outer mitochondrial membrane is large enough to serve as the physiological trigger for channel gating, this gating behavior has often been linked to channel activity and regulation.
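To make the voltage dependence above concrete, the following minimal sketch evaluates a symmetric two-state Boltzmann model that reproduces the qualitative pattern (≈4 nS near 0 mV, relaxation toward ≈2 nS subconductance states at large |V|); the half-gating voltage and slope are illustrative placeholders, not values fitted to any dataset.

```python
import numpy as np

def vdac_conductance(v_mv, g_open=4.0, g_closed=2.0, v_half=30.0, slope=5.0):
    """Toy bell-shaped G(V): fully open near 0 mV, gated toward a subconductance
    state once |V| exceeds roughly v_half (all conductances in nS)."""
    p_open = 1.0 / (1.0 + np.exp((np.abs(v_mv) - v_half) / slope))
    return g_closed + (g_open - g_closed) * p_open

for v in np.linspace(-60, 60, 7):
    print(f"{v:+6.1f} mV -> {vdac_conductance(v):4.2f} nS")
```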
Gating of the channel was shown to be influenced by various factors, such as the lipidic environment [12][13][14], PKA phosphorylation [15], the local Ca 2+ concentration [16] or the presence of VDAC binding partners like hexokinase [17,18] or members of the Bcl-2 family [19]. Physiological experiments, mainly on cultured or freshly isolated cells, further indicate a substantial regulation of the channel, which is highly relevant for its role in apoptosis or Ca 2+ signaling.
Despite several reports about regulation of the channel through the aforementioned factors, the molecular underpinnings still remain largely elusive, especially because the highly symmetrical and very sleek structure of VDAC makes identification of functionally relevant moieties in the channel difficult. Most reports focus on the N-terminal domain, including the helix as a relevant structure for channel gating and interaction with other proteins. As such, a lysine-rich motif in the helix was suggested to guide ATP permeation [20], while a glycine rich element in the distal part of the helix was shown to mediate binding to partner proteins [21] (Figure 1).
On the other hand, motifs in the barrel itself are not well characterized, with the exception of one prominent amino acid, a glutamate at position 73 (E73). This glutamate was first described in 1993 as an important residue for the binding of hexokinase [22]. In further publications, E73 was suggested to be involved in Ca 2+ binding and permeation [23], lipid binding [24,25], and channel dimerization [26]. However, looking at the molecular structure of VDAC, E73 is largely buried inside the membrane, making functional models very difficult. In the following, we will discuss proposed roles of E73 in channel function and regulation. Glutamate 73 is shown in red. A six beta sheet region around E73 (purple) was suggested to show higher protein dynamics compared to the rest of the barrel and to interact with the opposite helix to control ion permeation. A glycine stretch previously suggested to mediate protein interaction is shown in magenta and lysines coordinating ATP permeation are shown in green.
Intrinsic Channel Function
Looking at the high-resolution structure of VDAC, E73 is located in the fourth β-sheet of the transmembrane barrel, with its side chain on the outside of the barrel, pointing into the middle plane of the hydrophobic membrane environment (Figure 1). The area around E73 lies opposite the N-terminal helix, which was proposed to enhance barrel stability through interaction with the barrel. Consequently, the area around E73 is believed to be more flexible due to the lack of the stabilizing effect of the helix, which is in line with the B-factors for this area obtained from the crystal structure [27].
This flexibility was proposed to be, at least in part, also induced by E73: molecular dynamics simulations revealed that the charge of E73 enhances the protein dynamics of the area comprising the first four to six β-sheets of the barrel (with E73 located in the fourth β-sheet) [28]. Rendering E73 uncharged by protonation or by mutation to glutamine (E73Q) or valine (E73V) dramatically reduced barrel dynamics. From these experiments, a gating mechanism involving large deformations of the barrel from a round towards a more elliptical state was proposed, with the area around E73 as a critical element of this movement (Table 1). Indeed, NMR spectroscopy of the VDAC1 E73V mutant revealed a more elliptical barrel compared with the available wild-type structures [29,30]. This model is further supported by the idea that negative amino acids in the β-sheets around E73 interact with the helix to determine ion selectivity [20,31,32].
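The round-to-elliptical transition invoked here can be quantified by a simple in-plane axis ratio of the barrel backbone. The sketch below computes that ratio from the covariance of Cα positions projected onto the membrane plane; the coordinates are a hypothetical 19-strand barrel generated in place of a real structure or MD snapshot.

```python
import numpy as np

def barrel_axis_ratio(xy):
    """Axis ratio (>= 1) of the best-fit ellipse from the 2x2 covariance of
    in-plane coordinates; 1.0 corresponds to a perfectly round barrel."""
    centered = xy - xy.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))  # ascending eigenvalues
    return float(np.sqrt(eigvals[1] / eigvals[0]))

# Hypothetical barrel: 19 strand positions on an ellipse with ~10% elongation.
theta = np.linspace(0.0, 2.0 * np.pi, 19, endpoint=False)
xy = np.column_stack([13.2 * np.cos(theta), 12.0 * np.sin(theta)])  # Angstrom
print(barrel_axis_ratio(xy))  # ~1.10 for this 10% elongated barrel
```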
However, in later experiments performed in planar lipid bilayers, the exchange of E73 for glutamine or alanine revealed no changes in the voltage-induced gating of recombinant VDAC1 [13], contradicting the hypothesis of E73 being a critical element of channel gating (Table 1). It is still questionable, though, whether the channel is present in its apo-form under physiological conditions, or whether an association with molecular partners, which is absent in planar lipid bilayers, is required to unlock the E73 effect in vivo. In this regard, the molecular dynamics experiments also proposed that, in addition to the enhanced protein dynamics, the charge at E73 would induce a thinning of the membrane in this area [28]. This was interpreted as a sign of regulation of the channel through interaction with other molecules such as ions, lipids or proteins. Indeed, another series of simulation experiments [33] showed a distortion of the channel in response to voltage by the rearrangement of charged residues in the β-sheets around E73 that was attributed to membrane thinning. In this study, sodium ions were repeatedly observed near E73 in the open state, indicative of a higher accessibility of this residue to charged molecules. It was postulated that the accessibility of E73 to molecular binding partners, induced by thinning of the membrane, would stabilize the channel in the open state.
One potential binding partner could be the channel itself; several lines of evidence indicate that dimerization of VDAC serves as a mode of channel regulation in apoptosis and metabolic control [34,35]. Bergdoll et al. reported a critical role for an interaction of E73 with S43 (β-sheet 2) in channel dimerization, which is dependent on its protonation state [26]. The formation of a VDAC dimer was reported at low pH, which was largely abolished by mutation of E73 to alanine or glutamine.
Taken together, it appears unlikely that E73 can influence channel gating in its apo-state (Table 1); rather, dimerization or interaction with other protein partners is required to unlock E73 effects. A higher accessibility of E73 to these binding partners is achieved by the greater flexibility of the area around E73. In the following, we review the most important findings about the proposed E73 ligands: calcium, lipids and proteins.
Calcium
One of the most researched interaction partners of VDAC is Ca 2+ . Mitochondria can take up vast amounts of Ca 2+ and the uptake and release of Ca 2+ from mitochondria is a highly regulated process. Disturbances in mitochondrial Ca 2+ uptake are associated with human diseases such as cardiac, metabolic and neurological diseases [36,37], as well as cancer [38,39], and consequently mitochondrial Ca 2+ uptake was proposed to be a promising pharmacological target structure [40,41].
While in the inner mitochondrial membrane, the mitochondrial calcium uniporter complex (MCUC) is the main uptake route for Ca 2+ , VDAC is the main passageway for Ca 2+ over the OMM, and several reports suggest a regulation of mitochondrial Ca 2+ uptake at this stage [8,42]. Specifically, a critical role for VDAC and E73 was reported for local Ca 2+ transfer from cellular organelles into mitochondria. A tight contact between lysosomes and mitochondria exists and disruption of this cross-talk was shown to be associated with lysosomal storage diseases. The direct coupling is essential for the transfer of Ca 2+ from lysosomes into mitochondria via a TRPML1-VDAC1 axis [1]. Interestingly, this direct shuttling was critically dependent on the presence of E73 in VDAC1, and was completely abolished when E73 was mutated to glutamine.
Similarly, in cardiomyocytes, mitochondria form close contact sites with the sarcoplasmic reticulum, the main cellular Ca 2+ store. This network involves a very close interaction between the SR Ca 2+ release channel, the ryanodine receptor (RyR) and VDAC2 [2,43] and facilitates a local mitochondrial Ca 2+ uptake near the SR Ca 2+ release sites. This mechanism was shown to modulate intracellular Ca 2+ signals during excitation-contraction coupling, and to buffer erratic Ca 2+ release during diastole to prevent arrhythmogenesis [44,45]. Interestingly, E73 was critically required to mediate Ca 2+ transfer between the SR and mitochondria in cardiomyocytes and as such to be protective against arrhythmia [46]. The expression of wild-type VDAC2 but not a mutant in which E73 was mutated to glutamine mediated SR-mitochondria Ca 2+ transfer in cardiomyocytes, and the overexpression of wild-type VDAC2, but not E73Q, restored rhythmic cardiac contractions in a zebrafish arrhythmia model.
Two mechanisms are conceivable for the regulation of Ca 2+ flux through VDAC: on the one hand, VDAC was suggested to change its permeability for Ca 2+ depending on its open state [47,48], while on the other hand, Ca 2+ binding to the channel at concentrations larger than 600 nM was itself described as a modulator of channel gating and thus of Ca 2+ permeation [16].
As outlined above, a direct role of E73 in channel gating appears unlikely, as lipid bilayer experiments demonstrated that E73 is not involved in the regulation of channel gating, at least not for monomers of the apo-channel [13]. Furthermore, E73 did not influence the ion selectivity of VDAC1 in lipid bilayers: while the channel is selective for anions in the open state, the closed state is less anion-selective and thus favors Ca 2+ currents [47,48]. However, no differences in ion selectivity were observed between wild-type VDAC1 and the E73Q mutant, neither in the open state nor in a blocked state induced by the addition of α-synuclein, a known blocker of VDAC1 [49] (Table 1). Given that E73 does not affect the gating or selectivity of the apo-channel, a role of E73 in Ca 2+-mediated modulation of the channel, or in Ca 2+-mediated interaction of the channel with regulatory partners, appears most likely.
For either role, a binding site for Ca 2+ involving E73 must exist in the channel. After Gincel et al. reported that VDAC transports Ca 2+ into mitochondria and that this transport is sensitive to ruthenium red (RuR) [50], the same group reported in 2007 that Ca 2+ competes with 25 µM azido ruthenium (AzRu) for a binding site involving E73 and E202 in a dose-dependent manner, with a half-maximal effect at 100 µM [23]. The mutation of E73 to glutamine in VDAC1 eliminated the AzRu sensitivity of VDAC in lipid bilayers and AzRu photolabeling of the channel, and prevented AzRu-induced protection from apoptosis in T-REx-293 cells. Since the molecular structure of VDAC was unknown at the time, the authors postulated a binding site formed by the two residues. This model, however, no longer holds after the molecular structure was resolved. In the commonly accepted molecular structure, which was solved in 2008 by two independent groups simultaneously through NMR and crystallography, respectively [5,51], E73 faces the lipidic environment of the membrane, raising the question of how E73 could bind Ca 2+ inside this hydrophobic environment. Ujwal et al. concluded that VDAC could only hold Ca 2+ when two monomers of the channel assemble in an antiparallel dimer, and only at very high concentrations of Ca 2+, neither of which is expected under physiological conditions [5].
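The competition data summarized above (a fixed 25 µM AzRu probe displaced by Ca 2+ with a half-maximal effect at 100 µM) map onto a standard one-site competition curve. The sketch below evaluates the fraction of probe remaining bound; the Hill coefficient of 1 is an illustrative assumption, not a fitted parameter.

```python
def fraction_probe_bound(ca_uM, ic50_uM=100.0, hill=1.0):
    """One-site competition: fraction of a probe (e.g., AzRu) still bound
    at a given competitor concentration [Ca2+], in micromolar."""
    return 1.0 / (1.0 + (ca_uM / ic50_uM) ** hill)

for ca in (10.0, 100.0, 1000.0):
    print(f"[Ca2+] = {ca:7.1f} uM -> bound fraction {fraction_probe_bound(ca):.2f}")
# 100 uM gives 0.50, reproducing the reported half-maximal displacement.
```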
In conclusion, a critical role of E73 can be observed for mitochondrial Ca 2+ uptake, which is directly relevant for human diseases and the treatment thereof; however, the molecular underpinnings of the VDAC-Ca 2+ interaction remain elusive. With the notion that E73 was described not to influence gating behavior or ion selectivity and the absence of a defined Ca 2+ binding site involving E73, it remains unclear how E73 can control Ca 2+ flux through VDAC at a molecular level (Table 1). A role of larger interaction partners, such as lipids or proteins, is most feasible.
Lipids
The lipidic environment, in which VDAC is embedded, was repeatedly shown to influence its physiology including voltage gating [12,13], ion selectivity [52] and interaction with partner proteins [53]. Beyond the idea that general changes in protein stability happen in response to the surrounding lipidic environment, several lipids were shown to have distinct binding sites on the channel, particularly sites including E73.
Ceramide is a sphingolipid that can induce mitochondrial apoptosis and limit cancer cell proliferation by blocking cell cycle transition [54]. Through a chemical screen, VDAC1 and 2 were found to be among its binding partners [55], and a structural constellation was proposed in which the head group of the ceramide is bound to the negative charge of a glutamate, specifically E73. Indeed, the mutation of E73 to Q in VDAC1 rendered colon cancer cells resistant to ceramide-induced apoptosis.
Among the most researched lipids shown to interact with VDAC are steroids, with cholesterol as the most prominent binding partner. Binding of cholesterol to VDAC has been shown repeatedly [56,57], and cholesterol was shown to modulate VDAC's oligomeric state and its interaction with hexokinase [58]. However, a vice versa effect was also described, in which a protein complex involving VDAC modulates the membrane cholesterol content [59,60]. Multiple binding sites were proposed by molecular docking [61] and NMR [51]. However, a direct investigation of the cholesterol-VDAC interaction domain only confirmed a binding site involving E73. A click-chemistry approach, which allows identification of protein-ligand interactions even in hydrophobic environments, revealed E73 as part of a binding site for neurosteroids in mouse VDAC1 [25]. This work is supported by the findings that sterol-based photolabeling reagents label residues Y62, T83, and F99 in addition to E73, forming a binding pocket around E73 (Figure 2), and that this labeling was competitively prevented by cholesterol and allopregnanolone at a concentration of 30 µM [14]. Furthermore, mutating E73 to Q or A led to a significant decrease in the binding efficiency of cholesterol and allopregnanolone to VDAC1. Interestingly, a similar reduction was observed when the pH was lowered to 5-6, indicating that the charge at E73 is essential for the binding of lipids.

Figure 2. Proposed lipid binding pocket. Using a photo-affinity approach, a binding pocket around E73 (red) for cholesterol and allopregnanolone was proposed by Cheng et al. [14]. Residues Y62, T83, F99 (magenta) and E73 (red) were photolabeled by sterol-based photolabeling reagents, and labeling was competitively inhibited by cholesterol and allopregnanolone (lipid, green).
Interestingly, however, at least two independent reports have investigated the effect of cholesterol and allopregnanolone on VDAC in lipid bilayers and found no differences in channel gating at different concentrations of the lipid [13,14]. It is thus feasible that, instead of directly modulating the channel's biophysical properties, binding of cholesterol to E73 affects VDAC dimerization or its interaction with hexokinase.
Hexokinase
Hexokinases are enzymes that play a key role in glycolysis by phosphorylating glucose to glucose-6-phosphate. Hexokinases I and II are often found bound to mitochondria, which ensures the availability of both glucose and ATP at the active center of the enzyme, delivered from the cytosol and mitochondria, respectively [62]. In 1979, Felgner et al. identified a 31 kDa protein as the binding partner and anchor protein of hexokinase in the mitochondrial membrane [63], which was later identified as VDAC [64]. A particularly important role of this hexokinase-VDAC interaction, and the resulting accumulation of hexokinase at the outer mitochondrial membrane, was described for cancer: cancer cells have a very high rate of glycolysis to match their energy demand. Consequently, the amount of hexokinase bound to VDAC was described to be elevated in these cells [65], and disruption of this interaction was suggested as an anti-tumor therapy [66]. In addition to the anchoring effect, hexokinase binding to VDAC was shown to induce channel closure and to be protective against apoptosis, securing cancer cell survival [67].
Therefore, a binding site for hexokinase on VDAC must exist. After the discovery that dicyclohexylcarbodiimide (DCCD) can compete out the VDAC-hexokinase interaction [68], De Pinto et al. used [14C]DCCD labeling of VDAC followed by enzymatic digestion to identify the putative common VDAC binding site of hexokinase and DCCD [22]. They found direct binding of [14C]DCCD to E73. These experiments were confirmed using a series of assays performed with VDAC1 and a mutant in which E73 was mutated to Q: overexpression of hexokinase prevented apoptosis induced by overexpression of wild-type VDAC1 but not of E73Q, and binding of 30-35 mU of hexokinase 1 to isolated yeast mitochondria was reduced by 70% in mitochondria expressing the E73Q mutant compared with wild-type VDAC1 [69] (Table 1). Furthermore, hexokinase colocalized with mitochondria in cells expressing wild-type VDAC1 but not VDAC1 E73Q, and finally, hexokinase reduced VDAC1 conductance in lipid bilayers, while currents through VDAC1 E73Q remained unaffected [17]. Experiments in which alkalization of the cell diminished hexokinase binding to mitochondria again indicate an important role of the protonation status of E73 for hexokinase binding, comparable to what was previously described for dimerization and cholesterol binding [70].
Coming back to the role of cholesterol in VDAC function, it is of note that an interplay of VDAC, cholesterol and hexokinase at E73 has been described; the membrane cholesterol content was shown to be enriched in carcinoma cells, which was associated with a reduction in oxidative phosphorylation, and a protein complex involving VDAC was shown to be accountable for this process. The overexpression of VDAC E73Q, in contrast, increased oxidative phosphorylation and reduced the membrane cholesterol content [59].
Taken together, the interaction of VDAC with hexokinase is essential for the regulation of cellular metabolism, in particular the change from oxidative phosphorylation to enhanced glycolysis in cancer cells. This interaction is mediated by E73 and is perhaps the most intensively researched interaction of VDAC with a molecular partner. A reduction in channel conductance, together with downstream effects such as a reduction of the cholesterol content and protection against apoptosis, is associated with this interaction. However, most of these experiments were conducted using overexpression systems, and the factors regulating the amount of hexokinase bound to VDAC under physiological conditions are still unknown. The lipidic environment, intracellular pH, and cellular ions such as Ca 2+ might be involved in this process. Table 1. The role of E73 for VDAC channel functions. The passage of ions and small metabolites through VDACs is regulated by channel gating, the selectivity of the pore, and interaction with molecular partners. E73 was repeatedly suggested to be critically involved in these processes. A direct comparison of key experimental results is presented.
Molecular Process | Results in Favor of an Involvement of E73 | Results Contradicting a Direct Involvement of E73
Voltage-gating | In molecular dynamics simulations, E73 enhances barrel flexibility to allow transition to a more elliptical state [29,30]. | Replacing E73 by Q did not alter voltage gating in lipid bilayers [13].
Selectivity | The flexible area around E73 was suggested to interact with the helix to determine ion selectivity [20,31,32]. | No differences in ion selectivity were observed in lipid bilayers upon exchange of E73 for Q [49].
Modulation of channel function through interaction (scaffolding) | In simulation experiments, the charge at E73 induces a thinning of the membrane to facilitate molecular interactions [28,33]. | The molecular structure reveals a location of E73 within the membrane, making it poorly accessible [5,6].
Conclusions
Taken together, multiple interactions of VDAC with molecular partners were described to occur around E73. It is conceivable that these mechanisms do not act independently of each other, but that a complex interplay between these factors adapts VDAC function to the current state of the cell (Figure 3). In this instance, the protonation state of E73 was identified as a central factor for many of those mechanisms. An increased conformational flexibility of the VDAC barrel around E73 makes this buried area more accessible to ions, including protons. Through this mechanism, changes in intracellular pH can influence VDAC dimerization, the binding of hexokinase, and the binding of certain lipids, such as cholesterol or neurosteroids, to the channel. The binding of lipids was shown to influence the hexokinase interaction, and the binding of hexokinase was shown to influence channel gating and conductance. Furthermore, a feedback mechanism appears to exist through which binding of hexokinase can influence the lipidic membrane composition, in particular the cholesterol content.

Figure 3. The charge at E73 renders the protein more flexible, allowing ions and binding partners to access E73 (magenta). The protonation state of E73 is affected by the intracellular pH (green) and influences channel dimerization and the binding of lipids and hexokinase to the channel (light blue). These interactions affect channel gating and conduction but could also influence each other, for example through a hexokinase-induced change in membrane lipid composition. Synthetic channel modifiers such as RuR or DCCD also bind to E73 and directly affect channel gating and conductance (orange).

This complex interplay of regulatory mechanisms allows VDACs to adapt to changes in the metabolic state of the cell, mediated by multiple layers of regulation and fine-tuning. However, many questions within this network remain open, and E73 remains one of the big enigmas of VDAC.
Author Contributions: A.B.R. and J.S. performed literature review and wrote the manuscript. T.G. and J.S. provided resources and funding. All authors revised and edited the manuscript. All authors have read and agreed to the published version of the manuscript.
| 2022-12-29T16:12:00.414Z | 2022-12-23T00:00:00.000 | {
"year": 2022,
"sha1": "1356dcf04396816217670ef90aa92b249bb51cee",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/1/269/pdf?version=1671794602",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2291daef47174180b94754521f42fffe719c97a9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
249858841 | pes2o/s2orc | v3-fos-license | Clinicopathological Features of 166 Cases of Invasive Ductal Breast Carcinoma and Effect of Primary Tumor Location on Prognosis after Modified Radical Mastectomy
Objective. To investigate the clinicopathological features of 166 cases of invasive ductal carcinoma (IDC) of the breast and to analyze the effect of primary tumor location on the prognosis of modified radical mastectomy. Materials and Methods. The clinical data of 166 patients with IDC who underwent modified radical mastectomy in our hospital from May 2015 to May 2017 were retrospectively analyzed. The clinicopathological features of the IDC patients were recorded. Univariate analysis and a multivariate logistic regression model were used to analyze the relationship between primary tumor location and prognosis after modified radical surgery, and survival curve analysis was used to assess the effect of primary tumor location on prognosis after modified radical mastectomy. Results. Among the patients in the central region, 13.33% had tumors >5 cm in diameter, a higher proportion than in the other four groups. Among the patients in the upper inner quadrant, 59.38% received hormone therapy after the operation, a higher proportion than in the other four groups (P < 0.05). There were no significant differences in age, menopause, histological grading, molecular typing, lymph node metastasis, vascular invasion, radiation therapy, or chemotherapy among the groups (P > 0.05). Univariate analysis showed that molecular typing, lymph node metastasis, vascular invasion, and primary tumor location were all related to prognosis after modified radical surgery, and the differences were statistically significant (P < 0.05). Logistic regression analysis showed that molecular typing, lymph node metastasis, vascular invasion, and primary tumor location were all independent factors influencing prognosis after modified radical surgery (P < 0.05). As of 31 May 2021, 11 patients had recurrence or metastasis and 20 patients had died. The median survival time in the outer upper quadrant group was 80 months, longer than in the outer lower quadrant group (72 months), the central region group (71 months), the inner upper quadrant group (67 months), and the inner lower quadrant group (61 months); the log-rank test showed P < 0.001 for all comparisons. Conclusion. Patients with primary tumors in the central area have larger tumor diameters. Patients with tumors in the central area, upper inner quadrant, and lower inner quadrant are more likely to have lymphatic metastasis, have more serious disease, and have shorter survival. Non-luminal type, multiple lymph node metastases, vascular invasion, and a primary tumor located in the inner quadrants are all independent risk factors for prognosis after modified radical surgery for IDC.
Introduction
Breast cancer is one of the most common malignancies in women, and its incidence has been increasing year by year worldwide. Invasive ductal carcinoma (IDC) of the breast is the most common type of breast cancer and belongs to the nonspecific invasive carcinomas of the breast. Clinically, patients often present with breast lumps and nipple discharge [1,2]. IDC often affects women's physical and psychological health because of its younger age of onset, the tendency toward lymphatic metastasis and recurrence in advanced patients, and the effect of surgical resection on breast appearance [3,4]. Therefore, early diagnosis and treatment of IDC are extremely important.
At present, the treatment methods for IDC mainly include radical surgery, modified radical surgery, and postoperative chemoradiotherapy [5,6]. Among them, modified radical surgery is an effective method for the treatment of IDC: it preserves the pectoralis minor and pectoralis major muscles on the affected side and minimizes damage to the shape of the breast. Compared with traditional surgery, modified radical surgery not only ensures the thoroughness of tumor resection but also satisfies female patients' pursuit of a good cosmetic outcome, and it is currently the most commonly used surgical method in clinical practice [7,8]. With the nipple as the center, the breast can be divided into five positions, comprising four quadrants and a central area. The disease progresses at different speeds depending on the location of the primary tumor. In this study, we retrospectively analyzed the five-year follow-up data of 166 IDC patients after modified radical surgery, summarized their clinicopathological features, and analyzed the impact of primary tumor location on the prognosis of modified radical surgery. The specific report is as follows.
General Information.
A total of 166 IDC patients who underwent modified radical mastectomy in our hospital from May 2015 to May 2017 were retrospectively included. Their ages ranged from 25 to 75 years, with a mean of 50.79 ± 9.61 years.
Inclusion Criteria
Inclusion criteria were as follows: (1) meeting the diagnostic criteria for breast IDC; (2) a pathological diagnosis of unilateral IDC; (3) treatment with modified radical mastectomy; (4) complete clinical, pathological, and follow-up data.
Exclusion Criteria.
Exclusion criteria were as follows: (1) preoperative neoadjuvant chemotherapy; (2) distant metastasis at the time of initial treatment; (3) a tumor located on the boundary between quadrants; (4) multifocal breast cancer; (5) male breast cancer.
Primary Tumor Location.
Tumor location was determined on the basis of the preoperative imaging report (color Doppler ultrasound, mammography, or MRI) closest to the date of surgery and intraoperative measurements. A horizontal and a vertical line were drawn through the nipple, dividing the breast into outer upper, outer lower, inner upper, and inner lower quadrants; the nipple and areola constituted the central area, giving 5 regions in total. The location of the primary tumor was thus classified as the outer upper quadrant, outer lower quadrant, inner upper quadrant, inner lower quadrant, or central region (the nipple-areola complex), as encoded in the sketch below.
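Since the assignment rule above is purely geometric, it can be written down directly. The sketch below is a hypothetical encoding using nipple-centered offsets (lateral offset positive toward the outer side, vertical offset positive toward the upper side) and an illustrative radius for the central nipple-areola region; the study itself excluded tumors on quadrant boundaries.

```python
import math

def classify_location(dx_cm, dy_cm, central_radius_cm=2.0):
    """Assign a primary-tumor location from nipple-centered offsets.

    dx_cm > 0 means the outer (lateral) side, dy_cm > 0 the upper side; tumors
    within central_radius_cm of the nipple fall in the central region."""
    if math.hypot(dx_cm, dy_cm) <= central_radius_cm:
        return "central region"
    horizontal = "outer" if dx_cm > 0 else "inner"
    vertical = "upper" if dy_cm > 0 else "lower"
    return f"{horizontal} {vertical} quadrant"

print(classify_location(3.0, 2.5))    # outer upper quadrant
print(classify_location(-2.5, -3.0))  # inner lower quadrant
print(classify_location(0.5, 0.3))    # central region
```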
Postoperative Follow-Up.
A follow-up was performed every 3 months for the first 2 postoperative years, semiannually from the 3rd year, and annually after the 5th year. The follow-up included breast tumor markers, breast ultrasound, mammography, chest X-rays, and abdominal ultrasound, with CT and whole-body bone scans when necessary. The follow-up deadline for all patients was May 31, 2021; follow-up was deemed terminated upon tumor recurrence, metastasis, or death. The median follow-up time was 73 months. During the follow-up period, there were 11 patients with recurrence and metastasis and 20 patients who died. Clinical and pathological characteristics such as age, menopausal status, tumor diameter, histological grade, molecular classification, lymph node metastasis, vascular invasion, radiotherapy, chemotherapy, endocrine therapy, and the location of the primary tumor were recorded.
Statistical Processing
Data processing was performed using SPSS 22.0 software. Enumeration data were expressed as percentages and compared using the χ² test. A multivariate logistic regression model was used for multivariate analysis. Kaplan-Meier survival curves were used to analyze the relationship between primary tumor location and prognosis after modified radical mastectomy, with the log-rank test used for comparisons. The test level was α = 0.05, and P < 0.05 indicated a statistically significant difference.
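As a hedged illustration of the first two methods named here (the χ² test for count data and multivariate logistic regression), the sketch below runs both on synthetic data with the same 0/1 coding used in the study; none of the numbers are the study's actual data.

```python
import numpy as np
from scipy.stats import chi2_contingency
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Chi-square test on a hypothetical 2x2 table (e.g., lymph node metastasis vs outcome).
table = np.array([[30, 10],
                  [20, 25]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Multivariate logistic regression on synthetic 0/1 predictors
# (e.g., molecular type, lymph node metastasis, vascular invasion).
n = 166
X = rng.integers(0, 2, size=(n, 3)).astype(float)
true_logits = -1.0 + X @ np.array([0.8, 1.2, 0.9])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logits)))
model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(np.exp(model.params[1:]))  # odds ratios for the three predictors
```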
Clinical Pathological Features of IDC.
Among the patients in the central region, 13.33% (2/15) had tumors >5 cm in diameter, a higher proportion than in the other four groups. Among the patients in the upper inner quadrant, 59.38% (19/32) received hormone therapy postoperatively, a higher proportion than in the other four groups (P < 0.05).
There were no significant differences in age, menopausal status, histological grading, molecular classification, lymph node metastasis, vascular invasion, radiation therapy, or chemotherapy among the groups (P > 0.05) (see Table 1).
Univariate Analysis of Prognosis of IDC Patients after Modified Radical Mastectomy
Univariate analysis showed that molecular typing, lymph node metastasis, vascular invasion, and primary tumor location were all related to the prognosis of IDC patients after modified radical mastectomy, and the differences were statistically significant (P < 0.05) (see Table 2).
Multivariate Logistic Regression Analysis on Prognosis of IDC Patients after Modified Radical Mastectomy.
Multivariate logistic regression analysis showed that molecular typing, lymph node metastasis, vascular invasion, and primary tumor location were all independent risk factors for prognosis of IDC patients after modified radical mastectomy (P < 0.05) (see Tables 3 and 4).
The Prognostic Effect of Different Tumor Locations on IDC Patients after Modified Radical Surgery
As of 31 May 2021, there were 11 patients with recurrence and metastasis and 20 patients who had died. The median survival time in the outer upper quadrant group was 80 months, longer than in the outer lower quadrant group (72 months), the central region group (71 months), the inner upper quadrant group (67 months), and the inner lower quadrant group (61 months). The log-rank test showed P < 0.001 for all comparisons (see Figure 1).
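The survival comparison reported above follows the standard Kaplan-Meier/log-rank workflow. Because the individual-level data are not reproduced here, the sketch below demonstrates it on synthetic right-censored follow-up data using the lifelines package; group sizes, event rates, and the censoring horizon are illustrative.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

def simulate_group(scale_months, n=40, censor_at=73.0):
    """Right-censor synthetic exponential survival times at the follow-up horizon."""
    raw = rng.exponential(scale_months, n)
    events = (raw <= censor_at).astype(int)   # 1 = death observed, 0 = censored
    return np.minimum(raw, censor_at), events

t_outer, e_outer = simulate_group(150.0)  # longer-surviving location group
t_inner, e_inner = simulate_group(70.0)   # shorter-surviving location group

kmf = KaplanMeierFitter()
kmf.fit(t_outer, event_observed=e_outer, label="outer upper quadrant")
print(kmf.median_survival_time_)          # may be inf if < 50% of events occurred

result = logrank_test(t_outer, t_inner,
                      event_observed_A=e_outer, event_observed_B=e_inner)
print(result.p_value)
```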
Discussion
Invasive breast cancer is a common type in breast cancer patients, and IDC accounts for the majority of cases, comprising 70%-80% of invasive breast cancers; patients often present with breast masses, retracted nipples, and other clinical manifestations [9,10]. At present, the main examination methods for IDC include mammography, color Doppler ultrasound, CT, and MRI [11,12]. The rate of lymph node metastasis differs with the location of the primary tumor. For early IDC patients with a tumor diameter <3 cm, no or only slight axillary lymph node metastasis, and no distant metastasis, the therapeutic effect is good, and more than 90% of patients can achieve long-term cure [13,14]. However, patients with early IDC lack typical clinical symptoms and signs, making the disease difficult to detect; diagnosis must be based on imaging and pathological examinations, so the optimal treatment window is easily missed, posing a serious threat to women's health [15,16]. Therefore, early diagnosis and treatment of IDC are very important.
Among the patients in the central region, 13.33% (2/15) had tumors >5 cm in diameter, a higher proportion than in the other four groups. Among the patients in the upper inner quadrant, 59.38% (19/32) received endocrine therapy after the operation, a higher proportion than in the other four groups. There were no significant differences in age, menopausal status, histological grade, molecular classification, lymph node metastasis, vascular invasion, radiotherapy, or chemotherapy among the groups. The likely reason is that the nipple-areolar complex in the central area makes it difficult to detect the primary tumor, resulting in a larger tumor volume at first detection. After the operation, such patients need radiotherapy and chemotherapy to prevent further progression of the disease [17]. Lymph node metastasis is more likely to occur in tumors in the upper inner quadrant, and endocrine therapy can improve the patient's endocrine function, thereby limiting the growth and metastasis of cancer cells [18,19].
Our results show that non-luminal type, multiple lymph node metastases, vascular invasion, and a primary tumor located in the inner quadrants are all independent risk factors for the prognosis of IDC patients after modified radical surgery. Possible reasons are as follows: IDC is a highly heterogeneous tumor, and different molecular types of IDC have different biological characteristics, prognoses, and sensitivities to treatment, which affect the prognosis of patients after modified radical surgery [20,21]. More extensive lymph node metastasis and more severe vascular invasion often indicate hematogenous and lymphatic spread of the tumor; the more serious the disease, the less favorable the prognosis [22]. The non-luminal types include the HER-2 overexpression type and triple-negative breast cancer. Although these two types are sensitive to chemotherapy, they have been found in clinical practice to have a poorer prognosis than the luminal types, which is widely recognized. Lymphatic metastasis is the most common mode of metastasis of IDC. The prognosis of IDC patients with primary tumors located in the central area, upper inner quadrant, and lower inner quadrant is significantly worse than that of patients with tumors in the upper outer and lower outer quadrants. There are rich lymphatic vessels at the nipple in the central area, and cancer cells located there readily undergo lymphatic metastasis through the surrounding lymphatic vessels, which is an independent risk factor for patients with IDC after modified radical surgery [23,24]. The internal mammary chain is the second largest lymphatic metastasis pathway after the axillary lymph nodes. For IDC patients with primary tumors located in the upper inner quadrant, lower inner quadrant, and central region, the tumor is closer to the internal mammary lymphatic drainage pathway and more prone to lymphatic metastasis, which is unfavorable for prognosis [25]. In addition, internal mammary lymph nodes are anatomically deep and small, so they are difficult to detect clearly by mammography and color Doppler ultrasound, thus delaying treatment and worsening prognosis [26]. As of 31 May 2021, the median survival time in the outer upper quadrant group was 80 months, longer than in the outer lower quadrant group (72 months), the central region group (71 months), the inner upper quadrant group (67 months), and the inner lower quadrant group (61 months). The reasons may be as follows: lymphatic metastasis is the most important mode of metastasis of IDC tumors. The closer the primary tumor is to the internal mammary lymphatic metastasis pathway, the more likely the cancer cells are to develop lymphatic metastasis and the more severe the disease will be, which affects the prognosis of patients undergoing modified radical mastectomy [27]. As a result, the five-year survival rate of IDC patients is reduced, and the five-year survival rate of patients with primary tumors located in the central area, inner upper quadrant, and inner lower quadrant is lower than that of patients with tumors in the outer upper and outer lower quadrants. In addition, because mammography over-penetrates the nipple-areolar complex, tumors in the central region are often overlooked, requiring a combination of multiple imaging techniques [28]. The mammographic (molybdenum target) detection rate of breast cancer in the central region is low, so the tumor is detected at a late stage, which delays treatment and reduces the five-year survival rate [29].

Table note (variable coding): vascular infiltration: no = "0", yes = "1"; primary tumor location: outer upper quadrant = "0", outer lower quadrant = "1", central area = "2", inner upper quadrant = "3", inner lower quadrant = "4".
In conclusion, patients with primary tumors located in the central area have larger tumor diameters. Patients with tumors in the central area, upper inner quadrant, and lower inner quadrant are more likely to develop lymphatic metastasis, have more serious disease, and have shorter postoperative survival.
These factors are independent risk factors for prognosis after modified radical surgery. A good understanding of IDC and timely diagnosis and treatment can effectively improve the prognosis and increase the five-year survival rate of patients.
Data Availability
The datasets used and/or analyzed in the current study are available from the corresponding author upon request.
Ethical Approval
The study was reviewed and approved by the hospital ethics committee.
Consent
All observed subjects and their families gave informed consent to the study.
Disclosure
Shiman Chen and Liang Yang are co-first authors.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2022-06-20T15:02:48.253Z | 2022-06-18T00:00:00.000 | {
"year": 2022,
"sha1": "eee8c93e74ee56d1eb62490e5ef2379750693f8b",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/emi/2022/3158956.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "443ecb987b3804f20c576710b32d81859bb129b4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
235607547 | pes2o/s2orc | v3-fos-license | Role of bile acids in liver diseases mediated by the gut microbiome
The intensive crosstalk between the liver and the intestine performs many essential functions. This crosstalk is important for natural immune surveillance, adaptive immune response regulation, nutrient metabolism and the elimination of toxic bacterial metabolites. The interaction between the gut microbiome and bile acids is bidirectional. The gut microbiome regulates the synthesis of bile acids and their biological signaling activity and circulation via enzymes. Similarly, bile acids also shape the composition of the gut microbiome by modulating the host’s natural antibacterial defense and the intestinal immune system. The interaction between bile acids and the gut microbiome has been implicated in the pathophysiology of many intestinal and extraintestinal diseases, especially liver diseases. As essential mediators of the gut-liver crosstalk, bile acids regulate specific host metabolic pathways and modulate inflammatory responses through the farnesoid X-activated receptor and G protein-coupled bile acid receptor 1. Several clinical trials have demonstrated the signaling effects of bile acids in the context of liver diseases. We hypothesize the existence of a gut microbiome-bile acids-liver triangle and explore potential therapeutic strategies for liver diseases targeting this triangle.
INTRODUCTION
Approximately 70%-75% of the liver blood supply is derived from the intestine, which forms the basis of the gut-liver axis [1]. Liver plays an important role in the immune response of the human body; it serves as the central hub of the crosstalk between the host metabolism and the intestinal microenvironment. Therefore, the liver is exposed to a large number of bacterial components, metabolites and microbiome-derived signals. Recent studies have indicated that bile acids act as pleiotropic signaling molecules that mediate the gut-liver crosstalk [2,3]. The complex relationship between bile acids and the gut microbiome (mutual dependence and inhibition) plays an important role in the maintenance of mammalian homeostasis.
The gut microbiome participates in the conversion of bile acids and regulates the synthesis and reabsorption of bile acids in the liver (Figure 1). Studies have found decreased activity and gene expression levels of cholesterol 7α-hydroxylase (CYP7A1), a rate-limiting enzyme in the classical synthesis pathway of bile acids, in CONV-R mice compared with those in germ-free mice [4,5]. Moreover, the volume of the bile acid pool in CONV-R mice is smaller, which may be related to the decreased expression of the ileal sodium-dependent bile acid transporter, leading to decreased reabsorption of bile acids in the distal ileum and increased fecal excretion of bile acids [5]. Microbial 7α-dehydroxylation is an essential step in bile acid biotransformation [6]. The genus Clostridium exhibits 7α-dehydroxylation activity that can transform primary bile acids into secondary bile acids [7]. Most Gram-positive bacteria in the intestine have bile salt hydrolase activity, whereas among the Gram-negative bacteria only Bacteroides can hydrolyze conjugated bile acids into unconjugated bile acids. The biological functions of bile acids in the body vary depending on their state [8]. Unconjugated bile acids can dissipate the pH gradient across the bacterial cell membrane, and the resulting collapse of the proton-motive force can directly damage the membrane. In this way, bile acids inhibit the growth of certain bacteria and participate in shaping the gut microbiome [9].
Bile acids, as essential signaling molecules for bidirectional regulation between the liver and the intestinal tract, act mainly through the following two signaling pathways: binding to G protein-coupled bile acid receptor 1 (GPBAR1 or TGR5) and activation of the farnesoid X-activated receptor (FXR). These two pathways control the balance of energy metabolism, regulate hepatic steatosis and the inflammatory response, and influence the composition of the gut microbiome by shaping intestinal immunity and the antimicrobial properties of some endogenous peptides. Therefore, the use of bile acids as signaling molecules by the gut microbiome may play a role in the pathophysiology of liver diseases [10-13]. Better characterization of the specific sites of action of the gut microbiome and bile acids in the different signaling pathways of liver diseases can lay the foundation for novel therapies targeting bile acids.
LIVER CANCER AND BILE ACIDS
In a seminal study by Ma et al [10], mice fed ABX (primaxin, neomycin and vancomycin) exhibited fewer and smaller primary or metastatic liver cancer lesions, and this result was not related to age, strain or sex. These results suggested that regulation of the gut microbiome may usefully alter the growth kinetics of liver tumors.
Figure 1 Bile acids act bidirectionally between the liver and the gut microbiome. Bile acids can upregulate CXCL16 on liver sinusoidal endothelial cells to induce the accumulation of CXCR6+ natural killer T cells, which kill tumor cells directly or through interferon-γ. The farnesoid X receptor increases the expression of fibroblast growth factor 19 (FGF19) in the enterocyte after being activated by bile acids. FGF19 is the ligand of FGFR4 and can suppress the expression of cholesterol 7α-hydroxylase with the help of Klotho beta, whose presence activates FGFR4. In addition, bile acids can activate the farnesoid X receptor to increase the expression of bile salt export protein, organic anion transporting polypeptide 1B3, and small heterodimer partner; small heterodimer partner in turn suppresses the activation of cholesterol 7α-hydroxylase and the Na+-taurocholate cotransporting polypeptide. Red and green arrows indicate positive and negative effects, respectively. NKT: Natural killer T; BA: Bile acid; IFN-γ: Interferon-γ; BSEP: Bile salt export protein; FGF19: Fibroblast growth factor 19; FGFR4: Fibroblast growth factor receptor 4; FXR: Farnesoid X receptor; KLB: Klotho beta; NTCP: Na+-taurocholate cotransporting polypeptide; SHP: Small heterodimer partner; ASBT: Sodium-dependent bile acid transporter; CYP7A1: Cholesterol 7α-hydroxylase; MRP2: Multidrug resistance-associated protein 2; OATPs: Organic anion transporting polypeptides; LSEC: Liver sinusoidal endothelial cells; OATP1B3: Organic anion transporting polypeptide 1B3.
Several studies have investigated the mechanism linking the gut microbiome and liver tumor immunity and surveillance. In one study, CD8+ T cells and natural killer T (NKT) cells in the liver tissues of ABX-treated EL4-tumor-bearing mice were significantly increased compared with those in the control group. However, in the ABX-treated MYC mice, only NKT cells were increased, and similar results were observed in normal mice without tumors. Moreover, ABX-treated mice showed increased expression of CXCL16 mRNA in liver sinusoidal endothelial cells (LSEC). Further studies showed that CXCL16 is the only ligand of CXCR6 and can induce the accumulation of CXCR6+ hepatic NKT cells in the liver [14]. NKT cells can directly kill CD1d-expressing tumors (B16, EL4 and A20) and can also suppress liver tumors by secreting interferon-γ.
To identify the potential link between bile acids and NKT cells, the bile acid profile of ABX-treated mice was determined. The results showed significantly increased primary bile acids compared with those in H2O-treated mice. The findings were further confirmed after treatment of isolated LSECs with various bile acids or with combinations of tauro-β-muricholic acid and other muricholic acid species; the results showed that primary bile acids could indeed upregulate the expression of CXCL16 mRNA. Predictably, secondary bile acids (lithocholic acid) reversed the ABX-induced suppression of intrahepatic tumor growth and increased liver surface metastasis.
The above findings suggest that the level of CXCL16 on LSECs can be upregulated by the gut microbiome through primary bile acids, which leads to the accumulation of CXCR6+ hepatic NKT cells in the liver, while secondary bile acids have the opposite effect. Although ABX kills most of the bacteria in the mouse intestine, the regulatory impact of the remaining bacteria on NKT cells cannot be excluded. To address this issue, the above experiment was repeated with germ-free mice. The results showed greater accumulation of NKT cells and increased expression of CXCL16 mRNA in the liver of germ-free mice. Antibiotics such as cefoperazone and vancomycin, which act against Gram-positive bacteria, were used to increase primary bile acids and hepatic NKT cells while depleting secondary bile acids. The 7α-dehydroxylation reaction is a key step in the transformation of primary bile acids into secondary bile acids [6]; Clostridium cluster XIV among the Gram-positive bacteria synthesizes 7α-dehydroxylase [7]. To explore the role of Clostridium in the accumulation of hepatic NKT cells, mice were first fed vancomycin to increase NKT cells. One week later, the antibiotic treatment was replaced with Clostridium scindens, a Clostridium species occurring in both humans and mice. On the second day after successful colonization with Clostridium scindens, hepatic NKT cell levels began to decline together with primary bile acids. The antibiotic effect was also offset when antibiotic-treated mice were gavaged with bile acid-metabolizing bacteria or fed secondary bile acids: both the inhibition of intrahepatic tumor growth and the accumulation of hepatic NKT cells were reversed in mice with an altered gut microbiome. Clostridium species are therefore key bacteria in regulating bile acid-dependent NKT cell accumulation.
LSECs isolated from human samples were treated with chenodeoxycholic acid (CDCA) and taurocholic acid. The results were similar to the mouse study in that primary bile acids were also found to upregulate the expression of CXCL16 mRNA. Normal tissues excised from patients with cholangiocarcinoma and hepatocellular carcinoma showed a positive correlation between CDCA and CXCL16, while secondary bile acids were observed to have opposite results. The above findings indicate that gut microbiome-mediated bile acid metabolism regulates liver cancer via NKT cells, which is also applicable to the human body.
NONALCOHOLIC FATTY LIVER DISEASE AND BILE ACIDS
Nonalcoholic fatty liver disease (NAFLD) is a clinicopathologic syndrome characterized by excessive fat deposition in hepatocytes and other tissues caused by factors other than alcohol and other definitive causes of liver damage. In the absence of effective interventions, NAFLD may progress to fibrosis, cirrhosis and hepatocellular carcinoma. Environmental, genetic and metabolic factors as well as altered gut microbiome have been implicated in the pathogenesis of NAFLD. Bile acids and their metabolites help maintain the homeostasis of glucose, cholesterol and triglycerides in the liver and regulate inflammation; this is considered a potential therapeutic target for NAFLD [15,16].
Recent studies have revealed that the role of bile acids as signaling molecules may affect the development of NAFLD at multiple levels [17]. These regulatory activities are mainly achieved through FXR and TGR5, and different bile acids have different effects on these two signaling pathways [18]. FXR can regulate lipid and glucose metabolism through different pathways, for example by inhibiting the expression of hepatic gluconeogenesis genes and increasing liver glycogen synthesis and insulin sensitivity [19,20]. It can also induce the expression and secretion of hepatic fibroblast growth factor 21 (FGF21), a metabolic regulator that stimulates glucose uptake in adipose tissue [21]. In addition, FXR activation inhibits lipogenesis, promotes fatty acid oxidation and affects cholesterol transport [22]. Bile acids can also activate the G protein-coupled receptor TGR5, which is expressed in nonparenchymal hepatocytes, monocytes and a variety of macrophages that secrete various inflammatory mediators, and which plays an important role in regulating inflammatory responses [23]. Bile acids can inhibit the lipopolysaccharide-induced secretion of interleukin (IL)-6, IL-1α and IL-1β; in addition, bile acids can inhibit the secretion of tumor necrosis factor by Kupffer cells through a TGR5-cAMP-dependent pathway [24]. TGR5 can also regulate glucose homeostasis by inducing the expression of glucagon-like peptide-1 and inhibiting the activation of the Nod-like receptor protein 3 inflammasome. Activation of TGR5 results in increased energy expenditure and weight loss [25,26]. TGR5 or FXR agonists can reduce lipogenesis, improve cholesterolemia, induce energy consumption and reduce liver inflammation in NAFLD patients [27,28].
Clinical studies have implicated disorders of bile acid homeostasis and related signaling pathways in the occurrence of NAFLD [29]. The serum concentration of total bile acids in patients with nonalcoholic steatohepatitis was three times higher than that in healthy individuals. Moreover, the composition of the bile acid pool was also different in these two groups [30]. Bile acids have a direct antibacterial effect; through FXR, they can induce the production of antibacterial peptides, such as angiogenin 1, which participates in the shaping of the gut microbiome [31]. During the development and progression of NAFLD, disruption of bile acid balance is also accompanied by disruption of the gut microbiome. Therefore, the liver-bile acid-gut microbiome triangle is a good entry point for the treatment of NAFLD.
In two human studies, obeticholic acid (OCA), an FXR agonist, was shown to decrease liver fibrosis markers and improve insulin resistance in patients with NAFLD [32]. Patients receiving OCA treatment showed a significant decrease in high-density lipoprotein and an increase in low-density lipoprotein levels. Compared with the placebo-treated group, OCA significantly improved the NAFLD Activity Score while significantly reducing liver fibrosis [33,34]. As a derivative of cholic acid, INT-777 is a selective agonist of TGR5 that increases energy expenditure and induces weight loss in high fat diet-fed mice [35]. McMahan et al [27] found that INT-767, a dual agonist of FXR/TGR5, can reduce the expression of proinflammatory factors, decrease hepatic steatosis and transform monocytes and macrophages to the anti-inflammatory M2 phenotype.
ALCOHOL-RELATED LIVER DISEASE AND BILE ACIDS
Alcohol-related liver disease (ALD) is caused by chronic consumption of alcohol. It is initially characterized by liver steatosis, which can in turn develop into alcoholic hepatitis, liver fibrosis and cirrhosis. Alcohol damages multiple target organs, especially the brain, intestines and liver. It is worth noting that alcohol intake can alter the structure of the gut microbiome before overt liver disease appears.
Continued drinking in patients with alcoholic cirrhosis can aggravate gut microbiome disorders, reduce the detection of gut commensal bacteria in feces and worsen the function of the duodenal and colonic mucosa. Chronic alcohol exposure is associated with gut microbiome dysbiosis in preclinical models and the human gut, which is associated with the pathogenesis of ALD. Previous studies have shown altered gut microbiome in patients with alcoholic cirrhosis, which was characterized by an increase in endotoxin-producing bacteria and a decrease in the gut commensal bacteria [36]. Changes in composition of the gut microbiome may alter brain function, and alcohol abuse can also affect the gut-brain axis; this may further aggravate alcohol dependence and induce affective disorders, ultimately, accelerating the development of hepatic encephalopathy. Compared to people who do not have the disease and do not drink alcohol, patients with alcoholic cirrhosis have higher levels of endotoxins and worse gut microbiome dysbiosis even after cessation of alcohol intake [36]. This indicates that alcohol-induced damage to the gut microbiome continues even after cessation of intake and that the damage also extends to the gut-brain axis, leading to cognitive impairment [37].
FXR negatively regulates CYP7A1, the rate-limiting enzyme for bile acid synthesis, via FGF15. In long-term alcohol-fed mice, ethanol was shown to abolish the FXR-mediated negative feedback exerted by conjugated CDCA on bile acid synthesis and to increase the expression of CYP7A1 protein in liver cells; this ultimately led to an increased concentration of bile acids in the serum and liver. FXR can induce the expression of antimicrobial molecules in intestinal epithelial cells to prevent alcohol-induced damage to the enteric tight junctions and avoid loss of intestinal barrier integrity [38]. Intragastric administration of the FXR agonist fexaramine decreased serum alanine aminotransferase levels and hepatic IL-1β and tumor necrosis factor protein, because FXR can increase small heterodimer partner protein to inhibit intestinal inflammation and protect the integrity of the intestinal barrier [11]. These findings indicate that FXR agonists can negatively regulate the synthesis of bile acids and reduce serum bile acid concentrations, thereby alleviating alcohol-induced steatosis and liver inflammation. Administration of antibiotics to chronic drinkers was found to reduce alcohol-related liver disease: antibiotics kill gut commensal bacteria, reduce the concentration of bile acids and inhibit their hydrolysis, which not only reduces the toxicity of deoxycholic acid to hepatocytes but also stabilizes intestinal barrier function.
In summary, alcohol-related changes in the gut microbiome can ultimately alter the bile acid profile. Interventions targeting the bile acid-FXR-FGF15 signaling pathway to regulate CYP7A1 synthesis and lipid metabolism can reduce the occurrence of ALD in mice. As a signaling molecule, bile acids can modulate the complex interactions between the gut microbiome and alcohol through the gut-liver-brain axis. Therefore, FGF19 and fexaramine are good candidates for the treatment of ALD.
CHOLESTATIC LIVER DISEASES AND BILE ACIDS
Gut microbiome dysbiosis has been implicated in the development of autoimmune cholestatic liver diseases [39-42]. The progression of autoimmune cholestatic liver diseases in turn affects the composition of the gut microbiome, aggravating the development of cholestasis in this interactive cycle [43]. Cholestatic liver diseases are often accompanied by gut microbiome dysbiosis and reduced bacterial diversity [44]. Kummen et al [45] found that the gut microbiome of patients with primary sclerosing cholangitis (PSC) was significantly different from that of patients with ulcerative colitis without liver disease as well as that of healthy individuals. The Veillonella genus was overexpressed only in the intestines of patients with PSC. It is worth noting that Veillonella shows a positive correlation with the pathogenesis of fibrosis, not only in PSC but also in other fibrotic diseases, such as idiopathic pulmonary fibrosis [46]. Changes in gut commensal bacteria are related to the pathogenesis not only of NAFLD and ALD but also of primary biliary cholangitis (PBC) and PSC. This may be related to the abnormal development of immunity caused by an imbalance of the gut microbiome, resulting in imbalanced production of injurious vs cytoprotective metabolites [46].
In previous studies, bile acids were considered a tissue-damaging factor that promotes inflammation owing to their chemical properties; their detergent effect can destroy cell and mitochondrial membranes [47]. Overall, there are three important pathways of bile acid-induced cytotoxicity: (1) oxidative stress in the endoplasmic reticulum and mitochondria; (2) direct activation of the death receptors TRAIL-R2 and Fas; and (3) lysis of the plasma membrane of liver cells. Thus, accumulation of hydrophobic bile acids is the leading cause of cholestatic liver diseases. However, a recent article suggested that two derivatives of lithocholic acid (LCA), isoalloLCA and 3-oxoLCA, can influence the adaptive immune response by regulating the differentiation of T helper (Th)17 and regulatory T (Treg) cells [48]. These cell types are mutually restrictive in function, and the change in their ratio has a decisive role in the pathogenesis and clinical prognosis of autoimmune and inflammatory diseases. Many studies have shown an imbalance between Th17 and Treg cells in patients with PBC [49-52]. These patients present with a defective CD8+ Treg cell subset and preferentially activated Th17 cells [53]. IsoalloLCA can promote the differentiation of Treg cells by increasing mitochondrial reactive oxygen species synthesis and H3K27ac levels in the Foxp3 promoter region under the induction of TGF-β signaling [48]. The other LCA derivative, 3-oxoLCA, inhibits Th17 cell differentiation, manifested by significantly reduced IL-17A, thereby inhibiting inflammation [48]. These results demonstrate that the LCA derivatives isoalloLCA and 3-oxoLCA can regulate the balance between Th17 and Treg cells, which is of great significance for the treatment of cholestatic liver diseases. Moreover, isoalloLCA and 3-oxoLCA may in the future be used to treat autoimmune or inflammatory diseases mediated by Th17/Treg cell imbalance.
Bile acids play an important role in regulating both adaptive and innate immune systems through the gut-bile acids-liver triangle. Among them, ursodeoxycholic acid (UDCA) is currently the only drug that has been approved for the treatment of PBC. It can effectively reduce the retention of toxic bile acids in liver cells and alleviate liver damage [54,55]. However, UDCA has limited efficacy in cholestatic liver disease [56]; in addition, some patients are unable to tolerate the adverse effects of UDCA (such as gastrointestinal symptoms) [57].
Development of effective medical therapy for cholestatic liver disease is a key imperative. 24-Norursodeoxycholic acid (norUDCA), a C23 homologue of UDCA with a shortened side chain, has effective antifibrotic, anticholestatic and anti-inflammatory properties [58,59]. In a phase II clinical study, 12 wk of treatment with norUDCA caused a significant dose-dependent reduction in serum alkaline phosphatase levels in PSC patients. A multicenter randomized controlled trial evaluated the efficacy and safety of norUDCA (500 mg/d, 1000 mg/d or 1500 mg/d) compared with placebo in PSC patients, and norUDCA showed an excellent safety profile similar to that of placebo [60]. Another bile acid, OCA (an FXR agonist), has shown potential benefits in PBC; it is approximately 100 times more potent than CDCA in activating FXR [61]. OCA protects hepatocytes from the toxic effects of bile acids by activating the FXR receptor, reducing bile acid synthesis and improving choleresis. Beyond the effect of FXR on bile acid homeostasis, OCA monotherapy can improve the secretion of IgM and tumor necrosis factor-α and has direct immunomodulatory, antifibrotic and anti-inflammatory effects [62,63]. In clinical trials, OCA monotherapy caused a significantly greater decrease in alkaline phosphatase and bilirubin levels from baseline as compared with placebo; however, OCA treatment caused a dose-related increase in pruritus [34,57]. In conclusion, OCA may represent a new treatment option for PBC patients who cannot tolerate UDCA. Discovery of new bile acids and understanding of how best to use the various bile acids may help develop new treatments for cholestatic liver disease.
LIVER FIBROSIS AND BILE ACIDS
Previous studies have shown that gut microbiome changes or dysbiosis in patients with chronic liver disease or cirrhosis are often accompanied by a significant reduction in total bile acids and in the secondary bile acid/primary bile acid ratio [8]. The dysbiosis is characterized by a decrease in bile acid 7α-dehydroxylating bacteria, a change in the Bacteroides/Firmicutes ratio and an increase in pathogenic Gram-negative bacteria [64-66]. During the progression of cirrhosis, there is overgrowth of pathogenic bacteria in the small intestine, leading to translocation of lipopolysaccharide, endotoxins and other metabolites and the resultant inflammation. In fact, studies have demonstrated a positive correlation of Enterobacteriaceae with endotoxemia, inflammation and fecal CDCA levels. Metabolites derived from oxidative stress and from the metabolism of ammonia and aromatic amino acids were positively related to Porphyromonadaceae and Enterobacteriaceae and closely related to the occurrence of hepatic encephalopathy [67]. Kakiyama et al [8] proposed that gut microbiome dysbiosis in cirrhotic patients is partly attributable to the decreased concentration of bile acids in the intestine. For example, the decrease in the number of bile acid 7α-dehydroxylating bacteria is caused by the reduced level of primary bile acids, which serve as an energy source, in the colon [7,68]. Reduced levels of bile acids entering the small intestine can lead to overgrowth of proinflammatory and pathogenic bacteria and induce the release of inflammatory markers as well as an increase in liver inflammation [69]. Liver inflammation triggers a positive-feedback mechanism that can further inhibit the synthesis of bile acids [70]. The size and composition of the bile acid pool can significantly regulate the gut microbiome structure and serve as an indicator of the severity of hepatic disease. In summary, the balance of the liver-bile acid-gut microbiome axis is essential for human health, and its disruption contributes to liver fibrosis.
DCA, the most effective antimicrobial among the bile acids, is produced by bile acid 7α-dehydroxylating bacteria [71]. Studies have shown that an increased DCA/cholic acid ratio in patients with cirrhosis is accompanied by toxic metabolites from the gut microbiome and an increased incidence of endotoxemia and hepatic encephalopathy, which may be related to the destruction of the intestinal mucosal barrier by DCA [8,72,73]. Compared with DCA, which exacerbates barrier dysfunction, LCA has a much less destructive effect on the gut microbiome owing to its insolubility in water and easy excretion with feces [72,73]. TGR5 is a membrane receptor that can be activated by a variety of bile acids, among which LCA is its most potent natural agonist [18]. In the study by Guo et al [74], LCA inhibited the activation of the Nod-like receptor protein 3 inflammasome via the TGR5-cAMP-protein kinase A axis, significantly repressing the maturation of caspase-1 and the secretion of IL-1β and IL-18. In addition, LCA was also shown to reduce the lipopolysaccharide-induced release of proinflammatory cytokines and the phagocytic activity of macrophages through TGR5, thus inhibiting liver inflammation [24,75-77]. Recent studies have shown that two different metabolites of LCA can also control host immune responses in both humans and mice [78-80]. In clinical studies, the LCA content in the stool of patients with advanced cirrhosis was significantly lower than that in patients with early cirrhosis [8]. Previous studies focused on the cytotoxicity of bile acids, but recent studies have revealed their anti-inflammatory and immunomodulatory effects. Increasing evidence indicates that bile acids are a potential therapeutic target in inflammatory diseases. An appropriate increase in the concentration of bile acids in patients with liver cirrhosis may repress liver inflammation and improve liver fibrosis, which deserves further investigation.
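Because the studies cited above track both the total bile acid pool and the secondary/primary ratio as severity markers, those two summary measures reduce to simple sums and a quotient over a measured profile; a minimal sketch follows (species list and concentrations are hypothetical, not data from the cited studies):

# Bile acid profile in µmol/L (values hypothetical).
primary   = {"CA": 4.0, "CDCA": 3.0}   # primary bile acids
secondary = {"DCA": 1.5, "LCA": 0.2}   # secondary bile acids

total_pool = sum(primary.values()) + sum(secondary.values())
sec_pri_ratio = sum(secondary.values()) / sum(primary.values())
print(f"total pool = {total_pool:.1f} µmol/L, secondary/primary ratio = {sec_pri_ratio:.2f}")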
CONCLUSION
Numerous studies in recent decades have shown that the function of bile acids goes beyond that of "digestive surfactants." At the host level, there is a clear relationship between bile acid signaling and innate immunity in the liver and intestine; in other words, bile acids are the cornerstone of the immune axis between the liver and the gut microbiome. As mediators in the gut-liver axis, bile acids can regulate the inflammatory response, host metabolism and innate immunity, which are effective therapeutic targets in the context of various hepatic diseases (Table 1). However, most contemporary research on bile acids is based on genetically modified mouse models, and the immune system, bile acid metabolism and gut microbiome of mice differ substantially from those of humans. Recent clinical trials of FXR agonists have yielded promising results. Preclinical data suggest that gut microbiome metabolism of bile acids is also a potential therapeutic target. Modifying the gut microbiome to regulate the composition of bile acids can improve liver health through the use of antibiotics, probiotics, prebiotics and fecal microbiota transplantation [81,82]. The complex interactions between bile acids and the host microbiome in the gut-liver axis are only beginning to be understood. Clinical trials and further in-depth studies can help characterize the different roles of bile acids in healthy individuals and in patients with hepatic diseases, allowing their optimal utilization as potential therapeutic targets. | 2021-06-24T05:24:45.994Z | 2021-06-14T00:00:00.000 | {
"year": 2021,
"sha1": "8caf87e8da2481a9d98f179d49dac292a347cf99",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v27.i22.3010",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8caf87e8da2481a9d98f179d49dac292a347cf99",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7685887 | pes2o/s2orc | v3-fos-license | Phagocytosis of gelatin-latex particles by a murine macrophage line is dependent on fibronectin and heparin.
It has been suggested that fibronectin plays a role in clearing particles from the circulation by promoting binding to phagocytes of the reticuloendothelial system. By use of a well-defined system to investigate the possible opsonic role of fibronectin, we have studied the uptake of gelatin-coated latex particles by a murine macrophage cell line (P388D1). Fibronectin promotes binding of gelatin-coated beads to these cells in both suspension and monolayer cultures. In both cases there is a requirement for heparin as a cofactor. Other glycosaminoglycans (chondroitin sulfates A and C, dermatan sulfate, and keratan sulfate) were inactive, whereas heparan sulfate was somewhat active. Proof that beads were actually endocytosed was obtained by electron microscopy, which showed beads internalized in membrane-bounded vesicles, and by immunofluorescence analyses, using antibodies to fibronectin to stain external beads. Two rapid assays for the opsonic activity of fibronectin were developed based on differential centrifugation of cell-associated beads and on the immunofluorescence procedure. Binding and endocytosis were time- and temperature-dependent and varied with the amount of gelatin on the beads and with the concentrations of fibronectin and heparin added, and could be inhibited by F(ab')2 antifibronectin. These studies provide a sound basis for a detailed analysis of the interaction of fibronectin with the cell surface and of its involvement in endocytosis.
Phagocytosis is the process by which particles (bacteria, damaged cells, fibrin aggregates, etc.) are bound and endocytosed by phagocytic cells. Recognition of these particles by the cells is due to the presence of humoral factors (opsonins) such as immunoglobulins or complement, which coat the particles and promote their phagocytosis (44). Recently, an α-2-globulin from plasma has been shown to facilitate clearance of certain particles from the circulation (31,32,42). It also promotes the binding of particles to liver slices (12,16,31,40) and the uptake of gelatin-coated particles by rat peritoneal macrophages (13,18). This α-2-globulin has recently been purified from rat and human sera and found to be immunologically and functionally identical with plasma fibronectin (2,33), a dimeric glycoprotein with subunits of 230,000 daltons (3). Fibronectin is known to be involved in adhesion of cells to solid substrata and has specific binding sites for gelatin, glycosaminoglycans (i.e., heparin), fibrinogen, and fibrin, and can be crosslinked to fibrin or collagen by Factor XIII (reviewed in references 17, 21, 35, 47, 49). Fibronectin has also been shown to bind to bacteria (27,36), but has not been shown to promote phagocytosis of bacteria.
To define the conditions necessary for endocytosis of fibronectin-coated particles, we have studied an established mouse macrophagelike cell line (P388D1) by using a centrifugation assay, as well as electron and immunofluorescence microscopy. We find that endocytosis of gelatin-conjugated latex beads by these cells is dependent on both fibronectin and heparin. The assays described that use this cell line will be useful for determinations of the opsonic activity of samples containing fibronectin and for further analysis of the mechanism of uptake of these particles.
P388D1 cells were grown in medium with 10% heat-inactivated fetal calf serum on tissue-culture dishes. Adherent cells were harvested with 0.02% EDTA, washed three times in Ca2+/Mg2+-free phosphate-buffered saline (PBS), and resuspended in PBS plus divalent cations. (Nonadherent cells were also tested and found to be less active in uptake of particles.) Another macrophage line, RAW 309 Cr.1 (39), was obtained from Dr. J. Unkeless, Rockefeller University, and grown in the same medium on plastic petri dishes (Falcon Labware Div., Becton & Dickinson, Oxford, Calif.), to which they were loosely adherent. Cells were harvested by pipetting (without EDTA) and washed in PBS. Mouse peritoneal macrophages were collected from BALB/c mice 4 d after intraperitoneal injection of 1 ml of thioglycollate (46). Contaminating erythrocytes were removed by lysis in 0.8% ammonium chloride (7), and cells were washed three times in PBS, resuspended in PBS plus divalent cations, and used directly for phagocytosis (see below). NIL8 cells were grown as previously described (20).
Metabolic Labeling of Cells and Conditioned Media
Cells were labeled for 18-20 h in growth medium containing one-tenth the usual concentration of unlabeled methionine and [35S]methionine (20 µCi/ml; 442 Ci/mmol; New England Nuclear, Boston, Mass.). Conditioned medium was centrifuged at low speed to pellet any cells; adherent cells were scraped from the dishes in 0.2% sodium dodecyl sulfate (SDS) in 0.1 M Tris pH 8.8 with 2 mM phenyl methyl sulfonyl fluoride (PMSF), 2 mM EDTA, 1 mM N-ethyl maleimide (NEM), 1 mM iodoacetic acid (IAA), and heated to 90°C for 3 min. Lysates and conditioned media were centrifuged at 10,000 × g for 10 min.
Purification of Fibronectin and Preparation of Opsonins
Plasma fibronectin (pFN) was purified from human plasma on gelatin-Sepharose columns by a modification of the method of Engvall and Ruoslahti (14). Buffers for the washing, elution and dialysis of protein were degassed and flushed with nitrogen. Human plasma was applied to a gelatin-Sepharose column (8 mg gelatin/ml Sepharose) at a ratio of 1 ml of plasma/ml of gel bed. The column was washed with 5-10 vols of PBS, eluted with 8 M urea in CAPS buffer (10 mM cyclohexylaminopropane sulfonic acid, 150 mM NaCl, 1 mM CaCl2, pH 11), dialyzed, and stored at -30°C in CAPS buffer without urea. In some cases, protein was eluted with 1 M NaBr in 0.05 M Tris-HCl, pH 5.3 (11), or with 1 M arginine in 0.05 M Tris-HCl, pH 7.5 (48). Protein concentrations were determined by the absorbance at 280 nm (1.3 OD units per mg/ml) or by the method of Lowry et al. (29), using BSA as the standard. Normal human serum was depleted of plasma fibronectin by chromatography on gelatin-Sepharose (1 ml of serum/ml gel bed), and flow-through fractions were pooled. Control experiments showed that this procedure depletes plasma of >99% of its fibronectin. Depleted plasma is negative for fibronectin as assayed by Ouchterlony double diffusion, antibody staining of gels (5), and rerunning on a second gelatin-Sepharose column. Samples of normal human serum or pFN-depleted human serum were heat-inactivated at 56°C for 30 min.
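Since the pFN concentration is read directly from A280 using the stated factor of 1.3 OD units per mg/ml, the conversion is a one-line proportionality; the sketch below (function name and example readings hypothetical) assumes a 1-cm path length:

def pfn_mg_per_ml(a280, dilution_factor=1.0):
    # 1.3 OD units at 280 nm correspond to 1 mg/ml of plasma fibronectin.
    return (a280 / 1.3) * dilution_factor

print(pfn_mg_per_ml(0.65))        # 0.5 mg/ml
print(pfn_mg_per_ml(0.26, 10.0))  # 2.0 mg/ml in the undiluted stock (read at a 1:10 dilution)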
Antisera
Rabbit antisera to hamster cellular fibronectin have been described and characterized (8,30,43). The antisera were raised to hamster cellular fibronectin purified by successive runs of SDS-gel electrophoresis. The preparation of affinity-purified, rhodamine-conjugated antifibronectin has been described (22). F(ab')2 fragments of rabbit antifibronectin were prepared as follows. The immunoglobulin fraction (from 10 ml of serum) in 0.1 M sodium phosphate, pH 7.0, was passed over a 5-ml protein A-Sepharose column (Pharmacia Inc., Piscataway, N.J.); the column was washed, and IgG was eluted with 0.1 M glycine-HCl, pH 3.0. The eluted protein was digested for 20 h at 37°C with pepsin (Sigma Chemical Co., St. Louis, Mo.; 2 mg/100 mg Ig) in 0.1 M sodium acetate, pH 4.5. The reaction was stopped by raising the pH to 7.5, and the F(ab')2 fragments (in 0.1 M sodium phosphate, pH 7.0) were purified by application to a protein A-Sepharose column (5 ml); the flow-through contained purified F(ab')2 fragments, as confirmed by SDS-polyacrylamide gel analysis.
Rabbit antisera to human plasma fibronectin were raised to plasma fibronectin purified on gelatin-Sepharose and subsequently by preparative SDS-gel electrophoresis of reduced samples. The sera give single immunoprecipitation lines in Ouchterlony double diffusion against human plasma or plasma fibronectin and no precipitation line with pFN-depleted human serum. Direct rhodamine-conjugated, affinity-purified Ig against human fibronectin was prepared as described (22).
Immunoprecipitation
Samples of cell lysates or conditioned media were mixed with an equal volume of 1% Nonidet P40 (NP40), 1% sodium deoxycholate (DOC), 0.1 M Tris-HCl, pH 8.8, 2 mM NEM, 1 mM IAA, 2 mM PMSF, 2 mM EDTA, or with PBS, respectively, and either preimmune or anti-hamster cellular fibronectin rabbit serum was added. Samples were incubated for 1 h at 37°C with the first rabbit serum; goat anti-rabbit IgG (N. L. Cappel Laboratories Inc., Cochranville, Pa.) was then added for an additional hour. Samples were then incubated overnight at 4°C. Samples were washed three times in 0.1% SDS, 0.5% NP40, 0.5% DOC in 0.1 M Tris, pH 8.8, 2 mM PMSF, 2 mM EDTA, 1 mM NEM, 1 mM IAA, and washed pellets were resuspended in electrophoresis sample buffer and heated to 90°C for 3 min.
Electrophoresis and Fluorography
Samples were prepared and analyzed on 5% slab gels, using the buffers described by Laemmli (28). Gels were fixed and stained by the method of Fairbanks et al. (15). SDS gels containing [35S]methionine-labeled proteins were impregnated with 2,5-diphenyloxazole (PPO) as described by Bonner and Laskey (4), dried, and placed in contact with Kodak X-Omat R film.
Preparation of Gelatin-conjugated Latex
Gelatin was conjugated to latex by a modification of the procedure of Molnar et al. (33). One part gelatin (10 mg/ml) in distilled water, two parts latex particles (Dow Diagnostics, Indianapolis, Ind.; 10% vol/vol, 0.455 µm), and one part 1-cyclohexyl-3-(2-morpholinoethyl)-carbodiimide metho-p-toluene sulfonate in 0.2 M acetate buffer, pH 6, were mixed and incubated for 3 h at room temperature and then overnight at 4°C. The conjugated latex was washed and stored in PBS with 2 mM azide. In one set of experiments (Fig. 1 c), the gelatin concentration during conjugation was varied to generate beads with 2.9, 1.2, 0.7, and 0.4 fg/bead.
The conjugated beads were iodinated by the method of Hunter and Greenwood (19). A mixture of 100 µl of 25% (vol/vol) gelatin-latex beads in 0.5 M sodium phosphate buffer (pH 7.2), 250 µCi Na125I, and 10 µl chloramine T (5 mg/ml stock) was incubated at room temperature for 60 s. The reaction was stopped by the addition of 0.5 ml of sodium metabisulfite (25 mg/ml in 0.05 M sodium phosphate buffer); the particles were then washed 5-10 times and stored in PBS with 2 mM sodium azide. Before use, beads were sonicated to ensure a suspension of single beads. The concentration of beads was determined by comparing their OD to a standard set of diluted beads of known concentration (as supplied by Dow).
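Reading bead concentration off a set of standards of known concentration amounts to interpolating on the OD-versus-concentration curve; a minimal sketch follows (all numbers hypothetical, assuming the relation is monotonic over the range used):

import numpy as np

std_od   = np.array([0.05, 0.11, 0.22, 0.45, 0.90])        # OD readings of the standards
std_conc = np.array([0.6, 1.25, 2.5, 5.0, 10.0]) * 1e9     # beads/ml of the standards

sample_od = 0.30
beads_per_ml = np.interp(sample_od, std_od, std_conc)      # linear interpolation
print(f"{beads_per_ml:.2e} beads/ml")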
Centrifugation Assay for Phagocytosis
Iodinated gelatin-latex (50 µl) was added to a mixture of heparin (50 µl of a 100 U/ml stock) and serum (100 µl) with or without pFN, and the mixture was preincubated at 37°C, usually for 2 min, in a 12 × 75-mm polystyrene tube. Washed cells (250 µl; 2 × 10⁷ cells/ml) were then added and the incubation was continued at 37°C. At intervals, aliquots (100 µl) were transferred to tubes containing 2 ml of ice-cold PBS with divalent cations and 1 mM NEM. Samples were then washed two times by rapid centrifugation in a clinical centrifuge (700 g for 10 s) and counted for 125I in a gamma counter (Beckman Gamma 300; Beckman Instruments, Fullerton, Calif.).
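Because the readout is gamma counts, the fraction of input beads that is cell-associated follows directly from the counts in the washed pellet relative to the total added; a minimal sketch (cpm values hypothetical), with the no-cell incubation serving as the background control described in the Results:

def percent_cell_associated(pellet_cpm, total_cpm, no_cell_cpm=0.0):
    # Subtract counts that pellet in the absence of cells, then normalize to input.
    return 100.0 * (pellet_cpm - no_cell_cpm) / total_cpm

print(percent_cell_associated(pellet_cpm=2800.0, total_cpm=10000.0, no_cell_cpm=300.0))  # 25.0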
Immunofluorescence Assay for Phagocytosis
Two immunofluorescence procedures have been used to determine whether opsonized particles were internalized by cells. In the first, a mixture of gelatin-latex particles (10 µl), heparin (10 µl; 100 U/ml), and serum with or without pFN (20 µl) was layered over a cover slip of cells which had been washed three times in PBS. The cells were incubated for 30 min at 37°C, washed in PBS, and fixed in 3.7% formaldehyde in PBS for 30 min at room temperature. Cells were then stained for 30 min at 37°C with rhodamine-labeled, affinity-purified antifibronectin, washed in PBS, and then stained for 30 min at 37°C with rhodamine-conjugated F(ab')2 fragment of goat anti-rabbit F(ab')2 fragments (Cappel). The cover slips were then washed in PBS, mounted in Gelvatol (38), and viewed and photographed in a Zeiss Photomicroscope III with epifluorescence optics.
In the second procedure, cells were stained before fixation. Cover slips incubated with opsonized beads were washed in PBS and immediately stained for 30 min at 37°C with rhodamine-labeled, affinity-purified antifibronectin in PBS plus 1 mM azide. The cells were then washed in PBS, fixed in 3.7% formaldehyde, washed again in PBS, and mounted in Gelvatol. This second staining procedure resulted in very low background staining, but the cells had a more rounded morphology. Parallel cover slips stained by either procedure contained the same percentage of cells with internalized beads. With either protocol, all beads that were not associated with cells were stained and were readily visible in both phase and fluorescence. Beads associated with cells were of two classes: fluorescent (i.e., not internalized) and unlabeled (i.e., internalized). Accurate quantitation of the percentage of cells showing phagocytosis required that the plane of focus be changed to be certain that all beads were visualized. Controls using preimmune serum or antifibronectin preabsorbed with fibronectin-Sepharose (22) gave no staining in either protocol. In the case of beads incubated in the absence of fibronectin, the very few beads that do associate with cells naturally do not stain. In other experiments, these have been visualized by inserting an extra staining step using fibronectin to label the gelatin.
Electron Microscopy
Mixtures of cells and beads were prepared as for the centrifugation assay, incubated at 37°C for different times, and immediately fixed in 2% glutaraldehyde, 0.2% tannic acid in 0.1 M cacodylate buffer, pH 7.2, washed, postfixed in 1% OsO4 followed by 1% uranyl acetate, dehydrated through graded ethanols, embedded in Epon/Araldite, and sectioned. Sections were stained with uranyl acetate and lead citrate and viewed in a Philips EM 201 microscope.
Fibronectin Promotes the Binding of the Latex Particles by Phagocytic Cells
Initially, to compare our data with those of others, we used an assay in which slices of rat liver were incubated with the beads plus various serum preparations and heparin (33). In agreement with others (16,33,40), we observed that fibronectin and heparin were required for the binding of gelatin-latex to liver slices (data not shown). Whereas the data are consistent with the phagocytosis of these particles by Kupffer cells as suggested by others (3), this assay is unable to discriminate between ingestion and membrane attachment of these particles or between binding to hepatocytes and binding to other cells.
Because of the difficulties in adequately defining ingestion in a complex system such as the liver, we investigated endocytosis of gelatin-conjugated latex particles by cultured cells. In screening different populations of phagocytic cells for efficient uptake of gelatin-latex, we observed a high level of binding by the P388D1 cell line. These cells can be easily grown in continuous culture, are a homogeneous cell type, and can be easily labeled with radioactive precursors (26). Experiments were first carried out using these cells in a suspension assay with iodinated gelatin-latex, various sera, and heparin. We observed binding of gelatin-latex particles to the cells which was dependent upon the presence of pFN (Fig. 1). The activity of human sera depleted of FN could be restored by the addition of affinity-purified pFN in a dose-dependent fashion (Fig. 1 b). Similar results were obtained with human plasma depleted of FN with and without heat inactivation (not shown). The increase in cell-associated gelatin-latex particles reached a maximum within a 20-min incubation at 37°C. The maximum level of cell-associated counts in the presence of pFN ranged from 10 to 30% of the total particles and generally represented a tenfold stimulation of binding over background. If incubations were carried out at 37°C but without cells present, little or no activity was observed above background (Fig. 1 a and c). Thus, the observed pelleting of particles was dependent on the presence of cells and did not merely represent aggregation of the beads by pFN. The binding of beads by cells was also dependent on the amount of gelatin conjugated per bead (Fig. 1 c). This panel also shows that the association of beads with cells is inhibited by low temperatures.
Previous reports (33,40) had indicated a dependence of the liver slice assay on heparin, so we investigated its role in our assay system. If heparin was omitted from the incubation mixture, the level of binding in the presence of pFN approximated that of samples without pFN (Fig. 1 a). Fig. 1 d shows the dependence of binding on heparin concentration. 10 U/ml (59 µg/ml) were used in all subsequent experiments. Experiments (not shown) were also carried out using chondroitin sulfate, dermatan sulfate, keratan sulfate, or hyaluronic acid at the same concentration as heparin. None of these compounds substituted for heparin. Heparan sulfate (59 µg/ml), on the other hand, did stimulate phagocytosis, but less well than did the commercial heparin used in the experiments described.
In conclusion, association of gelatin-coated latex with macrophages is dependent on temperature and on the concentrations of fibronectin, gelatin, and heparin.
Evidence that Gelatin-Latex Particles Are Ingested: Electron Microscopy
The centrifugation experiments suggested that particles were bound by cells. To determine whether the test particles were actually ingested or simply associated with the cell surface, we used electron and light microscopy.
Samples were prepared for electron microscopy by mixing opsonin and latex beads with cells in suspension, washing, fixing the pellets, and sectioning. When human serum was used as an opsonin and incubated at 37°C with P388D1 cells, beads were found inside the cells (Fig. 2 a). If the serum was depleted of pFN, very few beads were associated with the cells (Fig. 2 b). The decrease in endocytosis caused by depletion of pFN could be reversed by reconstituting the sample with purified pFN (Fig. 2 c). Samples prepared with pFN present (Fig. 2 a and c) had beads both inside and outside the cells. The internalized beads were enclosed in membrane-bounded vesicles (Fig. 2 d), and these vesicles often contained more than one bead. If samples were incubated at 4°C in the presence of pFN, beads were not internalized (not shown).
To quantify binding and endocytosis, samples were prepared with or without pFN, incubated at either 0° or 37°C, and the percentage of cells with beads inside and the numbers of beads inside cells were determined. In the absence of pFN, very few beads were found inside the cells. However, in the presence of pFN at 37°C an increase was observed in the percentage of cells with beads internalized (Fig. 3 a). A corresponding increase in the number of beads inside the cells (Fig. 3 b) was also observed with pFN at 37°C. If a mixture of pFN, beads, heparin, and P388D1 cells was incubated at 0-4°C instead of 37°C, very few beads were observed inside the cells (Fig. 3 a and b).
Evidence that the Gelatin-Latex Particles Are Ingested: Immunofluorescence Microscopy
Because the electron microscopic assay was time consuming, we developed an immunofluorescence assay that allowed ready quantitation of binding and uptake of beads. P388D1 cells grown on cover slips were incubated with gelatin-latex, heparin, and different opsonins. After incubation, the cover slips were washed and stained for fibronectin by indirect immunofluorescence (see Materials and Methods). The cells were not permeabilized, allowing visualization of beads outside (fluorescent and phase-dense) contrasted with those inside (phase-dense only). Examples are shown in Fig. 4. When human serum was used as an opsonin, beads were found inside the cells (Fig. 4 a). If serum fractions depleted of FN were used, very few beads were found in the field (Fig. 4 b). If serum was reconstituted with fibronectin, phagocytosis was again clearly observed (Fig. 4 c). In all samples containing fibronectin, beads were found in aggregates both inside and bound to the outside of the cells.
To obtain a more detailed picture of the conditions necessary for phagocytosis, we determined the percentage of total cells with associated beads (inside and outside) as a function of various conditions (Fig. 5). Binding and internalization were dependent on both fibronectin and heparin, and both were inhibited by low temperature. Addition of F(ab')2 fragments of antifibronectin antibody inhibited both binding and internalization, indicating that fibronectin is involved in both processes.
Do Macrophages Produce or Bind Fibronectin?
If P388D1 cells are to serve as a useful assay for exogenous fibronectin, it is important to determine whether they produce fibronectin. As shown in Fig. 6, they do not. Whereas fibronectin can readily be detected in the culture medium of two other phagocytic murine cell types (RAW and peritoneal exudate cells [PEC]) by immunoprecipitation, none could be detected in either culture medium or cell lysates of P388D1 cells. P388D1 cells were also negative for fibronectin by indirect immunofluorescence (data not shown). Therefore, assays for fibronectin effects on endocytosis in this system are not complicated by endogenous production of fibronectin. The same is not true for all phagocytic cells.
DISCUSSION
Our results provide convincing evidence for fibronectin-mediated stimulation of binding and endocytosis of particles by a pure population of phagocytic cells. We have developed two rapid assays for binding and endocytosis of gelatin-coated latex by the mouse macrophagelike cell line, P388D1, first isolated by Dawe and Potter (10) and characterized in detail by Koren et al. (26). This cell line is easily grown in suspension or monolayer culture, can be metabolically labeled, will readily phagocytose particles coated with IgG or complement, and secretes neutral hydrolases characteristic of macrophages (see reference 34 for review).
The dependence of binding and endocytosis on fibronectin was shown by a strict dose dependence (Fig. 1 b) and by inhibition with F(ab')2 antifibronectin (Fig. 5). Fibronectin active in our assays can be prepared by elution from gelatin-Sepharose with either urea or sodium bromide and with or without low levels of reducing agent (0.01% 2-mercaptoethanol, data not shown). This latter result is in contrast with reports that inclusion of mercaptoethanol during purification is a requirement for protein active in the liver slice assay (2). Binding and internalization were also dependent on heparin, consistent with the results of others using other systems (3,13,16,18). The reason for the heparin dependence is obscure. Several studies have shown that fibronectin, gelatin, and heparin can participate in the formation of ternary aggregates (6,23,24,45). However, aggregation is not a sufficient explanation for our results because (a) in the presence of F(ab')2 fragments of antifibronectin, aggregates of beads still formed but internalization was inhibited (Fig. 5); and (b) aggregation of beads in the absence of fibronectin sometimes occurs and is not associated with phagocytosis (e.g., when beads are not sonicated before use).
We cannot, however, rule out the possibility that aggregation is a necessary concomitant of phagocytosis in this system, and some observations are consistent with this idea. As the amount of gelatin on the beads is varied (Fig. 1 c), lower levels of gelatin lead to reduced aggregation and also produce lower levels of cell association of aggregates. Conceivably, the heparin acts as a cofactor by promoting aggregation, and, while the role of heparin in endocytosis remains unclear, it would be premature to suggest that fibronectin acts as an opsonin analogous to IgG or C3b. However, the current results show clearly that, in the cell system studied, addition of fibronectin increases endocytosis from a very low level to a significant level, whether by binding to specific receptors on the cell surface or by promoting aggregation or both.
FIGURE 4 Analysis of cell-associated beads by immunofluorescence microscopy. Samples of human serum (a), fibronectin-depleted human serum (b), or fibronectin-depleted human serum reconstituted with purified plasma fibronectin (c) at a final concentration of 0.2 mg/ml were mixed with gelatin-latex. An aliquot (20 µl) of the mixture was placed over P388D1 cells grown on cover slips, incubated for 30 min at 37°C, washed, fixed, and stained by the first procedure outlined in Materials and Methods. Internalized beads (black arrows) are unstained, while external beads (white arrows) are stained by fluorescent antifibronectin. Fixation before staining results in higher background immunofluorescence but flatter cells more suitable for photography. Bar, 10 µm.
FIGURE 5 Quantitation of cell-associated beads by immunofluorescence microscopy. Procedure as in Fig. 4. Cover slips were stained by the second procedure described in Materials and Methods and scored for the percent of total cells with beads associated (shaded bars) or internalized (open bars). In some samples, heparin was omitted or rabbit anti-hamster cellular fibronectin F(ab')2 fragments (~50 µg/µg FN) were preincubated with pFN (30 min at 37°C) before addition to the bead mixture. Incubation was carried out with cells for 30 min at 37°C in all cases but one (pFN, 4°C).
Less marked stimulation of phagocytosis was observed with several other phagocytic cell types (mouse peritoneal macrophages, RAW 309 Cr.1 cells, and human neutrophils). It may be relevant that we observed that RAW 309 Cr.1 cells and PEC both synthesize fibronectin, whereas P388D1 cells do not (Fig. 6). Pearlstein et al. (37) reported that PEC do not have surface fibronectin, whereas Colvin et al. (9) found that they do. Recently, Johansson et al. (25) observed that these cells can synthesize and secrete fibronectin. A report has also appeared on synthesis of FN by human monocytes (1). So it is possible that some phagocytic cells such as P388D1 require exogenous FN whereas other cell types do not, either because they make their own or because they do not use FN at all. The same might also be true for the requirement for heparin: some cells may be able to synthesize polyanionic species such as heparan sulfate which could replace the exogenous heparin. There has been much interest in the possibility that fibronectin may act as an opsonin in reticuloendothelial clearance of particulates from the circulation (3, 33, 41). The establishment of rapid and straightforward assays using well-defined cells and particles should aid in the further analysis of the proposed opsonic activity of fibronectin. | 2014-10-01T00:00:00.000Z | 1981-07-01T00:00:00.000 | {
"year": 1981,
"sha1": "2c8a1e7893b06fed107e09d791d1700ce3019966",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/90/1/32/1389328/32.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "2c8a1e7893b06fed107e09d791d1700ce3019966",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Biology",
"Medicine"
]
} |
225517038 | pes2o/s2orc | v3-fos-license | LIFE CYCLE ASSESSMENT IN SUPPLY CHAIN MANAGEMENT GAME
Educational games are essential in explaining theories to students as such activities create a fun learning environment. Most educational games in supply chain management (SCM) are focused on SCM or logistics theories. In the last decades, the research in environmentally conscious SCM has increased. However, the educational games related to such SCM are limited. This work is the first to use the detailed life cycle assessment (LCA) approach in teaching students SCM. The research output is a game called “Robo Factory,” which involves a simulation of a robot production supply chain. The research objective is to educate undergraduate students about the SCM structure, the duties and responsibilities of SCM actors, the LCA approach, and the cost types in SCM and LCA. The paper describes the game design process. The game design entails three steps: (1) game conceptualization and prototype design; (2) prototype trial, evaluation, and finalization of game rules; and (3) final games. Evaluation results indicate that the game successfully teaches undergraduate students about the theory of LCA in SCM in an enjoyable manner. The posttest shows an overall increase in students’ knowledge. The paper presents the future research directions and implications for scholars to enhance their contributions.
the inventory as a retailer, wholesaler, distributor, or manufacturer. Other SCM games include the innovative practical games published by Vanany and Syamil in 2016 (Vanany & Syamil, 2016). This board game uses Lego® blocks to emphasize the role of a supply chain manager, which is crucial in determining the victory of a team. The supply chain manager is responsible for ordering, calculating, and assigning jobs to the operator. The game uses supply chain costs, which are calculated to determine the winning team, i.e., the one reporting the least cost. A summary of previous research on teaching SCM is available in the work of Vanany and Syamil (2016), where they collected 24 papers from 1998 to 2013; 62.5% of these papers used or developed games while 37.5% developed contents (curriculum) for teaching SCM subjects. The summary itself highlights the wide use of the beer game and other simulation games. Grandzol and Grandzol (2018) designed the "Chantey Castings" Simulation to teach SCM with a focus on a demand-driven approach and constraint management.
The game uses Play-Doh to teach students to match customer demand while learning the concept of SCM. Sato et al. (2017) designed a specific SCM game for milk to increase the awareness of food waste among university students in Japan. Liu (2017) used an open-source simulation video game to teach supply chain and logistics management. The educational games in SCM are based on different objectives. Hence, how lecturers develop educational materials to reinforce students' learning and enrich their hands-on experience is an interesting topic.
In the Scopus database, a search for papers written in English and those in the final publication stage by using key terms such as "Supply Chain Management" in the title and "green" OR "sustainability" OR "sustainable" within the results yields 3,041 documents from 1995 to 2019. The key terms used in the search are based on how research papers usually refer to the environment using words such as "green," "sustainable," or "sustainability." The analysis of the results shows a sharp rise in the consideration of the environment in the field of SCM in the past 10 years (Figure 1). This trend implies the growing importance of the environment in SCM.

Figure 1. Analysis of search results from the Scopus database using the key terms "supply chain" in the title and "green" OR "sustainability" OR "sustainable" within the results from 1995 to 2019. Source: Scopus (2020)

Sustainable supply chain management (SSCM) has become a major research hotspot due to the extensive efforts to protect our environment. Carter et al. (2019) performed a systematic literature review on the evolution of SSCM in the last 28 years and detailed the possible future direction of SSCM research. Different methods and frameworks for measuring SSCM performance have also been suggested (Beske-Janssen et al., 2015; Brandenburg et al., 2019; Jensen, 2012; Khalid & Seuring, 2019; Lu et al., 2018; Ni & Sun, 2019; Parmigiani et al., 2011; Paulraj et al., 2017; Qorri et al., 2018; Ramezankhani et al., 2018; Rebs et al., 2019; Taghikhah et al., 2019; Yun et al., 2019; Zimon et al., 2019). One of these methods is the life cycle assessment (LCA) approach. The focus of SSCM research has shifted to three directions, namely, social and environmental problems in SSCM, measurement and management of sustainability performance in SSCM, and measurement of the impact of SSCM on company finances (Beske-Janssen et al., 2015).
Another term used in SSCM research is green supply chain management (GSCM). GSCM involves the integration of the environment in SCM (Chin et al., 2015). In SSCM and GSCM, researchers have carried out extensive discussions and implementations of environmental measurements to reduce environmental impact while increasing business profit. One environmentally relevant approach is LCA, whose strengths include the completeness of the life cycle perspective and its environmental scope (Hauschild, 2018). According to ISO 14040, the LCA approach is a method for analyzing the environmental aspects and impacts related to products by compiling inputs and outputs from related inventories, evaluating potential environmental impacts associated with these inputs and outputs, and interpreting the results of the inventory analysis and the impact assessment phase in relation to research objectives (ISO 14040:2006, 2016). In LCA, the potential impact of certain factors on the environment is evaluated using groups of data obtained at the inventory analysis stage. IMPACT 2002+ is one of the most commonly used methods. This method uses transactions between industry sectors, including environmental data emissions (e.g., sulfur dioxide, particle matter, and carbon dioxide) and consumption of natural resources (e.g., coal, natural gas, and petroleum products), to determine the environmental impact of the entire supply chain within the economy. The integration of LCA in SCM was proposed by Fornasiero et al. (2017), Genovese et al. (2017), and Blass and Corbett (2018).
A serious game that teaches university students about environmental decisions in enterprises and supply chains was created by Qualters et al. (2006), Zhang and Zwolinski (2015), and Cuesta and Nakano (2017). The environmental issues in supply chains have gained popularity among researchers, who have thus designed education games related to environmental decisions in SCM. However, to the best of our knowledge, no educational game includes a detailed LCA in SCM. Hence, the current work attempts to fill the research gap by creating a game called the "Robo Factory," which is expected to help explain the importance of environmental aspects in SCM. The study considers the "life cycle assessment" course offered by the University of Pelita Harapan.
In the SCM course, the Robo Factory game could be used as an introduction for students and as a bridge to understanding the relationship between LCA and SCM. Through this game, undergraduate students can recognize the link between sustainability issues and SCM. The proposed game is designed to educate players about the SCM structure, the duties and responsibilities of SCM actors, the LCA approach, and types of costs in SCM and LCA. This research focuses only on LCA as a performance measurement of SCM. The paper is structured as follows. Following the introduction, the research methodology for game design and development is discussed, and the final game is analyzed. Conclusions are then presented, along with suggestions for further research.
Methods
In this part of the study, the literature related to SCM and its games is reviewed. Previous studies employed different approaches in developing games, but their goals are essentially the same, that is, to engage students' interest and to enrich their experience. Bloom's taxonomy is widely used in the design of learning objectives for cognitive learning skills and in the measurement of learning outcomes for educational purposes (Adams, 2015; Adesoji, 2018; Ramirez, 2016). Bloom's taxonomy can connect the conceptual aspects of a game to the cognitive level (Brewer & Brewer, 2010). Moreover, it helps ensure that the steps for measuring the specifications of learning objectives are being followed. However, the only previous work related to SCM games based on Bloom's taxonomy is that by Vanany & Syamil (2016).
Herein, the selected process is the game design process by Duke (1981). The original work is not accessible but is detailed in the study of Kuijpers (2009). The selected process comprises three stages ( Figure 2). The proposed Robo Factory game uses this game design process with modifications in the evaluation phase (stage 2) based on Bloom's taxonomy. The evaluation involves the use of open-ended questions. For the test, the question is designed and developed using the original framework of Bloom's taxonomy (Bloom, 1956). The structure has six main categories: knowledge, comprehension, application, analysis, synthesis, and evaluation.
First Stage: Game Conceptualization and Prototype Design
Initial conceptualization is the heart of game development; it is the most crucial stage. It involves naming the game, defining its purpose, identifying its parts, and deriving its description. In this work, the proposed game is the SCM in a robot factory, hence the name "Robo Factory." Figure 4 shows the parts used for the game prototype. For the "parts of the robot," 540 Goldkids building blocks that resemble Lego® blocks are used. The "day marker" is made of paper on which the positions of shipping and players are marked. For the "transportation" prototype, a brown paper bag is used as a vehicle. The "turn marker" prototype indicates the turn in the game. This game is a turn-based game that mimics the process in the supply chain of robot production. The players are the students in the SCM class. They are divided into several groups.
Each group consists of two competing factories. Other players include a supplier, a customer, a supply chain manager, an operator, and a moderator. The moderator provides a pretest, introduces the rules of the game, and then assigns the students their different roles. The game commences. A posttest is conducted after the game. The game begins with an order from the customer to the supply chain manager, who forwards the order to the supplier. The order is then assembled by the operator. The game ends when the customer receives the order. The group with the least total cost or the highest total score wins the game. The prototype is designed using the guidelines set by Peters et al. (1998), as explained by Kuijpers (2009). The framework is used to overcome the errors found in game development (Table 1). In designing the game, the input and output of the game must be described. Meijer (2009) described the input and output of a game session (Figure 5). The input is the game design, game situation, and players. The game design consists of the roles, rules, and objectives of the game.
The game situation refers to the event card, which provides several delivery scenarios; it is explained further in the next section. The output of the game session is a player with knowledge and data to be processed (pretest and posttest data). The details of the game are presented in the final phase of game development. They are explained briefly in this section to shorten the length of the paper. As the process is a step-by-step improvement, discussing all details may be redundant. This prototype comprises one customer, two factory teams (each team consists of one supply chain manager and one operator), two suppliers (supplier X and supplier Y), and one moderator. The customer takes a demand card and checks whether or not the demand that arrives has the right specifications and whether the delivery is delayed or on time. The supply chain manager is responsible for determining the number of orders, the size of inventory, the number of shipments, and the transportation used. The supply chain manager also supervises the operator. The operator assembles the products quickly and accurately and then sends the demand according to the instructions of the supply chain manager. The supplier is responsible for fulfilling the orders of the supply chain manager according to the sequence of the teams' orders, starting from those that were placed first. The moderator is in charge of monitoring the players and changing the turn marker in each turn.
For this game, the two products are Alpha and Beta, which are represented by Goldkids building blocks whose basic ingredient is acrylonitrile butadiene styrene.
The product Alpha consists of one A part, two C parts, one D part, one E part, three F parts, five G parts, and one J part. The product Beta consists of one B part, three C parts, one D part, three F parts, one G part, one H part, one I part, one J part, and one K part. The bills of materials of Alpha and Beta are shown in Figure 6.
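The bills of materials lend themselves to a plain dictionary representation, which makes order-to-parts conversion mechanical. The sketch below encodes the part counts stated above; the function and the example order are illustrative assumptions, not part of the published game materials.

```python
# Bills of materials for the two Robo Factory products (part letters and
# counts taken from the text; see Fig. 6).
BOM = {
    "Alpha": {"A": 1, "C": 2, "D": 1, "E": 1, "F": 3, "G": 5, "J": 1},
    "Beta":  {"B": 1, "C": 3, "D": 1, "F": 3, "G": 1, "H": 1, "I": 1, "J": 1, "K": 1},
}

def parts_for_order(order):
    """Convert a product order, e.g. {'Alpha': 2, 'Beta': 1}, into part counts."""
    needed = {}
    for product, qty in order.items():
        for part, count in BOM[product].items():
            needed[part] = needed.get(part, 0) + qty * count
    return needed

print(parts_for_order({"Alpha": 2, "Beta": 1}))
# {'A': 2, 'C': 7, 'D': 3, 'E': 2, 'F': 9, 'G': 11, 'J': 3, 'B': 1, 'H': 1, 'I': 1, 'K': 1}
```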
A truck, a ship, and an airplane are used as the vehicles for the game. The design covers the demand arrival standard, holding cost calculation, and LCA score. The details of the final prototype design are provided after the trial and error phase. The trial and error is carried out by simulating the game with three different scenarios. After the game trial, the researcher and players discuss the results to improve the game further. The results of the trial and error of the Robo Factory prototype are analyzed to improve the game. First, the order from the supply chain manager to the supplier, with the product as a reference, is converted into parts, the numbers of which are limited as follows: three head parts for Alpha and three head parts for Beta in one turn. This limit is set to help the supplier in checking the availability of goods and the supply chain manager in devising strategies to win the game (i.e., the manager may want to deplete the supplier's stock so that the opponent cannot fulfill its product requirements). Second, three types of vehicles are used: trucks, ships, and aircraft. As the experiment progresses, only trucks and ships are used to simplify the game.
The aircraft is excluded because the transportation time is classified as too fast (one turn), and 540 parts are not enough (they may run out too quickly).
Third, the initial plan is to provide additional scores for players who successfully fulfill orders that arrive earlier than the demand arrival standard. However, any additional score is no longer provided to simplify the calculation. The scores remain subject to reduction for each late-arriving demand, the value of which is one. Fourth, the initial plan is to provide additional scores for players who successfully match the orders with the bill of materials. However, this idea is abandoned to simplify the calculation. Nonmatching of orders for each product receives a score reduction instead. However, if the ship holds fewer than 39 parts, then the shipment is made in the next turn. Figure 7 shows the distances under the assumption that the customer is located in Semarang Tawang, Semarang; the factory is located in Gading Nias, North Jakarta; and the supplier is located in Margomulyo, Surabaya. The CO2e values are obtained from product weight data of 4.5 tons (15 parts) and distance data processed using the SimaPro application (Tables 2 and 3).
The locations are determined on the basis of easy access to the port.

Source: Authors (2020)

The demand arrival standard is set to calculate the delay of product arrival (Table 4). This standard is used by the customer to declare whether the demand is delayed or delivered on time.
The demand card contains the customer requests per turn for the Alpha and Beta products. The quantity of demand for each product ranges from 0 to 2. This value is obtained from the total available products (20 Alpha products and 20 Beta products) divided by the first demand arrival standard (turn 7); the result is 2.85, rounded down to two Alpha products and two Beta products.
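The demand cap works out as a one-line floor division, shown below with the numbers stated in the text.

```python
import math

total_products = 20        # 20 Alpha (and likewise 20 Beta) products available
first_arrival_turn = 7     # first demand arrival standard
max_demand_per_card = math.floor(total_products / first_arrival_turn)
print(total_products / first_arrival_turn, "->", max_demand_per_card)  # 2.857... -> 2
```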
The event cards are created to mimic natural events that may occur in real life. Their purpose is to add an unexpected event that may lengthen or shorten the transportation time. This card is held by one of the suppliers, is taken at each turn before shipping, and applies only to the vehicle that will depart from the supplier to the factory. This card is separated from the deck after being taken. Two cards are used for each event card on the deck. Table 5 shows the content of the event cards in this game:

A new toll opens; the trip is faster by one day (i.e., the truck is placed on the day 2 marker).
A new road opens; the trip is faster by one day (i.e., the truck is placed on the day 2 marker).
Empty harbor; the trip is faster by one day (i.e., the ship is placed on the day 2 marker).
A new and faster ship is available; the trip is faster by one day (i.e., the ship is placed on the day 2 marker).
Source: Authors (2020)

The evaluation phase is designed to measure the knowledge of the players before and after the game (pretest and posttest). In stage 1, the questionnaire for the prototype trial evaluation consists of seven questions. Questions 1 and 2 refer to the supply chain structure; question 3 refers to tasks and responsibilities; questions 4 and 5 discuss LCA; and questions 6 and 7 refer to the supply chain cost, LCA, and performance. The questionnaire for the prototype trial and the final prototype based on Bloom's taxonomy is presented in Figure 12. The game rules and instructions are described in the finalized version (improvement after trial).
Second Stage: Prototype Trial, Evaluation, and Finalization of Game Rules
The prototype trial is conducted at the University of Pelita Harapan. The trial involves seven players and three researchers, with one serving as the moderator and the other two serving as observers (Figure 8).
Figure 8. Prototype trial documentation
Source: Authors (2020)

According to the pretest and posttest results, the players achieve an overall increase in their knowledge after playing the game.

2. Transportation is difficult to distinguish because the paper bags do not greatly differ.
3. Upon reaching the supplier, the order lists of both teams are often mixed.
4. The explanation of the game is still difficult to understand.
5. The game instructions are too long.
6. Suppliers and customers do not participate in the calculation of the LCA cost. The objective of all players being able to perform the calculation is not achieved.
The improvements for the finalized game rules based on the prototype trial are listed below. The game boundaries are listed below.
1. The supply chain manager can choose to use a truck or a ship, and the number of available vehicles is 20 units each.
2. The supply chain consists of the supplier, factory, and customer.
3. The LCA is limited to the calculation of CO 2 e and kWh.
4. The CO 2 e calculation is performed in Impact 2002+.
5. The calculation unit of the transportation score is changed from CO2e to rupiah. This improvement is aimed at providing knowledge about conversions in LCA. The conversion factor of CO2e to kWh is added and then converted to rupiah.
The assumptions in this game are provided below.
2. The truck used in this game is a double box colt diesel type with a capacity of 6.5 tons.
3. The holding cost is assumed to be 20% of the shipping cost.
6. Electricity costs per kWh are used for household needs > 6,600 VA for Rp. 1,352.
7. The dividing factor in LCA cost is 10 9 in rupiah.
8. Each month has 20 working days; hence, one year has 240 working days.
Tables 6, 7, 8, and 9 present the conversion from cost to score for this game after the discussion between the players and the researcher.

As explained in the previous section, the holding score is initially calculated on the basis of the parts only. However, on the basis of the final decision, the holding cost is added as a conversion factor to obtain the holding score that mimics real-life events (Table 6). A demand is deemed delayed if it arrives beyond the standard time of arrival presented in Table 4. The value of delay is always 1, and the number of days of delay is not considered.
For example, demand #1 should arrive at the customer at turn 7, but it reaches the customer at turn 8; hence, demand #1 is delayed by 1. In the next example, demand #1 reaches the customer only at turn 10; demand #1 is still delayed by only 1. A fast delivery gains no additional points. The total delay is then converted to a delay score (DyS) based on Table 7. The defect score (DfS) is only counted if the customer receives a defective product. In this game, a defect occurs when the product does not match the specifications; for example, a defect may be a wrong color, wrong parts in the product, etc. Every mistake is counted as one defect cost; hence, if a product received by the customer has the wrong color and wrong parts, two defect costs are incurred. The total defect is then converted to the defect score on the basis of Table 8. The defect score reduces the total score. The transportation score (TS) is obtained from the conversion of the total CO2e into rupiah.
The CO2e value is then converted to kWh, and the kWh value to an LCA cost in rupiah, using the following formulas:

kWh value = CO2e value / (CO2e-per-kWh converting factor), (1)

LCA cost = kWh value × (electricity cost per kWh) / (dividing factor). (2)

The converting factor is 0.35156 CO2e per kWh, the electricity cost per kWh is Rp. 1,352, and the dividing factor is 10⁹ in rupiah. Based on equations (1) and (2), the LCA cost is equal to the CO2e value × 3.8457 × 10⁻⁶. The results are converted to TS on the basis of Table 9. The total score is obtained by adding the holding score (HS), delay score (DyS), defect score (DfS), and transportation score (TS).
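A small sketch of this conversion chain is shown below, using the stated constants; the function name and the example CO2e total are hypothetical, and the final mapping to a Table 9 score band is omitted since the published bands are not reproduced here.

```python
# Transportation-score conversion using the constants stated in the text:
# 0.35156 kg CO2e per kWh, Rp 1,352 per kWh, dividing factor 1e9.
CO2E_PER_KWH = 0.35156
RP_PER_KWH = 1352.0
DIVIDING_FACTOR = 1e9

def lca_cost(co2e):
    kwh = co2e / CO2E_PER_KWH                  # eq. (1)
    return kwh * RP_PER_KWH / DIVIDING_FACTOR  # eq. (2)

co2e_total = 100.0  # hypothetical total kg CO2e for one team's shipments
print(f"LCA cost: {lca_cost(co2e_total):.3e}")                  # 3.846e-04
print(f"shortcut: {co2e_total * 3.8457e-6:.3e}")                # same value
```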
Third Stage: Final Game
The third stage is the final game. The improvements from the prototype to the final game are listed with some explanations. First, Tables 2 and 3 in the prototype are changed into a concise and simple table in the final game for players to use in the calculation. For ease of reference, the final table is presented in Figure 9. Second, this table shows the improvement for CO2e transportation from the supplier to the factory. Third, the paper bags used for transportation are marked with stickers to avoid confusion. Different colors are assigned to factories A and B. Fourth, the day marker is fixed and printed in a 250 cm × 160 cm banner ( Figure 10).
The previous day marker can be seen in Figure 4b. Fifth, the game rules are simplified from a five-page document to a one-page document. Sixth, the player roles have been changed on the basis of the evaluation in stage 2; the final roles are presented in Figure 11. For example, the supply chain manager (1) plans the supply within every boundary, plans the shipment strategy to the customer, chooses the transportation mode, and gives orders to the operator, while the supplier (4) sends the parts to the factory as demanded by the SCM manager and takes and reads the event cards.
Seventh, the pretest and posttest questionnaires for the prototype are improved by rephrasing the questions to eliminate ambiguity. The differences between the previous questionnaire and the final questionnaire are only the points in each question and one added question. To avoid repetition and make the article more concise, only the final questionnaire is presented in Table 10. The final game is conducted at the University of Pelita Harapan. The players are industrial engineering students in the SCM course. The total number of students is 35. The game is split into three sessions from 08:00 to 12:40. The documentation process is presented in Figure 12.
The final game evaluation is carried out by verifying and validating the game.
Final Game Verification
An evaluation of whether the basic needs of the game are met is conducted to determine the performance of the proposed Robo Factory game. The basic requirements of this game are based on those by Kuijpers (2009). Basic needs numbers 1, 2, 3, 4, and 6 are measured in the pretest and posttest.
Final Game Validation
The guidelines (Table 1) for ensuring game validity are followed in the process of developing the game; hence, the game has no errors. The process of developing a game is not a purely sequential order, and some steps need to cycle back to the previous step to improve the final game. The "Robo Factory" has followed these steps and used the guidelines to eliminate errors while ensuring that the objective of the game is achieved.
The objective is to improve the knowledge of students, which is measured using a questionnaire in the pretest and posttest. The game is deemed satisfactory if it improves at least 50% of the students' knowledge. Table 11 shows the percentage increase from the pretest to the posttest; the minimum percentage is 51.43%. The percentage is calculated from the mean difference between the posttest and the pretest. None of the players initially had knowledge about LCA, and their scores improve by almost 90%.
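The Table 11 metric reduces to a mean paired difference, as the toy calculation below illustrates; the score vectors are hypothetical, not the actual test data.

```python
import numpy as np

# Hypothetical pre/post scores on a 0-100 scale for five players.
pretest = np.array([40, 55, 30, 50, 45], dtype=float)
posttest = np.array([90, 95, 85, 100, 90], dtype=float)
improvement = (posttest - pretest).mean()
print(f"mean pre-to-post improvement: {improvement:.2f} percentage points")
```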
The game design process is a repetitive cycle that calls for improvement at every stage, and it involves a step-by-step approach to creating a successful educational game design. Verification and validation are subjective tasks, so the researcher should follow the steps carefully to make the whole process more objective. This process also gives direction for the goal of the game and for improvement during the development process.
Conclusion
The results of this research are satisfactory. The Robo Factory successfully teaches undergraduate students about LCA in SCM within an enjoyable environment. The game proves to increase the knowledge of students and is deemed enjoyable. The knowledge target related to supply chain structure increases by 51.43%, and the highest improvement is that for LCA at 89.53%. The game is rated to be fun and very fun by 85.7% of the students. Meanwhile, 14.3% of the students are neutral about the game. The contribution of this research is the Robo Factory game, which is expected to enrich the area of supply chain games with LCA to achieve an environmentally conscious SCM. The intellectual property rights of the Robo Factory game were registered in Indonesia on 21 March 2019 (no. EC00201933456).
The game design process and the parts of the game are described thoroughly in this work to help other researchers who are interested in developing the proposed game further. Further research can improve the game by extending its application, adding other actors in the game, adding other elements in the LCA, and adding or changing an environmental aspect in the game. | 2020-08-20T10:03:24.635Z | 2020-07-31T00:00:00.000 | {
"year": 2020,
"sha1": "7476b971f3d143a0ab71e23aed11c0c7422d9d5e",
"oa_license": null,
"oa_url": "https://scholarhub.ui.ac.id/cgi/viewcontent.cgi?article=1045&context=jessd",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "37c7356a5bb96c62a75ecfc8a4eda9c5b5c75fcd",
"s2fieldsofstudy": [
"Environmental Science",
"Business",
"Engineering"
],
"extfieldsofstudy": [
"Business"
]
} |
54591108 | pes2o/s2orc | v3-fos-license | A New Approach for Accurate Prediction of Liquid Loading of Directional Gas Wells in Transition Flow or Turbulent Flow
Current common models for calculating continuous liquid-carrying critical gas velocity are established based on vertical wells and laminar flow without considering the influence of deviation angle and Reynolds number on liquid-carrying. With the increase of directional wells in transition flow or turbulent flow, the current common models cannot accurately predict the critical gas velocity of these wells. So we built a new model to predict continuous liquid-carrying critical gas velocity for directional wells in transition flow or turbulent flow. It is shown from sensitivity analysis that the correction coefficient is mainly influenced by Reynolds number and deviation angle. With the increase of Reynolds number, the critical liquid-carrying gas velocity increases first and then decreases. And with the increase of deviation angle, the critical liquid-carrying gas velocity gradually decreases. It is indicated from the case calculation analysis that the calculation error of this new model is less than 10%, whose accuracy is much higher than those of current common models. It is demonstrated that the continuous liquid-carrying critical gas velocity of directional wells in transition flow or turbulent flow can be predicted accurately by using this new model.
Introduction
Gas wells are usually exploited by depletion drive. The ultimate recovery of a pure gas reservoir is generally more than 90%, which is far more than the oil recovery [1]. Liquid loading is the most common phenomenon in gas wells. The main cause of this phenomenon is that in the later period of gas well production, the formation pressure, gas velocity, and liquid-carrying capacity are reduced, and part of the formation water in the wellbore stays at the well bottom, causing fluid accumulation [2, 3]. Liquid loading creates an increased backpressure on the formation and reduces the production pressure differential, which decreases the gas rate and can even kill the gas well [4, 5]. Based on the theory of hydrodynamics, the formation water can be drawn to the surface when the current gas velocity is higher than the critical gas velocity. Therefore, it is of great significance to accurately predict the critical gas velocity in order to avoid liquid loading and increase the gas recovery rate.
Under the condition of vertical wells and laminar flow (10⁴ ≤ Re ≤ 2 × 10⁵), Turner et al. [6] assumed that droplets in gas wells are spherical and derived a continuous liquid-carrying critical gas velocity formula. In order to make calculation convenient, the drag coefficient was taken as 0.44. In order to fit their experimental data, the calculated results were adjusted upward by 20%. Coleman et al. [7, 8] applied the Turner model to their experimental data. They concluded that the Turner model is suitable for wellhead pressures less than 3.45 MPa and that the 20% upward adjustment is not necessary to calculate the critical gas velocity. Li et al. [9] assumed that droplets in gas wells are elliptical and derived a new formula for calculating the critical liquid-carrying gas velocity, whose result is only 1/3 of the result calculated by the Turner model. Later, many scholars made various changes to the Turner model and the Li model [9], but they are only suitable for vertical wells and laminar flow. Based on the previous studies [6-8], Nosseir et al. [10] considered the influence of flow regime on the calculation result. They took the drag coefficient as 0.2 and extended the application scope of the Turner model to transition flow and turbulent flow (2 × 10⁵ ≤ Re ≤ 10⁶). Based on the Turner model, Zhou and Yuan [11] believed that liquid concentration also affects the continuous liquid-carrying capacity and thereby proposed a new model to calculate critical gas velocity. Based on the Li model [9], Luan and He [12] used a dimensionless loss factor to evaluate gas energy loss, took the impact of the variation of gas lift efficiency into consideration, and thereby derived a new formula to calculate critical gas velocity. At low pressure, the calculated results of that model are better than those calculated by the Turner model and the Li model.
Belfroid et al. [13] proposed that using the Turner model to predict critical gas velocity in directional wells causes large errors and adjusted the Turner model with the Fiedler shape function to fit well data for all inclination angles. Based on the force analysis of a droplet in a directional well, Yang et al. [14] put forward a calculation method for continuous liquid-carrying critical gas velocity in directional wells under the laminar flow condition. They believed that the droplet is not impacted by the tubing wall and always rises along the central line of the tubing. Based on the force analysis of the liquid film in a directional well, Chen et al. [15] derived a new formula for evaluating liquid-carrying capacity in directional wells under the laminar flow condition.
As known from the standard experimental drag curve, the drag coefficient fluctuates heavily under the condition of transition flow and turbulent flow (2 × 10⁵ ≤ Re ≤ 10⁶). It is obviously unreasonable to take the drag coefficient as a fixed value. Engelund and Hansen [16], Clift and Gauvin [17], and Yen [18] fitted the experimental data, respectively, and derived formulas relating drag coefficient to Reynolds number in laminar flow (10⁴ ≤ Re ≤ 2 × 10⁵). Later, Barati et al. [19] used a multigene Genetic Programming (GP) procedure to obtain a hyperbolic tangent function of the two parameters, whose result is more accurate than previous ones in laminar flow.
At present, there are more and more directional gas wells under the condition of transition flow and turbulent flow, such as those in the Sulige gas field in China. A calculation model that meets the above conditions has not been established, so when the existing models are used to calculate the critical gas velocity, the computational error is large. Therefore, in directional gas wells, it is necessary to consider the impact of Reynolds number on drag coefficient under the condition of transition flow and turbulent flow, and the calculation model needs to be modified.
This paper presents a new model to calculate the critical gas velocity of directional wells in transition flow and turbulent flow. The new model analyzes the force state of the droplet in a new way. Gas field data from a Chinese western block and a northern block were employed to validate the new model.
New Model
Zhao et al. [20] did some experiments and concluded that the liquid phase in the directional well is dispersed into small droplets to be taken out of the wellbore by natural gas, which means that the droplet model is more reliable than the liquid film model in the process of liquid-carrying. We assume that the droplet is spherical and that collisions between droplets can be neglected. In the critical state, the velocities of the liquid phase and the gas phase are basically the same, so the droplet is free from the friction of the airstream. We conclude that the droplet will only be carried along the tubing out of the wellbore; otherwise the horizontal components of the forces acting on it (see Figure 1) cannot be balanced. The force balance is shown in Figure 1.
For the droplet (see Figure 1), we get the force balance along the tubing axis:

F_D = (G − F_B) cos θ + F_f. (1)

It is assumed that the sphere surface is smooth and the equivalent diameter is d, m; G, F_B, and F_D can be expressed as

G = (π/6) d³ ρ_l g,  F_B = (π/6) d³ ρ_g g, (2)

F_D = (π/8) C_d ρ_g v_c² d², (3)

where ρ_l is the liquid density, kg/m³; g is gravitational acceleration, m/s²; ρ_g is the gas density, kg/m³; C_d is the drag coefficient, dimensionless; and v_c is the critical gas velocity, m/s. F_f is the wall friction to the droplet, N, which can be expressed as

F_f = f N = f (G − F_B) sin θ, (4)

where f is the wall friction factor, dimensionless, which is related to tubing roughness and Reynolds number.
Combining (2) and (4), we can express the wall friction as

F_f = f (π/6) d³ (ρ_l − ρ_g) g sin θ. (5)

In addition, the droplet is affected by the surface tension that keeps it intact and the inertia force that causes it to rupture. When the Weber number ranges from 20 to 30, the droplet will break. Turner et al. [6] concluded that if the maximum-diameter droplet is taken out of the wellbore, liquid loading will not happen. Li et al. [1] took the Weber number as 30, and the maximum diameter of the droplet can be expressed as

d_max = 30 σ / (ρ_g v_c²), (6)

where σ is the gas-liquid interfacial tension, N; d_max is the maximum droplet diameter, m. Substituting (3), (5), and (6) into (1), we can derive the general calculation model of the continuous liquid-carrying critical gas velocity:

v_c = [40 g σ (ρ_l − ρ_g)(cos θ + f sin θ) / (C_d ρ_g²)]^(1/4). (7)

The wall friction factor is related to the wall roughness and Reynolds number, and the conventional friction factor ranges from 0.01 to 0.1. Li et al. [1] and Chen et al. [15] demonstrated that the wall friction factor has little impact on critical gas velocity. Taking f as 0.1, we can get

v_c = [40 g σ (ρ_l − ρ_g)(cos θ + 0.1 sin θ) / (C_d ρ_g²)]^(1/4). (8)

As known from the standard experimental drag curve, the drag coefficient fluctuates heavily under the condition of transition flow and turbulent flow (2 × 10⁵ ≤ Re ≤ 10⁶). It is obviously unreasonable to take the drag coefficient as a fixed value. Thus, we use SPSS to conduct nonlinear fitting of the experimental data in transition flow and turbulent flow (see Table 1 and Figure 2).
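A cubic fit of this kind can be reproduced with ordinary least squares, as sketched below; the (Re, C_d) pairs are hypothetical stand-ins for the digitized drag-curve data fitted with SPSS, so the coefficients and R² will differ from Table 1.

```python
import numpy as np

# Hypothetical (Re, Cd) pairs in 2e5 <= Re <= 1e6, shaped like the standard
# drag curve (drag crisis near Re ~ 3e5, slow recovery afterwards).
re = np.array([2.0e5, 3.0e5, 4.0e5, 5.0e5, 7.0e5, 1.0e6])
cd = np.array([0.40, 0.20, 0.09, 0.10, 0.15, 0.20])

coeffs = np.polyfit(re, cd, 3)   # returns a3, a2, a1, a0
cd_model = np.poly1d(coeffs)

ss_res = np.sum((cd - cd_model(re)) ** 2)
ss_tot = np.sum((cd - cd.mean()) ** 2)
print("coefficients (a3..a0):", coeffs)
print("R^2:", 1 - ss_res / ss_tot)
```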
As can be seen from Table 1 and Figure 2, the R-square of the cubic equation is 0.940, which shows that the fit of this model is better than the others. The test statistic is 25.902, indicating that the regression model fits well. Therefore, we take the cubic model in transition flow and turbulent flow:

C_d = a₀ + a₁ Re + a₂ Re² + a₃ Re³, (9)

with the fitted coefficients given in Table 1. It can be seen from Table 2 that when the Reynolds number ranges from 3.2 × 10⁵ to 10⁶, the result obtained by the Nosseir model is less than the actual data, and the Nosseir model causes considerable errors. Zhao et al. [21] used the Nosseir model to predict critical gas velocity in the Sulige gas field, but the calculation results have a large deviation.
In a word, the critical liquid-carrying gas velocity of directional wells in transition flow and turbulent flow can be written as

v_c = [40 g σ (ρ_l − ρ_g)(cos θ + 0.1 sin θ) / (C_d(Re) ρ_g²)]^(1/4), (10)

where C_d(Re) is given by (9). The dependence on deviation angle and Reynolds number is a correction term, which can be introduced as

K = [(cos θ + 0.1 sin θ) / C_d(Re)]^(1/4). (11)
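A direct implementation of (10) is sketched below, assuming a drag coefficient already evaluated from the cubic correlation (9); the well conditions are hypothetical and purely illustrative.

```python
import numpy as np

def critical_gas_velocity(sigma, rho_l, rho_g, theta_deg, cd, f=0.1, g=9.81):
    """Eq. (10): continuous liquid-carrying critical gas velocity, m/s.

    sigma: gas-liquid interfacial tension (N/m); rho_l, rho_g: liquid and gas
    densities (kg/m^3); theta_deg: deviation angle (degrees); cd: drag
    coefficient from the cubic Re correlation of eq. (9); f: friction factor.
    """
    theta = np.radians(theta_deg)
    incl = np.cos(theta) + f * np.sin(theta)
    return (40.0 * g * sigma * (rho_l - rho_g) * incl / (cd * rho_g**2)) ** 0.25

# Hypothetical well conditions: water droplets in gas at moderate pressure.
v_c = critical_gas_velocity(sigma=0.06, rho_l=1000.0, rho_g=30.0,
                            theta_deg=40.0, cd=0.2)
print(f"critical gas velocity: {v_c:.2f} m/s")  # ~3.2 m/s
```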
Correction Term
As can be seen from (10) and (11), the correction term is mainly dependent upon Reynolds number and deviation angle. As shown in Figure 3, the correction term is plotted as a function of deviation angle at different Reynolds numbers.

It can be seen from Figure 3 and Table 3 that, with the increase of Reynolds number, the correction term increases significantly and then decreases slowly, but it shows a rising trend in general. Furthermore, with the increase of the correction term, the critical gas velocity increases and the liquid-carrying capacity decreases. With the increase of deviation angle, the influence of Reynolds number on the correction term becomes smaller. At an identical Reynolds number, as the deviation angle increases, the correction term decreases, and this decreasing trend strengthens as the pipe deviates, which means that the critical gas velocity decreases and the liquid-carrying capacity is enhanced. This is because, as the deviation angle increases, the gravitational force in the flow direction decreases, which reduces the critical gas velocity [22]. For the convenience of site application, the curve in Figure 3 is transformed into a correction term reference table (Table 3). A noticeable point is that the table should be modified based on the actual gas field conditions.
Application of Field Data
In order to validate the applicability to directional gas wells in transition flow and turbulent flow, a large amount of field data collected from Chinese gas reservoirs, including both a northern block and a western block, is used to verify the prediction accuracy of the new model. In this paper, calculation results obtained by the new model will be compared with those from other models.
Application of Field Data in Chinese Western Block.
The western block consists of 16 directional gas wells, which include 4 unloaded wells and 12 loaded wells. Among all the directional gas wells in the block, the Reynolds number ranges from 2.3 × 10⁵ to 7.7 × 10⁵ and the deviation angle ranges from 24° to 50°. The detailed field data are listed in Table 4. The gas rate is converted to the superficial velocity used for comparison with the calculated critical gas velocity, as shown in column 7. Columns 9 and 11 represent the critical gas velocity calculated by the Belfroid model and by the new model, respectively. If the critical gas velocity is higher than the current gas velocity, the well is considered to be loaded. On the contrary, if the critical gas velocity is lower than the current gas velocity, water cannot accumulate in the bottom hole and the well is considered to be unloaded. Columns 10 and 12 represent the predicted status from the two models. Column 13 represents the test status, which is the actual status of the 16 directional gas wells in the Chinese western block. If the predicted status is consistent with the test status, then the prediction is correct. On the other hand, if the predicted status is inconsistent with the test status, the prediction is incorrect.
As can be seen in Table 4, when the Belfroid model is used to predict the liquid loading status, 6 wells are predicted incorrectly, including well number 3, well number 4, well number 8, well number 9, well number 11, and well number 14, giving 62.5% accuracy. The main reason is the failure to consider the influence of Reynolds number on drag coefficient in transition flow and turbulent flow. When the new model is used to predict the liquid loading status, only 2 wells are predicted incorrectly, including well number 3 and well number 4, giving up to 87.5% accuracy, which is in good agreement with the actual state. The prediction results show great improvement over the Belfroid model. Therefore, in the aspect of predicting liquid loading, the new model is better than the Belfroid model.
In this paper, the new model and several common models are used to calculate the critical liquid-carrying gas velocity. The results are shown in Table 6.
As can be seen in Tables 5 and 6, under the condition of transition flow and turbulent flow, the error of the results calculated by the new model is less than 10%, which is more accurate than the common methods. The Turner model, as a whole, overestimates the critical gas velocity for directional wells in transition flow and turbulent flow, possibly because it overlooks the impact of deviation angle and flow regime on critical gas velocity. The prediction results of the Belfroid model are better than those of the Turner model, but the prediction error is still large, which may be because the model fails to consider the impact of flow regime on critical gas velocity. In addition, the prediction error of the Chen model [15] is also large. This is mainly because that model is obtained by force analysis of the liquid film, but the droplet model is more reliable than the liquid film model in the process of liquid-carrying [20].

(2) A correction term formula is introduced. The impact of deviation angle and Reynolds number on the correction term is discussed, and a correction term reference table is given for the convenience of site application.

(3) A large quantity of field data collected from Chinese gas reservoirs, including both a northern block and a western block, is used to verify the prediction accuracy of the new model. It can be consistently seen that the new model is superior to the several common models in predicting liquid loading of directional wells in transition flow and turbulent flow.
Figure 1: Force balance of droplet in directional well, where θ is the deviation angle, degrees, with θ = 90° representing the horizontal well; F_D is the drag force of the airstream on the droplet, N; G is the droplet gravity, N; F_f is the wall friction to the droplet, N; N is the bracing force, N; F_B is the buoyancy force, N.
Table 1: Nonlinear fitting of Reynolds number and drag coefficient in transition flow and turbulent flow.
Figure 3: Relationship between deviation angle and correction coefficient under different Reynolds numbers.
(1) The balance of forces acting on a droplet in a directional gas well is analyzed. By nonlinear fitting of the experimental data (2 × 10⁵ ≤ Re ≤ 10⁶), we obtain a function relating Reynolds number and drag coefficient. Eventually we derive a new model to predict the critical gas velocity of directional wells in transition flow and turbulent flow.
Table 4: Field data in western block and liquid loading prediction results.
Table 5: Field data of 4 wells in northern block.

Application of Field Data in Chinese Northern Block. There are 4 wells in the Chinese northern block, including A1, A2, B1, and B2. A1 and A2 have been shut in because of severe bottom-hole liquid accumulation. On the other hand, B1 and B2 are still producing. The detailed field data are listed in Table 5.

Nomenclature: θ: Deviation angle, degrees; F_D: Drag force of airstream on droplet, N; G: Droplet gravity, N; F_f: Wall friction to the droplet, N; N: Bracing force, N; F_B: Buoyancy force, N; d: Equivalent diameter, m; d_max: Maximum droplet diameter, m; ρ_l: Liquid density, kg/m³; g: Gravity acceleration, m/s²; ρ_g: Gas density, kg/m³; C_d: Drag coefficient, dimensionless; v_c: Critical gas velocity, m/s; f: Wall friction factor, dimensionless; σ: Gas-liquid interfacial tension, N; Re: Reynolds number, dimensionless; K: Correction term, dimensionless.
"year": 2017,
"sha1": "7bde2f0cac4a0c46adb62ad32d07f2ed8da61185",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/jchem/2017/4969765.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "7bde2f0cac4a0c46adb62ad32d07f2ed8da61185",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
250572078 | pes2o/s2orc | v3-fos-license | Immune Checkpoint Inhibitor-Mediated Cancer Theranostics with Radiolabeled Anti-Granzyme B Peptide
Although immune checkpoint inhibitors (ICI) have revolutionized cancer management, patient response can be heterogeneous, and the development of ICI resistance is increasingly reported. Novel treatment strategies are necessary not only to expand the use of ICI to previously unresponsive tumor types but also to overcome resistance. Targeted radionuclide therapy may synergize well with ICIs since it can promote a pro-inflammatory tumor microenvironment. We investigated the use of a granzyme B targeted peptide (GZP) as a cancer theranostic agent, radiolabeled with 68Ga (68Ga-GZP) as a PET imaging agent and radiolabeled with 90Y (90Y-GZP) as a targeted radionuclide therapy agent for combinational therapy with ICI in murine models of colon cancer. Our results demonstrate that GZP increasingly accumulates in tumor tissue after ICI and that the combination of ICI with 90Y-GZP promotes a dose-dependent response, achieving curative response in some settings and increased overall survival.
Introduction
Immune checkpoint inhibitors (ICI) have revolutionized cancer treatment. While conventional therapies rely on cytotoxic effects directly affecting tumor cells, ICIs work primarily by restoring antitumor immune responses through suppression of co-inhibitory T-cell signaling [1]. Even though the role of immune checkpoints is constantly evolving, ICI efficacy depends on the activation and expansion of CD8 T cells present in the tumor microenvironment (TME). For that, tumor antigens are recognized by T cells through antigen-presenting cells (APCs) via major histocompatibility complex (MHC) interaction. Activation is successful with an appropriate cytokine environment and a secondary signal (via the CD28 receptor on T cells). The newly differentiated tumor-specific effector T cells proliferate and migrate to the TME, where they release granzyme B and perforin to promote tumor cell death [2]. Effector memory T cells are also generated and can allow life-long immunologic memory [3,4]. Several co-receptors can be present at different molecular checkpoints. For example, CTLA-4 competes with CD28 and is involved in regulatory T cell (Treg) suppressive function, while PD-L1 signaling promotes T-cell exhaustion and immune suppression [1,5]. Anti-CTLA-4 and anti-PD-1/PD-L1 monoclonal antibody therapies increase cytotoxic T-cell activation and have been approved and successfully employed for a diverse range of cancer types, including microsatellite instability-high colon adenocarcinoma [6-8]. Although ICIs have generated impressive results, patient response is variable and heterogeneous even among the same cancer type, while some patients do not respond at all. In addition, the development of ICI resistance is increasingly reported [9,10]. ICI failure and acquired resistance mechanisms can be due to the tumor, the TME, or patient immune system characteristics. In fact, more than one mechanism of resistance may be present within one tumor, and patients with the same tumor type can have different failure mechanisms [11]. In this context, novel combination treatments able to expand ICI use or overcome ICI resistance are needed. Targeted radionuclide therapy (TRT) is a radiotherapy modality consisting of systemic delivery of radioactive atoms to induce DNA damage in tumor cells, with localized accumulation through targeting ligands. Consequently, TRT can deliver its tumoricidal radiation irrespective of tumor location and burden in a selective manner that minimizes collateral tissue damage [12]. TRT agents typically target specific receptors and metabolic transporters that are differentially overexpressed in tumor cells. Our group has previously developed a peptide-based radiotracer (68Ga-GZP) that can selectively bind to granzyme B released by activated CD8 T cells in response to ICI [13,14]. We have shown that granzyme B PET imaging with 68Ga-GZP can be used as a biomarker for predicting tumoral immunotherapy response [14] and can non-invasively identify immune-related adverse events [15]. Since GZP increasingly accumulates in the tumor tissue after immunotherapy, we investigated the use of GZP as a cancer theranostic agent, radiolabeled with 68Ga (68Ga-GZP) as a PET imaging agent and radiolabeled with 90Y (90Y-GZP) as a TRT agent for combinational therapy with ICI in murine models of colon cancer.
Syngeneic Colon Cancer Animal Model
Murine colon adenocarcinoma MC38 cells derived from C57BL6 mice (Kerafast, Boston, MA, USA) and murine colon carcinoma CT26 cells derived from BALB/c mice (ATCC) were cultured in DMEM or RPMI medium (respectively) supplemented with 10% FBS at 37 °C and 5% CO₂.
All experimental procedures and animal studies were performed under the approval of the Institutional Animal Care and Use Committee (IACUC). Syngeneic allograft tumors were implanted subcutaneously in the upper right flank of mice (1 × 10⁶ cells in a 1:1 ratio in Matrigel).
Ga-68 and Y-90 Radiolabeling of GZP
Radiolabeling of NOTA-GZP was performed as previously described [14]. In brief, 100 µg (approximately 1 × 10⁻⁷ mol) of NOTA-GZP (in 100 µL of PBS) was mixed with 370 MBq of 68Ga in 2 M HEPES buffer (pH 3.5-4.0) for 10 min at room temperature. The reaction product was purified with a C18 Sep-Pak solid-phase extraction cartridge, eluted with 200 µL of 70% ethanol and diluted with saline to a final concentration of less than 10% ethanol prior to administration. Radiolabeling yield was calculated through instant thin-layer chromatography (iTLC) using two solvent systems as reported elsewhere [16,17]. 90YCl₃ was purchased from Eckert and Ziegler (Germany); 370 MBq of activity in 500 µL of sodium acetate buffer (pH 5.5) was mixed with 100 µg of DOTA-GZP (in 100 µL of PBS) for 1 h at 90 °C. The reaction was purified as described above, and the radiolabeling yield was calculated through iTLC using 50 mM EDTA as solvent.
PET/MR Imaging
Mice were intravenously injected with 7.4-11.11 MBq of 68Ga-GZP. PET/MR images were acquired 1 h post injection (p.i.) on anesthetized mice on a preclinical multimodal 4.7T MR/PET scanner (Bruker, Billerica, MA, USA). Static PET images were acquired for 15 min, followed by acquisition of fat-saturated T1- and T2-weighted images. Images were processed using AMIDE 2 processing software. Uptake values, presented as percent injected activity per cubic centimeter (%IA/cc) for each organ, were calculated in a 3D region of interest manually drawn using MR images. Imaging was carried out in MC38 and CT26 tumor-bearing mice three days after the last treatment with either PBS or a regimen with immune checkpoint inhibitors (ICI). ICI was given intraperitoneally as three doses three days apart of anti-PD-1 (250 µg, clone RMP1-14) + anti-CTLA-4 (100 µg, clone 3H3) (Bioxcell, Lebanon, NH, USA).
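The ROI quantification reduces to dividing ROI activity by the decay-corrected injected activity and the ROI volume. The sketch below illustrates this, assuming the ROI activity is not already decay-corrected by the scanner software; all numbers are hypothetical.

```python
def pct_ia_per_cc(roi_activity_mbq, roi_volume_cc, injected_mbq,
                  t_min, half_life_min=67.7):  # 68Ga half-life ~67.7 min
    """%IA/cc for one ROI at time t_min after injection."""
    dose_at_t = injected_mbq * 0.5 ** (t_min / half_life_min)
    return 100.0 * roi_activity_mbq / dose_at_t / roi_volume_cc

# Hypothetical ROI measurements at 1 h p.i. for a 9.25 MBq injection.
tumor = pct_ia_per_cc(roi_activity_mbq=0.35, roi_volume_cc=0.30,
                      injected_mbq=9.25, t_min=60)
blood = pct_ia_per_cc(roi_activity_mbq=0.05, roi_volume_cc=0.10,
                      injected_mbq=9.25, t_min=60)
print(f"tumor: {tumor:.1f} %IA/cc, tumor-to-blood: {tumor / blood:.1f}")
```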
Therapeutic Studies
A schematic representation of the experimental timeline can be found in Figure S1. Mice bearing either MC38 or CT26 subcutaneous tumors were employed for therapy. Groups of 5 female mice were randomized and administered either PBS (control; intravenous), three doses of the immune checkpoint inhibitors anti-PD-1 (250 µg, clone RMP1-14; Bioxcell) and anti-CTLA-4 (100 µg, clone 9D9; Bioxcell) three days apart (ICI, intraperitoneal), ICI followed by intravenous injection of 90Y-GZP at two injected activities, 2.22 MBq (Low Dose) or 22.2 MBq (High Dose), or ICI followed by intravenous injection of free 90Y (22.2 MBq). Tumor volume was measured by a blinded researcher with calipers 2-3 times per week. Humane endpoints were weight loss of more than 20% of initial body weight, tumor growth of more than 500% of initial size (at the start of treatment), and overall health decline. No animals, experimental units, or data points were excluded from the analysis.
Toxicity Evaluation
Animal weight was measured twice per week, and general well-being was assessed daily by veterinarian staff. At the end of the study, heart, lungs, kidney, spleen, liver, and tumor were excised from mice of all groups and submitted to the Specialized Histopathology Services of the MGH Pathology Core for processing, sectioning and H&E staining. For immunohistochemical studies, tumors were collected in 4% paraformaldehyde for 24 h, then placed in ethanol and embedded in paraffin for sectioning. Hematoxylin and Eosin (H&E), CD3, CD4 and CD8 staining were carried out through standard procedures by the Specialized Histopathology Services of the MGH Pathology Core. Brightfield images at 10× magnification were taken from at least 10 fields of view on a Nikon Eclipse Ti microscope. Images were analyzed on ImageJ software (version 1.8.0_172, Bethesda, MD, USA) to determine staining intensity.
Immunofluorescence
Tumor tissue was also stained against granzyme B after deparaffinization and rehydration. Antigen retrieval was carried out using standard heat-based antigen retrieval techniques [18]. Immunofluorescence staining was carried out according to previously published procedures [15]. After blocking with 5% goat serum, rabbit anti-mouse granzyme B primary antibody (ab255598; Abcam, Cambridge, UK) was incubated overnight at 4 °C. On the following day, washing was carried out, followed by incubation with AlexaFluor 647-conjugated goat anti-rabbit IgG secondary antibody (A32733; ThermoFisher, Waltham, MA, USA) at room temperature in a moist dark chamber for 1 h. DAPI (4′,6-diamidino-2-phenylindole) was used for staining cell nuclei. Semi-quantitative data of immunofluorescence were retrieved by measuring total fluorescence in the red channel (granzyme B) divided by area (integrated density). All images were acquired using a Biotek Cytation 5 Cell Imaging Multi-Mode Reader and analyzed through Biotek Gen5 software (Agilent, v3.11, Santa Clara, CA, USA).
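The integrated-density readout described above is essentially the mean red-channel signal over the tissue area, as the sketch below illustrates; the file name and background threshold are hypothetical placeholders rather than the actual analysis settings.

```python
import numpy as np
from skimage import io

# Semi-quantitative granzyme B readout: total red-channel fluorescence
# divided by the analyzed area. File name and threshold are assumptions.
img = io.imread("tumor_gzb_647.tif").astype(float)  # single-channel image
tissue_mask = img > 10.0          # crude background threshold (assumed)
integrated_density = img[tissue_mask].sum() / tissue_mask.sum()
print(f"integrated density (a.u./pixel): {integrated_density:.2f}")
```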
Statistical Analysis
A sample size of n = 3 was selected for PET/MR imaging. A minimum sample size of n = 5 was used for the other experiments. A two-tailed Student's t-test was used for statistical analysis. A p value < 0.05 was considered statistically significant. Kaplan-Meier curves and the log-rank test were used for overall survival analysis. All statistical analyses were performed using GraphPad Prism 7 (San Diego, CA, USA). Quantitative data are expressed as mean ± standard deviation.
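The two tests described above can be reproduced with standard scientific Python packages, as sketched below with hypothetical tumor volumes and survival times; the lifelines package is one common choice for the log-rank test and is an assumption here, not the tool actually used.

```python
import numpy as np
from scipy import stats
from lifelines.statistics import logrank_test

# Two-tailed t-test on end-of-study tumor volumes (hypothetical, mm^3).
ici_only = np.array([310.0, 280.0, 405.0, 350.0, 390.0])
ici_high_dose = np.array([0.0, 15.0, 0.0, 8.0, 0.0])
t_stat, p_val = stats.ttest_ind(ici_only, ici_high_dose)
print(f"t-test p = {p_val:.4f}")

# Log-rank test on survival times (days) with event indicators (1 = death).
days_a, events_a = [18, 20, 21, 19, 22], [1, 1, 1, 1, 1]   # PBS group
days_b, events_b = [60, 60, 55, 60, 60], [0, 0, 1, 0, 0]   # ICI + high dose
result = logrank_test(days_a, days_b,
                      event_observed_A=events_a, event_observed_B=events_b)
print(f"log-rank p = {result.p_value:.4f}")
```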
PET/MR Imaging
Longitudinal PET/MR imaging was acquired to assess the overall biodistribution of 68 Ga-GZP and the differential tumor uptake between animals that received PBS and those that received the immune checkpoint inhibitor regimen in two murine models of colon cancer. Figure 1A shows maximum intensity projection (MIP) PET images of the in vivo distribution at 1 h after intravenous injection of the radiotracer in MC38 and CT26 tumor-bearing mice at day 12 after tumor implantation. Either PBS or a combination of anti-PD-1 + anti-CTLA-4 was given three times, three days apart (days 3, 6 and 9), and animals were imaged three days after the last treatment. PET imaging revealed a high signal in the kidneys and bladder for all groups, reflecting renal excretion of the tracer as observed previously [13]. Aside from the kidneys, the liver was the organ with the highest uptake. Higher tumor uptake in animals that received ICI was observed in both tumor models when compared to the PBS-injected group (Figure S2A). The quantitative region of interest analysis of the PET images (Figure 1B and Figure S2A,B) revealed significantly higher tumor uptake in the ICI-injected group when compared to the PBS group in both MC38 tumor-bearing mice (12.8 ± 2.25 vs. 24.0 ± 9.5%IA/cc, p = 0.04) and CT26 tumor-bearing mice (3.7 ± 1.1 vs. 7.5 ± 1.7%IA/cc, p = 0.03). Of note, blood pool uptake was not statistically different among the groups investigated. Tumor-to-blood ratios were significantly (p < 0.05) higher for mice that received immunotherapy than for PBS-injected mice, with values of 0.7 ± 0.3 (PBS) vs. 4.1 ± 0.8 (ICI) (p = 0.0028) and 2.8 ± 0.7 (PBS) vs. 4.8 ± 0.9 (ICI) (p = 0.04) for CT26 and MC38, respectively (Figure S2C). Tumor-to-muscle ratios (Figure S2D) followed the same trend, reaching 4.1 ± 0.8 and 4.8 ± 2.3 for CT26 and MC38 animals that received ICI, respectively.
Therapeutic Studies
A schematic representation of the therapeutic study groups, dosage and administration regimen can be found in Table S1. For the therapeutic studies, animals with MC38 or CT26 tumors were injected with a high (22.2 MBq) or a low (2.22 MBq) dose of 90 Y-GZP after ICI. The ICI regimen consisted of intraperitoneal injections of anti-PD-1 and anti-CTLA-4 (three doses, three days apart, starting three days after tumor implantation). Growth curves are presented in Figure 2. Importantly, both CT26 and MC38 tumor-bearing animals showed complete tumor responses after treatment with ICI followed by a high dose of 90 Y-GZP. Animals that received just PBS had exponentially growing tumors throughout the experiment and only survived for around 20 days. In both tumor models, animals that received ICI, ICI + GZP or free 90 Y showed comparable tumor growth that was significantly greater (p < 0.05) than in the high-dose 90 Y-GZP group. Of note, the higher dose of 90 Y-GZP was able to achieve a full response and curative treatment in both sets of tumor-bearing mice. In CT26 tumor-bearing mice, a low dose of 90 Y-GZP was not sufficient to elicit a significant tumor growth delay, and tumors grew similarly to those of the control groups. In contrast, in MC38 tumor-bearing mice, a low dose of 90 Y-GZP resulted in significantly smaller (p < 0.05) tumor volumes when compared to the PBS group but not when compared to groups that received ICI.
More importantly, increased overall survival was found for animals that received a high dose of 90 Y-GZP in both tumor models investigated (Figure 3A). Median survival remained undefined (>35 days) in CT26 tumor-bearing mice that received the higher dose of 90 Y-GZP, compared to 20, 21, 21, 21 and 23 days for the PBS, ICI + free 90 Y (high dose), ICI, ICI + GZP and ICI + 90 Y-GZP (low dose) groups, respectively. Similarly, in MC38 tumor-bearing mice, median survival was undefined (>35 days) for the 90 Y-GZP high-dose group, a significant increase (p < 0.05) compared to the 17, 26, 20, 23 and 28 days found for the PBS, ICI + free 90 Y (high dose), ICI, ICI + GZP and ICI + 90 Y-GZP (low dose) groups, respectively. In addition, in the MC38 tumor model, animals injected with ICI + 90 Y-GZP (low dose) also survived longer, with a median survival of 28 days, when compared to the other groups. Since MC38 tumors usually respond well to immunotherapy alone and the tumors were slightly smaller in animals that received ICI + 90 Y-GZP (high dose) at day 12 when compared to control groups, we performed additional experiments comparing ICI alone and 90 Y-GZP (high dose) when tumors were slightly bigger and statistically identical between those groups (Figure S8). Treatment also resulted in significantly smaller tumor volumes for animals injected with ICI + 90 Y-GZP (high dose) when compared to the ICI alone group. After tumor tissue collection (Figure S3), an increased necrotic area was observed in tumors of animals injected with a high dose of 90 Y-GZP when compared to the other groups investigated.
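The "undefined (>35 days)" medians above arise because more than half of the high-dose animals were still alive, i.e., censored, at study end; a Kaplan-Meier fit then never crosses 50% survival. A small sketch with lifelines and invented data:

```python
from lifelines import KaplanMeierFitter

# Hypothetical high-dose group: four animals alive (censored) at day 35, one death.
days = [35, 35, 35, 35, 22]
events = [0, 0, 0, 0, 1]  # 0 = alive at last follow-up, 1 = death/endpoint

kmf = KaplanMeierFitter()
kmf.fit(days, event_observed=events)
# lifelines returns inf when the survival curve never drops below 50%,
# matching the "undefined (>35 days)" median reported above.
print("median survival:", kmf.median_survival_time_)
```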
Toxicity Evaluation
Overall systemic toxicity was investigated through differences in body weight. Throughout the experiments, no major changes in body weight were found for any of the groups investigated ( Figure 3B) and none of the groups reached the humane endpoint of more than 20% body weight decrease.
To further investigate potential toxicities, at the end of the study, major organs were excised and H&E stained. H&E staining of tissue slides of heart, liver, lungs, spleen and kidney ( Figure 4) demonstrated no morphological alterations in any of the organs investigated in any of the groups.
Histopathological Analysis of Tumor
CD3, CD4 and CD8 IHC staining was performed in tumor tissues collected at different time points after treatment (Figure S1). Representative images of CD3, CD4 and CD8 staining of tumor tissues from animals that received ICI, ICI + low-dose 90 Y-GZP and ICI + high-dose 90 Y-GZP can be seen in Figures S4 and S5. Heterogeneous staining was found within the groups, indicating that some animals responded to ICI therapy and some did not. Similarly, neither a low nor a high dose of 90 Y-GZP depleted the immune cell infiltrate within the tumors, and in those animals that responded to ICI, 90 Y-GZP increased the expression of CD4 and CD8. Comparable levels of these markers were found in all three groups, indicating that targeted radionuclide therapy with 90 Y-GZP did not deplete immune cell levels within the tumors on the days analyzed. Relatively high levels of CD4 and CD8 were found in animals that received a high dose of 90 Y-GZP compared to the groups that received the lower dose. Immunofluorescence staining of tumor tissue (Figures 5 and S7) also demonstrated that very high levels of GZB were present in animals that received ICI, indicating a good response to immunotherapy and good potential for our targeting agent to deliver a toxic radiation dose to the tissue, as well as high levels of GZB after targeted radiotherapy with 90 Y-GZP. These results indicate that CD8 T cells remain active or are additionally recruited after targeted radionuclide therapy. Western blot studies were carried out to validate the IHC findings, and the results (Figure S6) demonstrate that, in MC38 tumor-bearing mice, no statistically significant difference was found in CD8 and GZB levels between animals that received 90 Y-GZP and animals that received ICI only. In contrast, in CT26 tumor-bearing mice, even though CD8 levels did not differ significantly between groups, slightly lower levels of GZB were found in the animals that received 90 Y-GZP after ICI.
Discussion
Metastatic colorectal cancer (mCRC) often presents with multiple metastatic sites that are unresectable [19]. In colon cancer, pembrolizumab and nivolumab (anti-PD-1 antibodies) have been FDA approved as monotherapy or combined with ipilimumab (an anti-CTLA-4 antibody) as treatment for patients with microsatellite instability-high (MSI-H)/DNA mismatch repair-deficient (dMMR) mCRC. Unfortunately, the clinical benefit of ICI is somewhat limited in mCRC, as most (>95%) mCRC patients are microsatellite stable (MSS)/DNA mismatch repair-proficient (pMMR). Several mechanisms of resistance to ICI therapies have been proposed in CRC, but the factors that drive MSS/pMMR patients to be resistant to ICI are still unknown [20]. In addition, even among MSI/dMMR patients, heterogeneity in response to ICI has been demonstrated, either through a primary resistance mechanism or because of a misdiagnosis of their MSI or dMMR status. Therefore, to expand and improve the clinical outcomes of mCRC patients that receive ICI, we need to either (i) develop a better method of identifying patients that will most likely respond to ICI or (ii) combine ICI with other forms of therapy for a synergistic therapeutic response.
Chemotherapy and radiotherapy (RT) are known to damage tumor cells, which can lead to dendritic cell recognition of tumor antigens and CD8+ T cell activation [21,22]. Hence, combining ICIs with chemotherapy or RT could potentially be synergistic and overcome ICI resistance in patients with mCRC. RT for mCRC is mostly performed through external beam radiation therapy (EBRT), which is limited in the number and size of tumor lesions it can treat, as treatments with large radiation fields are associated with significant toxicity [23]. Unlike EBRT, which often cannot target the entire metastatic burden, ligand-directed molecularly targeted radionuclide therapy (TRT) is a radiation therapy modality consisting of systemic delivery of radioactive atoms to induce DNA damage in tumor cells. Consequently, TRT can deliver its tumoricidal radiation irrespective of tumor location and burden in a selective manner that minimizes normal tissue damage. TRT agents typically target specific receptors and metabolic transporters overexpressed in tumor cells [24]. Therefore, this systemic mode of radiation delivery is optimally suited for targeting metastatic lesions. Several targets have been proposed for mCRC [25], but their success has been hampered by a lack of specificity and sensitivity and by tumor heterogeneity [26][27][28]. We overcome the limitation that many mCRC tumor cells lack specific targets addressable by TRT by utilizing a target that is expected to be present in the tumor microenvironment after ICI. Our group has previously demonstrated that an anti-granzyme B peptide (GZP) specifically binds to granzyme B, a marker of CD8+ T cell activation. ICI ultimately works by promoting the activation of CD8+ T cells to produce and release granzyme B, resulting in tumor cell toxicity.
Herein, we aimed to use radiolabeled GZP as a novel theranostic agent for CRC in combination with ICI therapy. Our results reveal that 68 Ga-GZP successfully accumulated in two tumor models of CRC after ICI. We further show that tumor uptake was statistically significantly higher in animals after ICI treatment when compared to animals without ICI treatment. We also observed intrinsic differences between CT26 and MC38 tumor uptake, with the latter having significantly higher uptake. These data are in agreement with literature findings that MC38 tumors are usually more immunogenic and respond better to ICI [29]. We hypothesized that activated T cells could not only be used for patient selection, but could also be harnessed for selective targeted radiotherapy. For therapeutic purposes, tissue depth penetration and tumor dosimetry are favorable for β− emitters, with isotopes such as 90 Y extensively used in clinical settings [12]. Since we confirmed higher tumor uptake of GZP after ICI therapy, we administered two different doses of 90 Y-GZP after ICI to investigate whether TRT could contribute to antitumor effects and act synergistically with ICI. Indeed, we found that in both tumor models, a high dose of 90 Y-GZP promoted total tumor regression and increased survival, a major potential advantage when treating such tumors. Interestingly, a low dose of 90 Y-GZP contributed to a slightly (not statistically significant) worse antitumor response than ICI alone in CT26 tumors. Radiotherapy can promote both stimulatory and immunosuppressive responses in immune cells [30,31]. Dose, fractionation and tumor type likely influence the pro- or anti-inflammatory traits of radiotherapy [32][33][34]. However, a low dose of 90 Y-GZP actually contributed positively to tumor regression in MC38 tumors, most likely because MC38 is more immunogenic and had significantly higher tumor uptake, so that MC38 tumors likely received a much higher radiation dose than CT26 tumors, even though the same activity was injected for both. A dosimetry study is indicated to identify the actual dose received by each tumor model. Notably, this is one of the biggest advantages of a theranostic approach: by screening tumor uptake before TRT, it is possible to calculate in a personalized manner how much radioactivity should be injected to achieve a dose capable of tumor eradication, especially when synergistic with ICI. It has also been shown that the type of dosage (fractionated or single dose) as well as the timing between radiotherapy administration and ICI can influence the antitumor response, since it influences lymphoid and myeloid responses and CD8+/Treg ratios along with modulation of PD-L1 and other suppression/activation markers. In future studies, we plan to better understand the effects of dose timing and fractionation to optimize outcomes. We will also use metastatic models to see if this approach can treat multiple tumor sites and diminish overall metastatic burden. We also understand that 68 Ga may not be the perfect surrogate for 90 Y. However, since imaging with 90 Y is not feasible and 68 Ga is a widely available PET isotope, 68 Ga is routinely used as a theranostic "twin" to therapeutic isotopes such as 177 Lu and 90 Y in both preclinical and clinical settings [35][36][37][38].
For example, a NETSPOT ( 68 Ga-DOTATATE) scan is routinely used in the clinic as a theranostic "twin" to identify neuroendocrine tumor patients who will benefit from treatment with Lutathera ( 177 Lu-DOTATATE) [39], and Locametz and Illuccix ( 68 Ga-PSMA-11) are FDA-approved complementary diagnostic imaging agents for radioligand therapy with Pluvicto ( 177 Lu-PSMA-617) [40] in metastatic castration-resistant prostate cancer. Several studies and clinical trials also use 68 Ga-PSMA agents as companion diagnostics for 225 Ac-based PSMA radionuclide therapy (clinical trials NCT04597411 and NCT04886986). Further studies with 86 Y-GZP may be needed to confirm similar pharmacokinetics and to support dosimetry calculations for 90 Y-GZP.
We evaluated the toxicity of our combination treatment through morphological changes to major organs and found no signs of toxicity in any of the groups or organs, including the liver, which had the highest off-target radioprobe uptake. However, one CT26 tumor-bearing mouse and one MC38 tumor-bearing mouse administered ICI + high-dose 90 Y-GZP died of unknown causes during our study. Other animals injected with 90 Y-GZP (low dose) also reached humane endpoints due to overall health decline and had to be euthanized. Further toxicity studies, such as a complete blood count and a comprehensive metabolic panel, are needed to elucidate potential toxicities.
It has been demonstrated in preclinical settings that radiation can also have immunosuppressive effects, and a few studies have found diminished responses to ICI after radiotherapy [41]. Even though a thorough investigation of the changes in the tumor immune microenvironment after TRT + ICI is beyond the scope of this study, we stained tumor tissues of the different treatment groups for basic immune cell phenotypes. We demonstrate that the immune cell infiltrate was not depleted (or was replenished to a level above that at pre-radiotherapy treatment) after TRT. We also noted that high granzyme B levels were still present in tumor tissues after 90 Y-GZP as measured by immunofluorescence in both tumor models, but lower levels of granzyme B were found in CT26 tumor-bearing mice as measured by Western blot (WB). It is known that activated T cells transiently release GZB into the pericellular space, where it can be internalized by a target cell, inhibited, or diffuse away from the tumor [42,43]. Active granzyme B in the extracellular tumor microenvironment is dynamic, and variation in GZB levels might be due to timing rather than downregulation of GZB expression. Further studies are needed to understand the immunomodulatory effects of TRT in combination with ICI at different time points and with different administration regimens (fractionated doses, for example).
Conclusions
PET/MR imaging allowed clear visualization of activated CD8+ T cells by imaging granzyme B expression with 68 Ga-GZP after immune checkpoint inhibitor therapy. In our animal models of colorectal cancer, ICI alone was not sufficient to promote a complete response in either tumor model. However, when ICI was combined with a relatively high dose of targeted radionuclide therapy ( 90 Y-GZP), animals in both tumor models (MC38 and CT26) had complete and durable tumor regression and increased overall survival. Treatments were also well tolerated, as no morphological changes were found in the major organs of the animals. In summary, this study demonstrates that GZP can be used as a theranostic agent. When labeled with 68 Ga, it can select responders to immune checkpoint inhibitors and, when labeled with 90 Y, it can improve the therapeutic efficacy of ICI.
"year": 2022,
"sha1": "5f1c5dd5ce10a94f5a5e189be9d5f3d0d703cfed",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "bba015a5fdd8686b39d067c18fc97ce166bceb2b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Aster saponin A2 inhibits osteoclastogenesis through mitogen-activated protein kinase-c-Fos-NFATc1 signaling pathway
Background: In lipopolysaccharide-induced RAW264.7 cells, Aster tataricus (AT) inhibits the nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) and MAPK pathways, which are critical pathways of osteoclast development and bone resorption. Objectives: This study examined how aster saponin A2 (AS-A2) isolated from AT affects the processes and function of osteoclastogenesis induced by receptor activator of nuclear factor kappa-B ligand (RANKL) in RAW264.7 cells and bone marrow macrophages (BMMs). Methods: Cell viability assays, tartrate-resistant acid phosphatase (TRAP) staining, pit formation assays, polymerase chain reaction, and western blotting were carried out to determine the effects of AS-A2 on osteoclastogenesis. Results: In RAW264.7 cells and BMMs, AS-A2 decreased RANKL-initiated osteoclast differentiation in a concentration-dependent manner. In AS-A2-treated cells, the phosphorylation of ERK1/2, JNK, and p38 was considerably reduced compared to control cells. In RAW264.7 cells, AS-A2 suppressed the RANKL-induced activation of osteoclast-related genes. During osteoclast differentiation, AS-A2 suppressed the transcriptional and translational expression of NFATc1 and c-Fos. AS-A2 inhibited osteoclast development, reducing the size of the bone resorption pit area. Conclusion: AS-A2 isolated from AT appears to be a viable therapeutic option for osteolytic illnesses, such as osteoporosis, Paget's disease, and osteogenesis imperfecta.
INTRODUCTION
Osteoporosis is caused by a discrepancy between bone resorption and bone formation. Overactive osteoclasts cause uncontrolled bone resorption, leading to metabolic bone illnesses, such as osteoporosis and rheumatoid arthritis [1]. Receptor activator of nuclear factor kappa-B ligand (RANKL) and macrophage colony-stimulating factor (M-CSF) drive monocyte-macrophages to develop into osteoclasts [2][3][4]. After pairing of RANKL to its receptor RANK, downstream signaling pathways that drive osteoclast differentiation are activated.

For the cell viability assay, cells were seeded in 96-well plates. The cells in each well were then treated with varying doses of AS-A2 for 24 h. The cells were incubated for 2 h after adding the CCK-8 solution. A microplate reader (Power wave HT; BioTek, USA) measured the optical density at 540 nm. The outcomes are reported as the mean ± SD from three wells, and cell viability is reported as a percentage of the control.
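The percent-of-control calculation described above can be sketched as follows. The OD values are invented, and the optional blank subtraction is an assumption (a blank term is common in CCK-8 workflows but not stated explicitly here).

```python
import numpy as np

def viability_percent(od_treated: np.ndarray, od_control: np.ndarray,
                      od_blank: float = 0.0) -> np.ndarray:
    """Blank-subtracted OD540 of treated wells as a percentage of the control mean."""
    control_mean = np.mean(np.asarray(od_control) - od_blank)
    return 100.0 * (np.asarray(od_treated) - od_blank) / control_mean

# Hypothetical triplicate readings at one AS-A2 concentration.
control = np.array([0.82, 0.85, 0.80])
treated = np.array([0.79, 0.81, 0.78])
vals = viability_percent(treated, control)
print(f"{vals.mean():.1f} ± {vals.std(ddof=1):.1f} % of control")  # mean ± SD, as reported
```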
Osteoclast formation and TRAP staining assay
For osteoclast formation, 1 × 10³ RAW264.7 cells/well were seeded in 96-well plates in the presence of 50 ng/mL RANKL with AS-A2 at concentrations of 0.05, 0.5, and 5 µM. The cell culture medium was changed on alternate days for six days. The cells were fixed with 4% paraformaldehyde for 10 min and then stained using a TRAP staining kit. TRAP-positive multinucleated cells (TRAP + MNCs) were counted as osteoclast-like cells using an optical microscope (IX71; Olympus, Japan).
Preparation of mouse BMMs and bone resorption pit assays
Osteoclast progenitor BMMs were extracted from the tibia and femur of C57BL/6 mice using α-MEM [14]. Briefly, a threefold volume of Gey's solution was added for at least 15 min to remove the blood cells. The BMMs were then incubated in α-MEM containing 10% FBS for one day. After proliferation, suspended cells were collected and kept for three days in α-MEM containing 10% FBS and 30 ng/mL M-CSF. BMMs adhering to the base of the cell culture plate were identified and differentiated into osteoclasts. To generate osteoclasts, BMMs (3 × 10³ cells/well) were seeded in medium containing 30 ng/mL M-CSF and 50 ng/mL RANKL and co-cultured for six days with or without AS-A2 at doses of 0.05, 0.5, and 5 µM.
To determine cytotoxicity to the BMMs, 5 × 10³ BMMs were treated with various concentrations of AS-A2 (0.005 to 5 µM) in 96-well plates for one day in the presence of 30 ng/mL M-CSF and 50 ng/mL RANKL.
A bone resorption assay kit (#CSF-BRA-48 KIT; Cosmo Bio, Japan) with calcium phosphate-coated plates was used for the bone resorption pit assays. BMM cells were cultivated for six days in α-MEM medium with 30 ng/mL M-CSF, 50 ng/mL RANKL, and AS-A2. The culture media were replaced every other day. A TRAP staining kit was used to stain the cells for osteoclasts. ImageJ software (National Institutes of Health, USA) was used for measurement of the pit area.
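A simple way to approximate the ImageJ pit-area measurement is intensity thresholding. The sketch below uses a synthetic image; the threshold value and the assumption that resorbed regions appear brighter than the intact coating are ours, not the kit's documented procedure.

```python
import numpy as np

def pit_area_fraction(gray: np.ndarray, threshold: float) -> float:
    """Fraction of the calcium phosphate surface counted as resorbed.
    Pixels above `threshold` are treated as pit (resorbed regions are
    assumed brighter here; invert the comparison if the plate images darker)."""
    pit_mask = gray > threshold
    return float(pit_mask.mean())

# Synthetic 200x200 "plate image" with a bright resorbed patch.
img = np.full((200, 200), 60.0)
img[50:100, 50:120] = 200.0  # mock pit
print(f"pit area: {100 * pit_area_fraction(img, 128.0):.1f} % of field")
```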
RNA extraction and reverse-transcription polymerase chain reaction (RT-PCR)
RAW264.7 cells (1 × 10⁵) were cultured in six-well plates and treated with AS-A2 (1, 5, and 10 µM) or vehicle in the presence of RANKL. The culture medium was changed every other day for six days. TRIzol reagent (Invitrogen, USA) was used to extract total RNA, and reverse transcription into cDNA was performed using a ReverTra Ace qPCR RT kit (Toyobo Biotechnology, Japan). HiPi DNA polymerase premix (ElpisBio, Korea) or i-MaxTM II DNA polymerase (iNtRON Biotechnology, Korea) was used for the RT-PCR steps. Bioneer (Korea) provided the PCR primers for RT-PCR, and all experiments were performed in triplicate. Table 1 lists the specific primer sequences. PCR products were separated on a 2% agarose gel containing a small amount of RedSafe nucleic acid staining solution (iNtRON). The amplified gene bands were visualized under a UV transilluminator (Gel-Doc; Bio-Rad, USA) and were analyzed against a standard 100 bp DNA loading ladder (iNtRON).

Western blot analysis
RAW264.7 cells (5 × 10⁵) were cultured in a 60 mm cell culture dish in α-MEM medium. The cells were treated with RANKL (50 ng/mL) and AS-A2 (1, 5, and 10 µM). After 24 h, the cells were lysed using a protein extraction reagent (iNtRON), as reported previously [15]. Equal quantities of protein were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred to polyvinylidene difluoride membranes (Bio-Rad). The membranes were blocked with 5% non-fat skim milk for 1 h and then probed with specific primary antibodies overnight at 4 °C, followed by horseradish peroxidase-conjugated secondary antibodies for 1 h. A Clarity Western ECL Substrate kit (Bio-Rad) was used to visualize the immunoreactive bands, and ImageJ software was used for quantitative analysis.
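Both the RT-PCR gels and the blots were quantified by band densitometry normalized to β-actin. A minimal sketch of that normalization (all densities are invented for illustration):

```python
# Hypothetical band densities from ImageJ gel analysis; values are made up.
band_density = {"NFATc1": 1850.0, "c-Fos": 1420.0, "beta-actin": 2100.0}

def normalize_to_actin(densities: dict) -> dict:
    """Express each target band relative to the beta-actin loading control,
    the normalization described for both the RT-PCR gels and the blots."""
    actin = densities["beta-actin"]
    return {gene: d / actin for gene, d in densities.items() if gene != "beta-actin"}

print(normalize_to_actin(band_density))  # e.g. {'NFATc1': 0.88, 'c-Fos': 0.68}
```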
Statistical analysis
All experiments were conducted in triplicate, and the data are provided as mean ± SD. A one-way analysis of variance and t-test were used to examine statistical significance. A p-value less than 0.05 was considered significant.
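The one-way ANOVA and t-test described above can be sketched with scipy; the TRAP+ MNC counts below are invented for illustration.

```python
from scipy import stats

# Hypothetical TRAP+ MNC counts per well, triplicates per group.
vehicle = [98, 105, 101]
asa2_low = [74, 80, 77]
asa2_high = [31, 36, 28]

f, p = stats.f_oneway(vehicle, asa2_low, asa2_high)  # one-way ANOVA across groups
print(f"ANOVA: F = {f:.1f}, p = {p:.4f}")

t, p_t = stats.ttest_ind(vehicle, asa2_high)         # pairwise t-test vs. vehicle
print(f"t-test vs vehicle: p = {p_t:.4f} (significant if < 0.05)")
```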
AS-A2 reduces osteoclast differentiation in BMMs
This study examined how AS-A2 affected RANKL-induced osteoclast development in BMMs. Cell viability was determined with the CCK-8 kit. Compared to the control group, AS-A2 did not have cytotoxic effects on BMMs at the concentrations utilized (Fig. 2A). For the RANKL-induced osteoclast differentiation assay, BMMs were treated with varying concentrations of AS-A2 (0.01, 0.1, 1, and 5 µM) for six days. RANKL significantly increased osteoclast development, as evaluated by MNC formation in the TRAP staining data. In the BMMs, the number of TRAP-positive MNCs was dramatically reduced by treatment with 0.1, 1, and 10 µM AS-A2 (Fig. 2C). Thus, AS-A2 prevented RANKL-induced osteoclast development from BMMs in vitro.
AS-A2 inhibits RANKL-induced osteoclast differentiation signaling pathways
The MAPK pathways, essential signaling pathways during osteoclast differentiation [16], were examined to learn more about the molecular mechanism of AS-A2 action on RANKL-induced differentiation in RAW264.7 cells. In RANKL-induced osteoclasts, phosphorylation of ERK1/2, JNK, and p38 increased within 15 min compared to the vehicle group. Phosphorylation of ERK1/2, JNK, and p38 in the AS-A2-treated cells was considerably reduced (Fig. 3). These results confirm that AS-A2 downregulates the MAPK pathway while inhibiting RANKL-induced osteoclast differentiation in RAW264.7 cells.
AS-A2 suppresses RANKL-induced expression of NFATc1 and c-Fos in RAW264.7 cells
This study examined the effect of AS-A2 on the mRNA and protein expression of NFATc1, a vital transcription factor of osteoclast differentiation [15]. c-Fos expression, which has been linked to osteoclast development via NFATc1 regulation, was also studied [18]. RT-PCR showed that RANKL increased the mRNA expression of NFATc1 and c-Fos, whereas 10 µM AS-A2 dramatically diminished NFATc1 and c-Fos transcription (Fig. 4A). Furthermore, western blotting showed that RANKL induced NFATc1 and c-Fos protein expression, whereas treatment with 10 µM AS-A2 over six days significantly decreased both proteins (Fig. 5A). These findings show that AS-A2 suppresses NFATc1 and c-Fos transcription during osteoclast differentiation.
AS-A2 suppresses bone resorption in BMMs
When BMMs were cultivated on calcium phosphate-coated plates for six days in the pit formation assay, M-CSF and RANKL favored the formation of resorption pits. As shown in Fig. 6A, resorption pits formed after RANKL treatment were greater in number and area compared to the vehicle cells, but 5 and 10 µM AS-A2 decreased the RANKL-induced size of the bone resorption pits. These findings suggest that AS-A2 inhibits osteoclast development, resulting in a smaller bone pit region (Fig. 6B).

Fig. 4. Effects of AS-A2 on osteoclast-related gene expression. RAW264.7 cells were cultured in the presence or absence of RANKL (50 ng/mL) with AS-A2 (0 to 10 μM), and the growth medium was changed on alternate days for six days. (A) TRAP, cathepsin K, MMP-2, MMP-9, NFATc1, c-Fos, and RANK mRNA expression was evaluated using RT-PCR. (B) The quantified band density of each gene was normalized to that of β-actin. The data are based on three independent experiments. AS-A2, aster saponin A2; RANKL, receptor activator of nuclear factor kappa-B ligand; TRAP, tartrate-resistant acid phosphatase; MMP, matrix metalloproteinase; RANK, receptor activator of nuclear factor kappa-B; RT-PCR, reverse-transcription polymerase chain reaction. * p < 0.05; ** p < 0.01; *** p < 0.001 vs. vehicle-treated cells.
DISCUSSION
Increased RANKL activity causes osteoclast differentiation, leading to bone resorption and illnesses, such as osteoporosis, Paget's disease, rheumatoid arthritis, and bone metastasis [19]. As a result, inhibiting osteoclast differentiation and function may aid in preventing or treating osteoclast-related disorders [20].
Bisphosphonates and denosumab are well known antiresorptive agents. However, long-term use of these agents has serious side effects: excessive inhibition of bone resorption may suppress bone formation and cause necrosis [21]. Hence, new drugs should be introduced for bone diseases. AT is used to treat various illnesses throughout Eastern Asia, including Korea and China [9]. Cheng et al. [22] found polyphenols, triterpenes, and saponins in the roots of AT. Different compounds comprising caffeoylquinic acids, ASs, and aster peptides from the root of AT have expectorant, antitussive, and anti-inflammatory properties [10]. AT has anti-inflammatory properties attributable to blockade of the MAPK and NF-ĸB signaling pathways [12,13]. This study examined the effects of AS-A2 on RANKL-induced differentiation in RAW264.7 cells and BMMs. In vitro, AS-A2 blocked RANKL-induced osteoclast development without causing cytotoxicity. This study also examined how the inhibitory action of AS-A2 affected BMM viability and bone resorption. At the doses tested, AS-A2 dramatically decreased the RANKL-induced bone pit area. Further studies in post-ovariectomy osteoporosis will be needed to better understand the regulatory effects of AS-A2. RANKL, a critical factor regulating mature osteoclast function and viability, activates osteoclast differentiation from macrophages [23].
The MAPK pathway is activated by the pairing of RANKL to its receptor RANK [24]. Furthermore, the RANK/RANKL interaction enhances specific intracellular signaling transduction pathways of MAPKs, such as ERK, JNK, and p38, which are important in osteoclastogenesis [25]. Cell survival, development of the ruffled border, and maintenance of cell polarity are connected to ERK activity [26]. The development of osteoclast precursors into osteoclasts, and hence bone resorption, is dependent on p38-mediated signals [27]. JNK is involved in the generation of osteoclast precursors and the survival of osteoclasts [28]. These RANKL-stimulated ERK and JNK phosphorylations were suppressed by AS-A2 in RAW264.7 cells. Overall, AS-A2 plays a significant role in suppressing the MAPK pathways, including ERK and JNK, while inhibiting osteoclast differentiation and function.
NFATc1 and c-Fos are vital transcription factors for osteoclastogenesis and the differentiation of osteoclasts [29,30]. NFATc1 promotes RANKL signaling in the development of osteoclasts as a downstream regulator of c-Fos, NF-ĸB, and MAPKs [18]. By interacting with c-Fos, NFATc1 induces the auto-amplification of NFATc1 and the transcription of osteoclast-specific genes, such as TRAP, MMP-9, and cathepsin K [28,31,32]. AS-A2 suppressed the expression of RANKL-induced NFATc1 and c-Fos, which is consistent with earlier studies indicating that activation of the RANKL/RANK axis leads to downstream NFATc1 expression. Cathepsin K is a cysteine proteinase found primarily in osteoclasts that cleaves important bone matrix proteins and plays a crucial role in bone resorption by degrading the organic phase of bone [18,[33][34][35]. Furthermore, AS-A2 also suppressed c-Fos, NFATc1, RANK, MMP-9, TRAP, and cathepsin K, genes that are highly active in RANKL-induced osteoclastogenesis.
In conclusion, AS-A2 inhibited RANKL-induced osteoclast differentiation in RAW264.7 cells and BMMs. In addition, AS-A2 inhibited MAPK pathway activation and downregulated transcription factors such as c-Fos and NFATc1. Therefore, AS-A2 isolated from AT appears to be a viable therapeutic option for osteolytic illnesses, such as osteoporosis, Paget's disease, and osteogenesis imperfecta.
"year": 2022,
"sha1": "8242567f066aeb090c05f46953982dd26567d42e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4142/jvs.21246",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cf87079647bc375a736894e32f5e72a5ac880401",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
High-throughput, Microscale Protocol for the Analysis of Processing Parameters and Nutritional Qualities in Maize (Zea mays L.)
Maize is an important grain crop in the United States and worldwide. However, maize grain must be processed prior to human consumption. Furthermore, whole grain composition and processing characteristics vary among maize hybrids and can impact the quality of the final processed product. Therefore, in order to produce healthier processed food products from maize, it is necessary to know how to optimize processing parameters for particular sets of germplasm to account for these differences in grain composition and processing characteristics. This includes a better understanding of how current processing techniques impact the nutritional quality of the final processed food product. Here, we describe a microscale protocol that both simulates the processing pipeline to produce cornflakes from large flaking grits and allows for the processing of multiple grain samples simultaneously. The flaking grits, the intermediate processed products, or final processed product, as well as the corn grain itself, can be analyzed for nutritional content as part of a high-throughput analytical pipeline. This procedure was developed specifically for incorporation into a maize breeding research program, and it can be modified for other grain crops. We provide an example of the analysis of insoluble-bound ferulic acid and p-coumaric acid content in maize. Samples were taken at five different processing stages. We demonstrate that sampling can take place at multiple stages during microscale processing, that the processing technique can be utilized in the context of a specialized maize breeding program, and that, in our example, most of the nutritional content was lost during food product processing.
Introduction
Maize (Zea mays L.) is the most widely cultivated grain crop in the United States 1 . In 2016, 71.12 billion kg (2.8 billion bushels) of maize were devoted to human consumption 2 , indicating the importance of maize in the American diet. One of the great benefits of maize grain is that, while it is a relatively inexpensive commodity, it also contains beneficial phytochemicals such as phenolics, unsaturated fatty acids, and protein 3 . As such, maize-based food products may be relatively inexpensive sources of beneficial phytochemicals for humans.
However, maize must be processed prior to human consumption. As a result, processing activities often impact the nutritional value of the final processed food product 4 . For instance, during the production of snack foods and ready-to-eat breakfast cereals (i.e., cold cereals), maize kernels are dry milled to produce large flaking grits. During dry milling, the bran and germ are physically removed, leaving only endosperm material. Since many phytochemicals are predominantly located in either the bran or the germ (e.g., phenolics and unsaturated fatty acids, respectively), this may result in a significant decrease in the nutritional value of the processed food product 4 . Conversely, downstream processing steps may improve the nutritional value. For instance, many food product processing techniques include cooking, baking, or toasting. The thermal stresses encountered during these stages may improve the bioavailability of beneficial phytochemicals 5 . From a food science and human nutrition perspective, it would be interesting to know not only how processing affects the nutritional value of processed food products but also how adjustments to processing parameters may impact other sensory qualities, including color, texture, and taste. A protocol that allows such qualities to be monitored throughout processing could be used to select maize varieties for the improvement of the final processed maize food product. Two of the major obstacles to analyzing such characteristics in the past were the scale and throughput of available protocols. For instance, during the production of breakfast cereals for laboratory analysis, Fast and Caldwell 6 suggested the use of 45.4 kg of large flaking grits. This mass of large flaking grits far exceeds the amount of large flaking grits or large flaking grit materials 7 that can be produced from the small plot field trials that are typical in plant breeding programs. Thus, the development of a microscale laboratory protocol for the production of processed food products could enable (1) plant breeders to improve maize varieties for nutritional and sensory traits that are of importance to food processors and (2) processors to efficiently design and test alternative processing strategies.
1. Place a 15 L canning pressure cooker on an electric hotplate.
2. Add 1 L of tap water into the canning pressure cooker and heat to 100 °C.
3. While the water heats, place a 100 g sample of industrial flaking grits or flaking grit material (12% moisture content, wet basis) 7 in a 1 quart canning jar.
4. Add a sugar-salt solution consisting of 200 mL of distilled water, 2 g salt, 6 g granulated white sugar, and 2 g liquid malt extract (see the scaling sketch following this section).
NOTE: Multiple samples can be analyzed at once, although the exact number of samples will depend on the size of the canning pressure cooker.
5. Mix the solution with the flaking grit material using a glass stirring rod.
6. After the water in the canning pressure cooker begins to boil, add 1 L of tap water to cool the water in the canning pressure cooker.
7. Place the canning jars in the canning pressure cooker such that they are equidistant from each other and from the wall of the canning pressure cooker.
8. Allow the water to reach a rolling boil. Place the lid on the canning pressure cooker.
9. Cook the large flaking grits or flaking grit material at 15 psi for 1 h. Allow the canning pressure cooker to cool and depressurize completely before opening.
10. Remove the lid from the canning pressure cooker using heat-resistant gloves.
11. Remove the canning jars from the canning pressure cooker using tongs. Place the jars on a heat-resistant surface.
NOTE: The resulting intermediate product at this point is cooked grits.
12. If using flaking grit materials as produced using the protocol outlined by Rausch et al. 7
13. Place 30 g of cooked grits (per processed sample) in a weigh boat and dry in an oven at 65 °C for 12 h. After drying, grind the cooked grit sample to a fine powder using a coffee mill and store in a cool dry place for phenolics analysis.
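Since the cooking solution in step 4 is specified per 100 g of grits, it can be scaled when a field plot yields slightly more or less grit material. Linear scaling is an assumption on our part; the protocol was validated at 100 g per sample.

```python
# Per 100 g of flaking grits (12% moisture, wet basis), step 4 specifies
# 200 mL distilled water, 2 g salt, 6 g sugar, and 2 g liquid malt extract.
RECIPE_PER_100G = {"water_mL": 200.0, "salt_g": 2.0, "sugar_g": 6.0, "malt_g": 2.0}

def cooking_solution(grit_mass_g: float) -> dict:
    """Scale the sugar-salt cooking solution linearly to the grit mass on hand."""
    factor = grit_mass_g / 100.0
    return {k: round(v * factor, 1) for k, v in RECIPE_PER_100G.items()}

print(cooking_solution(85.0))  # e.g. a slightly short sample from a small plot
```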
Produce Baked Grits
1. Place the remaining cooked grits on a foil-lined baking sheet.
1. To improve throughput, bake two samples concurrently. To do this, create two foil boats on a baking sheet. This eliminates the possibility of cross-contamination between the samples.
3. At the end of the 50 min time period, remove the baking dish containing the first two samples and allow to cool at room temperature for 30 min.
4. At the end of the cooling period, take a 30 g sample from the baked grits intermediate product. Place this sample in a weigh boat in an oven at 65 °C for 12 h. After drying, grind the baked grit sample to a fine powder using a coffee mill and store for phytochemical analysis.
Produce Final Toasted Cornflake Product
1. To increase throughput, store the dried sample in a foil-covered weigh boat at room temperature until multiple samples (typically 24 or more) are ready for toasting.
4. Pre-heat a convection oven to 204.4 °C (400 °F). Place the dried untoasted flake sample on a flat baking sheet. Spread the sample so that minimal overlapping of the sample occurs. This ensures even toasting.
5. Place the sample in the oven for 60-90 s until it achieves the proper color (see Figure 6).
6. Allow the sample to cool for approximately 5 min at room temperature. This yields the final toasted cornflake.
7. Grind the toasted cornflake sample into a fine powder using a coffee mill.
Phytochemical and Statistical Analyses
NOTE: Depending upon the exact phytochemical of interest and the laboratory equipment available to researchers, these analytical protocols may change.
1. Determine phytochemical content using a protocol such as that outlined in Butts-Wilmsmeyer et al. 3 Follow all safety procedures provided in the protocols.
2. Analyze the data using an appropriate statistical model.
NOTE: These example data were analyzed using a split-plot within a randomized complete block design (RCBD), where the whole-plot unit was the field plot from which grain was harvested and the subplot unit was the processing stage. Analyses were conducted in PROC MIXED of SAS (version 9.3), and figures were produced in R.
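The split-plot RCBD analysis was run in PROC MIXED of SAS; the sketch below shows an approximately equivalent mixed model in Python with the field plot as a random effect. The data, effect sizes, and single residual error term are simplifying assumptions for illustration, not the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: one ferulic acid value per field plot x stage.
rng = np.random.default_rng(1)
stages = ["whole grain", "flaking grit", "cooked", "baked", "toasted flake"]
rows = [
    {"plot": f"plot{p}", "hybrid": f"h{p % 3}", "stage": s,
     "ferulic": 1500.0 - 250.0 * i + rng.normal(0.0, 40.0)}
    for p in range(12) for i, s in enumerate(stages)
]
df = pd.DataFrame(rows)

# The field plot (whole-plot unit) enters as a random effect; processing
# stage is the subplot factor, so stage effects are tested against the
# within-plot variation, approximating the split-plot analysis.
fit = smf.mixedlm("ferulic ~ C(hybrid) * C(stage)", df, groups=df["plot"]).fit()
print(fit.summary())
```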
Representative Results
This protocol allowed for the sampling and nutritional analysis of a processed maize food product, cornflakes, beginning with large flaking grits and continuing through intermediate stages of processing to the final product. This protocol was coupled with the protocol outlined by Rausch et al. 7 to produce flaking grit components from hybrid grain samples. Thus, information regarding the nutritional content of hybrid samples analyzed at the whole grain, large flaking grit, cooked grit, baked grit, and toasted cornflake processing stages is presented. Regardless of the hybrid cultivar under evaluation, most of the insoluble-bound ferulic acid and p-coumaric acid was removed during dry milling (Figure 7). Another decrease in the insoluble-bound ferulic acid and p-coumaric acid occurred during cooking. The decrease in the insoluble-bound ferulic acid and p-coumaric acid content observed during cooking may be due to the removal of the small amount of non-endosperm material that remained in the large flaking grit material. Multi-degree of freedom contrasts indicated that both the ferulic acid and p-coumaric acid content remained stable throughout the remainder of processing, regardless of the hybrid (Table 1).
Furthermore, the initial ranking of the hybrid cultivars in terms of their insoluble-bound ferulic acid content and p-coumaric acid content were not indicative of the ranking of the hybrids at the final processing stage (Table 2 and Figure 8). In other words, the initial content in the whole kernel was not indicative of which hybrid would possess the most insoluble-bound ferulic acid or p-coumaric acid at the end of processing. Thus, in order to study the genetic traits underlying the nutritional characteristics of processed food products, microscale processes must be used to study maize grain.
Table 1. Multi-degree of freedom contrast results by hybrid: F-values and p-values for ferulic acid and p-coumaric acid (numeric entries not shown here).

Step-specific recommendations and troubleshooting (Step | Recommendation | Troubleshooting):
(general) | Grind sample to a fine powder. | If phytochemical analyses do not appear to work, ensure that the sample has been ground to a fine powder such that there is a greater surface area exposed to solvents.
2.1 | Do not allow samples to touch one another. They will become cross-contaminated. | NA
2.1 | Bake two samples concurrently by making individual foil boats for them on a cooking sheet. | NA
2.2.1 | Stir the sample after 25 min to ensure even baking. | If the sample does not appear to have baked evenly, stir at more frequent intervals (e.g., every 15 min).
3.1 | Place the baked grit dough in a parchment paper pouch. This ensures the sample will not be lost during pressing. | If the sample starts to come out of the end of the parchment paper pouch, make the pouch longer. We found that 1 m appeared to be sufficient.
3.2 | Leave the parchment paper pouch closed. | If the cutting tool cuts through the parchment paper, use a duller tool. We found that a pizza cutter was the best tool for cutting the baked grits into squares; it did not cut through the parchment paper, and the baked grits could still be cut very quickly into squares.
3.5 | Become very comfortable with the color and do not toast for too long. Store multiple dried baking grit samples in individual foil-covered weigh boats until multiple samples are ready for toasting. | If the sample becomes too dark, reduce the amount of time used to toast.
Discussion
Changes in the nutritional content of maize-based food products throughout processing are likely due to the removal of grain components and thermal stress 5,10 . However, exactly how processing affects various nutrients had been studied in relatively little detail prior to the development of this protocol. Here, we present a microscale laboratory method for studying nutritional and sensory traits in maize throughout food product processing.
This protocol allowed sampling to take place at the flaking grit stage, after cooking, after baking, and after the shearing forces encountered during rolling. Thus, with the additional analysis of harvested corn grain, the protocol facilitates the analysis of the initial-stage substrate as well as the final food product and intermediary stages of processing to elucidate changes in composition related to nutrition. This key feature of the protocol enables nutritional and sensory traits to be analyzed throughout processing while also enabling the researcher to choose which analytical chemistry protocols to use for those specific analyses. Another key feature of this protocol is its efficiency. First, this protocol uses a small sample, which is appropriate in a plant breeding setting (Table 3). One kg of grain tended to produce approximately 0.3 kg of large flaking grit constituents, and roughly one third of the large flaking grit constituents produced were needed for processing. Secondly, this protocol allowed for the laboratory processing of approximately 16 samples per day, which is much more efficient than the previous protocol that required large sample sizes 6 . This protocol could easily be modified to mimic the production of other processed maize food products. For instance, large flaking grits are used in the production of various snack foods in addition to ready-to-eat breakfast cereals 9 . The laboratory protocol for the production of these snack foods could foreseeably include adjustments to cooking times and cooking solutions or adjustments to baking times. It is also possible that an adapted version of this protocol could be used for the study of other grains and their respective processed products. Processed grain products often include cooking, baking, or toasting processing stages that could be mimicked using an adapted version of the protocol presented here.
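The sample-size arithmetic above (approximately 0.3 kg of grits per kg of grain, roughly one third of which is processed, at about 16 samples per day) can be turned into a quick planning calculator. The 100 g per-sample grit requirement is taken from the protocol; the rest is back-of-envelope and assumes the reported yields hold for new germplasm.

```python
GRIT_YIELD = 0.3          # ~0.3 kg flaking grit constituents per 1 kg grain
GRIT_PER_SAMPLE_KG = 0.1  # the protocol processes 100 g of grits per sample

def grain_needed_kg(samples: int) -> float:
    """Minimum grain required to process `samples` microscale runs."""
    return samples * GRIT_PER_SAMPLE_KG / GRIT_YIELD

def days_required(samples: int, throughput_per_day: int = 16) -> int:
    """Processing days at the reported laboratory throughput (ceiling division)."""
    return -(-samples // throughput_per_day)

print(grain_needed_kg(48), "kg grain;", days_required(48), "days")  # 16.0 kg; 3 days
```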
An important limitation of this protocol is that it has very few stopping points, i.e., once a processing step begins, it and subsequent steps must be completed (Table 3). There is a single stopping point after the production of the cooked grits from the flaking grits. Only if necessary, the cooked grits can be placed in a sealed container (e.g., a sealed canning jar) and refrigerated for at most two days. However, storing the cooked grits for longer time periods appeared to alter the sample. Furthermore, once baking begins, there are no stopping points until after the baked grit dough has been rolled, cut, and dried.
Conclusion
Through these example results (see Butts-Wilmsmeyer et al. 4 for more information), we demonstrated that nutritional content could be monitored throughout processing. Furthermore, key processing stages where nutritional changes occurred were identified. Additionally, the small sample size required for this processing protocol enabled the study of multiple hybrids within the context of a plant breeding program. Using these hybrids, we identified which set of hybrids maintained the highest concentrations of insoluble-bound ferulic acid and p-coumaric acid throughout processing. These traits are important indications of the final toasted cornflakes' prebiotic potential. 11,12,13 These results could be used directly to help plant breeders establish breeding populations for improved prebiotic potential of processed maize products.
One of the major advantages of this processing protocol is that it does not limit the nutritional analyses that can be conducted. If a phytochemical protocol exists for analysis of the grain, then it can be used to study the processed products. Furthermore, because this processing protocol enables laboratory-scale food processing and nutritional analyses to be conducted independently, multiple phytochemicals can be studied. The analytical protocols for the study of phytochemical content should use small sample sizes, however, due to the small amount of intermediate and final processing products generated using the laboratory-scale processing protocol.
Disclosures
The authors have nothing to disclose.
"year": 2018,
"sha1": "5c8fbb94a8139a3010c27e11d21f8cacf1483706",
"oa_license": "CCBYNCND",
"oa_url": "https://www.jove.com/pdf/57809/high-throughput-microscale-protocol-for-analysis-processing",
"oa_status": "BRONZE",
"pdf_src": "Anansi",
"pdf_hash": "5c8fbb94a8139a3010c27e11d21f8cacf1483706",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Mathematics"
]
} |
Detecting Medication Risks among People in Need of Care: Performance of Six Instruments
Introduction: Numerous tools exist to detect potentially inappropriate medication (PIM) and potential prescribing omissions (PPO) in older people, but it remains unclear which tools may be most relevant in which setting. Objectives: This cross-sectional study compares six validated tools in terms of PIM and PPO detection. Methods: We examined the PIM/PPO prevalence for all tools combined and the sensitivity of each tool. The pairwise agreement between tools was determined using Cohen's Kappa. Results: We included 226 patients in need of care (median (IQR) age 84 (80–89) years). The overall PIM prevalence was 91.6% (95% CI, 87.2–94.9%) and the overall PPO prevalence was 63.7% (95% CI, 57.1–69.9%). The detected PIM prevalence ranged from 76.5% for FORTA-C/D to 6.6% for anticholinergic drugs (German-ACB). The PPO prevalences for START (63.7%) and FORTA-A (62.8%) were similar. The pairwise agreement between tools was poor to moderate. The sensitivity of PIM detection was highest for FORTA-C/D (55.1%) and increased to 79.2% when distinct items from STOPP were added. Conclusion: Using a single screening tool may not have sufficient sensitivity to detect PIMs and PPOs. Further research is required to optimize the composition of PIM and PPO tools in different settings.
Introduction
Older people are more frequently affected by polypharmacy, and more susceptible to adverse drug reactions (ADR), than younger people due to multimorbidity and physiological aging processes [1][2][3][4][5]. As a guidance for clinicians, a number of consensus-based instruments have been developed listing potentially inappropriate medication (PIM) to be avoided or used with caution in older people. Instruments alerting physicians to potential prescribing omissions (PPO) have also been developed [6][7][8].
Internationally prominent examples include the START/STOPP criteria and the EU(7)-PIM list, while the PRISCUS and FORTA lists are German developments [9][10][11][12]. More recently, the STOPPFall list has been developed by a European geriatrics society task force, which alerts prescribers to fall-risk-increasing drugs [13].
Data Source and Study Population
BaCoM is a multicenter prospective registry study of patients in need of care with three study centers in Bavaria, Germany (LMU Munich, UK Würzburg and FAU Erlangen, registered in the German Clinical Trials Register: DRKS 26039).
The analyzed BaCoM participants were those with and without a prior history of COVID-19 who were at risk of PIM or PPO, i.e., above 65 years of age and taking one or more long-term medications. They were enrolled by their respective GP or by study physicians and had to be in need of care or support. The latter was defined as receipt of financial support by public care insurance according to an officially assessed care level ("Pflegegrad"), or a score of ≥5 on the 7-point Clinical Frailty Scale (CFS) [16,17]. Exclusion criteria were an estimated life expectancy of <6 months (as judged by the recruiting physician), unclear legal residency status, and lack of health insurance.
Data were collected by trained study assistants, including sociodemographic and health status data to describe the study population. Apart from clinical frailty, the health status also included data on cognitive function (assessed by a Six-Item Screening Tool) and a Montreal Cognitive Assessment Test Blind (MoCA-BLIND) in those with less than three errors in the Six-Item Screening Tool [18][19][20]. Medical diagnoses, medications taken, and vital signs such as blood pressure, heart rate, and forced expiratory volume in 1 s (FEV1) were documented to apply PIM and PPO instruments. Medication schedules and diagnosis lists were either provided by the GP or collected by the study team at the site at which the participant received care, e.g., nursing homes or, in the case of outpatient care, at the participant's home. The database source therefore partly comprised codes referring to International Statistical Classification of Diseases and Related Health Problems (ICD-coded) diagnosis lists and standardized medication schedules, but also handwritten lists extracted from nursing records.
Definition of PIMs and PPOs
We included a total of six different instruments designed to detect PIMs or PPOs or both. A brief description of each tool, highlighting the structure, number of items, and data categories required for their application, is provided in Table 1. All PIM instruments included in this study were applicable to patients aged 65 years or older (without restrictions), and comprised the FORTA list, STOPP, EU(7)-PIM, PRISCUS, German-ACB, and STOPPFall [9][10][11][12][13][14]. From the FORTA list, we only considered medications listed as "C = questionable" and "D = avoid", according to the authors' recommendations [12]. FORTA, STOPP, EU(7)-PIM, and PRISCUS are generic tools, in the sense that they were designed to cover medication risks across all drug groups, whereas German-ACB [14] and STOPPFall [13] were specifically developed to identify anticholinergic and fall risk increasing drugs (FRIDs), respectively.
For the German-ACB, we classified as PIMs only medications with an ACB score of ≥3 [14]. For STOPPFall, we considered all 14 drug groups classified as FRIDs, but only defined them as PIMs when the participant's risk of falls was increased by one or more of the conditions listed in the accompanying STOPPFall deprescribing tool (e.g., diuretics in the case of hypotension). As PPO tools we included START [9] and FORTA-A (i.e., medications listed as "A = indispensable").
Measurement of PIMs and PPOs
All medications were coded using the Anatomical Therapeutic Chemical (ATC) classification and the diagnoses were coded using ICD-10 [21,22]. Where medication doses were required to apply the included PIM/PPO instruments, daily doses were calculated from the instructions provided. When dosage information was missing, these medications were not included in criteria that considered dose. In cases where dosing instructions were "as required", these were not taken into account in criteria considering only long-term medication.
Criteria that explicitly refer to the duration of intake (e.g., longer than six weeks) were not applied in any patients because this information was not commonly available. Where medical diagnoses were required to apply the respective PIM or PPO instruments, we only considered explicitly documented diagnoses (i.e., we did not assume diagnoses based on medication profiles).
The PIM-defining criteria from each tool were transcribed into a programming language and applied to the data using RStudio V.2022.07.2.
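The study's R code is not reproduced here; purely for illustration, the sketch below (in Python rather than R) shows how two criteria of the kind described above might be encoded as predicates over coded patient data. The ACB score table and the specific ATC/ICD prefixes are placeholder assumptions, not the instruments' actual item lists.

```python
# Illustrative sketch (not the study's actual R code): encoding two PIM
# criteria as predicates over a patient's coded medication/diagnosis data.
# The ACB scores and ATC/ICD prefixes below are placeholders, not the
# instruments' real item lists.

EXAMPLE_ACB_SCORES = {"N05BB01": 3, "N04AA02": 3, "C03CA01": 1}  # hypothetical

def german_acb_pims(atc_codes):
    """Return medications with an anticholinergic burden score >= 3."""
    return [atc for atc in atc_codes if EXAMPLE_ACB_SCORES.get(atc, 0) >= 3]

def stoppfall_diuretic_pim(atc_codes, icd_codes):
    """Diuretics (ATC C03*) count as a FRID-PIM only if the patient also
    has a documented fall-risk condition, e.g. hypotension (ICD-10 I95*)."""
    takes_diuretic = any(atc.startswith("C03") for atc in atc_codes)
    has_hypotension = any(icd.startswith("I95") for icd in icd_codes)
    return takes_diuretic and has_hypotension

patient = {"atc": ["N05BB01", "C03CA01"], "icd": ["I10", "I95.1"]}
print(german_acb_pims(patient["atc"]))                         # ['N05BB01']
print(stoppfall_diuretic_pim(patient["atc"], patient["icd"]))  # True
```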
Data Analysis
In order to examine the prevalence of each PIM and PPO instrument, all instruments were first applied separately, and the prevalence was calculated as the proportion of patients (with 95% confidence interval) with one or more respective PIMs or PPOs. As a result, each medication taken by each patient was classified as a PIM (or not) or a PPO (or not) according to each tool. In order to examine the sensitivity of each PIM and PPO instrument, we defined PIMs and PPOs identified by any of the respective instruments as the gold standard. The sensitivity of each tool was then calculated as the proportion (with 95% confidence interval) of all PIMs/PPOs detected by that instrument. Similarly, we calculated the proportion of PIMs/PPOs uniquely detected by each instrument, i.e., not detected by any of the others. The concordance among the different tools was determined by an analysis of interrater reliability using Cohen's Kappa, and overlaps between tools were visualized using Venn diagrams [23]. In order to determine which proportions of PIMs/PPOs would be detected by which combination of PIM/PPO tools, we started with the instrument with the highest PIM/PPO prevalence and then repeatedly added the tool that detected the most additional PIMs/PPOs not detected by the previous tools. The findings were visualized using a Pareto chart. All confidence intervals were calculated using the exact binomial test [24].
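As a rough sketch of these calculations (with randomly generated stand-in data, not the study dataset), the exact binomial confidence interval and Cohen's Kappa might be computed as follows:

```python
# Sketch of the core statistics (hypothetical data, not the study dataset):
# prevalence with an exact (Clopper-Pearson) binomial CI, and pairwise
# agreement between two PIM tools via Cohen's kappa.
import numpy as np
from scipy.stats import binomtest
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
tool_a = rng.integers(0, 2, size=226)  # 1 = patient has >=1 PIM per tool A
tool_b = rng.integers(0, 2, size=226)

k, n = int(tool_a.sum()), len(tool_a)
ci = binomtest(k, n).proportion_ci(confidence_level=0.95, method="exact")
print(f"prevalence {k/n:.1%} (95% CI {ci.low:.1%}-{ci.high:.1%})")

print("Cohen's kappa:", cohen_kappa_score(tool_a, tool_b))
```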
Characteristics of the Study Population
Table 2 shows the characteristics of the study population, comprising 226 participants with a median (IQR) age of 84 (80 to 89) years, with most (76.6%) aged ≥ 80 years and about one fifth (22.6%) being ≥ 90 years old. The majority (71.2%) of participants were female, and three quarters (74.6%) were residents of long-term care facilities. The median (IQR) score on the CFS was 6 (5 to 7), consistent with moderate frailty, and over half (53.3%) of participants achieved less than 18 points on the MoCA Blind Assessment, consistent with mild cognitive impairment [17][18][19]. The median (IQR) on the Charlson Comorbidity Index (CCI) was 3 (1 to 5), corresponding to moderate severity of comorbidities, and a quarter (26.1%) suffered from severe comorbidities (CCI score ≥ 5) [25].
Three quarters (75.2%) of patients had a documented diagnosis of hypertension, almost a third (31.9%) had atrial fibrillation, and more than 20% were affected by diabetes (27.0%), dyslipidemia (27.9%), and heart failure (23.5%). In addition, 21.2% had a documented diagnosis of depression.
Application of PIM and PPO Instruments to Available Data
Of the total 114 criteria of the START/STOPP tool, 91 items (30 items for START and 61 items for STOPP) could be applied. For the remaining 23 criteria, the required data were not available for the sample [27]. Missing but required data included laboratory data, metrics from medical exams, past vital signs, and the date when a diagnosis was made or a medication was prescribed. For FORTA, we excluded the vaccination- and cancer-related sections because vaccinations and ongoing chemotherapy were not consistently documented in the available data. For all other instruments, data were available to apply all items. Table 3 shows that, considering all PIM tools together, the PIM prevalence (proportion (95% CI) of patients with ≥ 1 PIM) was 91.6 (87.2–94.9)%, 79.6% had two or more PIMs, and more than half (57.1%) had four or more PIMs. However, the PIM prevalence varied considerably by tool, and was highest for FORTA C/D (76.5 (70.5–81.9)%), followed by STOPP (65.9 (59.4–72.1)%) and EU(7)-PIM (61.9%), with lower prevalences detected by STOPPFall (36.3%), PRISCUS (12.8%), and German-ACB (6.6%).
Cumulative Sensitivity of Combining PIM Instruments
The Pareto chart in Figure 2a shows (as bars) the percentage of all PIMs detected by FORTA-C/D, while the remaining bars show the percentage of new PIMs additionally detected by each tool, after application of the previous tool(s). The line shows the cumulative sensitivity (i.e., the percentage of PIMs detected) resulting from the addition of each tool. Since PRISCUS did not identify any PIMs exclusively, this tool was not considered in this analysis. Starting with FORTA C/D (which had the highest sensitivity, of 55.1%), adding STOPP achieves a cumulative sensitivity of 79.2%, and further adding EU(7)-PIM achieves a sensitivity of 94.1%. Figure 2b shows that after application of FORTA-C/D and STOPP, adding PIM criteria for four drugs (apixaban, rivaroxaban, and sodium picosulfate from the EU(7)-PIM list; diuretics from STOPPFall) increases the sensitivity by 10.6% to 89.8%. The addition of criteria relating to opioids, antiepileptics and antipsychotics (from STOPPFall), and metoclopramide (from EU(7)-PIM), increases the sensitivity further by 3.7% to 93.5%.
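The greedy logic behind such a Pareto analysis can be sketched as follows; the PIM identifiers and tool overlaps below are invented for illustration, not taken from the study data.

```python
# Greedy tool-combination sketch mirroring the Pareto analysis: start from
# the most sensitive tool, then repeatedly add the tool that detects the
# most PIMs not yet covered. PIM sets here are hypothetical identifiers.
def greedy_combination(detected_by_tool):
    all_pims = set().union(*detected_by_tool.values())
    covered, order = set(), []
    remaining = dict(detected_by_tool)
    while remaining and covered != all_pims:
        name, pims = max(remaining.items(), key=lambda kv: len(kv[1] - covered))
        if not pims - covered:
            break  # no tool adds anything new (cf. PRISCUS in this study)
        covered |= pims
        order.append((name, len(covered) / len(all_pims)))
        del remaining[name]
    return order  # [(tool, cumulative sensitivity), ...]

tools = {"FORTA-C/D": {1, 2, 3, 4, 5}, "STOPP": {4, 5, 6, 7},
         "EU(7)-PIM": {7, 8}, "PRISCUS": {2, 3}}
print(greedy_combination(tools))
```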
Summary of Findings
This cross-sectional study of a convenience sample of 226 people in need of care, aged ≥ 65 years, in Bavaria (Germany) shows that the vast majority of participants received polypharmacy (92.5%). The vast majority (91.6%) also received at least one PIM after the application of six PIM tools together, with 79.6% receiving two or more PIMs, and over half (57.1%) receiving four or more PIMs. Similarly, most (82.7%) participants had at least one PPO considering FORTA-A and START together, and 50.0% had two or more PPOs. More than three quarters of the analyzed patients (76.1%) were affected by both PIMs and PPOs.
No single PIM instrument reached full PIM coverage, and the detected PIM prevalence varied considerably by tool, ranging from 76.5% for FORTA C/D to 6.6% for German-ACB ≥ 3. Pairwise agreement between the PIM tools was poor to moderate and highest between PRISCUS and German-ACB (Cohen's Kappa 0.42 (0.23-0.59)). FORTA C/D had the highest sensitivity of PIM detection (it identified 55.1% of all PIMs), and it also detected the most PIMs not identified by any other tool. However, stratification by drug group revealed that while FORTA-C/D had a high sensitivity for the detection of benzodiazepine, other psycholeptic, spironolactone, psychoanaleptic, and beta-blocker PIMs, it only detected a minority of low-dose aspirin, opioid, and non-opioid analgesic PIMs. We found that combining items included in FORTA C/D and STOPP achieved a cumulative sensitivity of PIM detection of 79.2%, which could be further increased to 89.8% by additionally considering criteria relating to apixaban, rivaroxaban, and sodium picosulfate from the EU(7)-PIM list, and diuretics from STOPPFall.
The PPO prevalence was similar for both instruments used (63.7% for START and 62.8% for FORTA A), but considerably lower than for their combined use (82.7%), consistent with each tool also identifying unique PPOs. While FORTA-A detected all hypertension and diabetes PPOs, START detected no hypertension PPOs (0.0%) and very few diabetes PPOs (3.9%), but substantially more PPOs than FORTA-A for heart failure (100.0% vs 53.3%), depression (100.0% vs 0.0%), and atrial fibrillation (80.0% vs 30.3%).
Comparison to Literature
Numerous previous studies have used several of the PIM and PPO tools used in this study to examine the PIM and/or PPO prevalence in different settings. According to a recent review of PIM prevalence studies [6], the proportion of study participants affected by PIMs was 44.3% for FORTA (vs. 76.5% in this study) and ranged from 26.7% to 67.3% for STOPP (vs. 65.9% in this study), from 37.5% to 90.6% for EU(7)-PIM (vs. 61.9% in this study) and from 13.7% to 68.5% for PRISCUS (vs. 12.8% in this study). Campbell et al. (2010) found that 10.8% of a sample of African American adults aged ≥ 70 years were exposed to at least one drug with strong anticholinergic properties (vs. 6.6% in this study) [29,30]. The prevalence of PIMs according to STOPPFall was 85.4% in one study of hospitalized patients (vs. 36.3% in this study). According to the same review [6], the proportions of study participants affected by PPOs ranged from 19.8% to 64.2% for START (vs. 63.7% in this study). Compared to these data, this study of patients in need of care found the PIM prevalence to be at the high end for FORTA and STOPP/START, in the middle for EU(7)-PIM, and at the low end for PRISCUS, German-ACB and STOPPFall. This may reflect that PRISCUS is a German development, was published in 2010, and contributed to the EU(7)-PIM list, while FORTA is a more recent development, and START/STOPP is less well known in the German setting. The discrepancy in the results for STOPPFall, however, is explained by differing measurement methods. While Damoiseaux-Volman et al. (2022) considered any use of STOPPFall medications as PIMs, we considered them as PIMs only if their users also had risk factors for falls specified in the STOPPFall deprescribing tool [31].
In contrast to prevalence studies using one tool, comparisons of two or more PIM or PPO tools in the same study population are much less common. In a Norwegian population of geriatric wards of people aged 65 or older taking one or more medications, the PIM prevalence was 62.4-69.2% for EU(7)-PIM, which is comparable to our findings (61.9%) [32]. In a Kuwaiti population of primary care patients aged 65 years or older, the PIM prevalence was lower for FORTA (44.3%) than for STOPP (55.7%), which is in contrast to our findings (76.5% vs 65.9%, respectively) [33]. In a German population of 3189 subjects, the PIM prevalence was highest for EU(7)-PIM (70.1%), followed by FORTA (55.9%) and PRISCUS (24.7%), whereas in this study, FORTA-C/D detected more PIMs (76.5%) than STOPP (65.9%) [34]. These findings highlight that the study population may not only influence the prevalence of polypharmacy, but also the relative performance of different instruments.
Strengths and Limitations
To our knowledge, this is the first study to examine the sensitivity of PIM and PPO detection considering PIM and PPO instruments alone and in combination, which we considered most relevant to the German setting. Our analysis sheds light on the prevalence of PIMs and PPOs in a vulnerable population in need of care, which is often underrepresented in clinical research. We were able to collect a comprehensive data set, which enabled us to apply the vast majority of items included in each tool. However, a small number of items (23 items from the START/STOPP tool) could not be applied due to missing data, implying that the detected prevalence may be an underestimate. The main limitations of this study are its relatively small sample size and the potential selection bias resulting from convenience sampling. Nevertheless, study participants were included from a variety of settings, and our sample included study participants irrespective of their physical or mental health, or their cognitive abilities.
Implications for Clinical Practice and Research
Our findings demonstrate a very high prevalence of PIMs and PPOs among this vulnerable sample of patients in need of care, with the vast majority of study participants affected by PIM, PPO, or both. These findings alone reinforce the need to regularly and comprehensively review all medications these patients are taking. Our findings suggest that using a single tool may leave a substantial number of PIMs and PPOs undetected, but that by combining FORTA-C/D and STOPP, as well as FORTA-A and START, into comprehensive tools, the proportion of detectable PIMs and PPOs can be considerably increased. Nevertheless, it is clear that any combination of PIM tools applied without computerized support may not comprehensively detect all medication risks associated with polypharmacy, given the vast number of possible drug-drug and drug-disease interactions.
It is also clear that detection of PIMs and PPOs alone does not suffice to improve patient outcomes, which additionally requires clinical judgment to identify actually inappropriate medication, as well as effective interventions to overcome barriers to PIM deprescribing. This study has examined how the sensitivity of PIM and PPO detection can be enhanced in older people in need of care by combining prominent PIM and PPO instruments, but our findings should be confirmed in other settings. In addition, our findings should be supplemented by research characterizing the extent to which PIM and PPO tools identify medication that actually requires medication changes (i.e., deprescribing or initiation of drugs), which interventions may overcome pertinent barriers to which medication changes, as well as the effects of such changes on outcomes that matter to patients.
Conclusions
Instruments which explicitly highlight common and clinically relevant potentially inappropriate medication (PIM) and/or potential prescribing omissions (PPOs) may support clinicians in identifying targets for medicines optimization among older people with polypharmacy. However, this study shows that PIM and PPO instruments differ considerably, both in terms of the quantity and nature of medication related problems they detect, and that it therefore matters which tool is used in which setting. Our study also demonstrates that using a single existing tool may not have sufficient sensitivity to detect PIMs and PPOs, and that combining distinct items from two or more instruments may considerably increase the sensitivity. Further research is required to optimize the composition of PIM and PPO screening instruments in terms of both the sensitivity and specificity in different settings. | 2023-02-01T16:13:46.422Z | 2023-01-28T00:00:00.000 | {
"year": 2023,
"sha1": "d96e91f891884181f85e115eb68b097cf9edde66",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/20/3/2327/pdf?version=1674902813",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b92d15c89da32f6ff5fbc6d466d941f1edaa3476",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119453151 | pes2o/s2orc | v3-fos-license | The second homotopy group in terms of colorings of locally finite models and new results on asphericity
We describe the second homotopy group of any CW-complex $K$ by analyzing the universal cover of a locally finite model of $K$ using the notion of $G$-coloring of a partially ordered set. As applications we prove a generalization of the Hurewicz theorem, which relates the homotopy and homology of not necessarily simply-connected complexes, and derive new results on asphericity for two-dimensional complexes and group presentations.
Introduction
Every CW-complex has a locally finite model. This is a classical result of McCord [9, Theorem 3], who considered for any regular CW-complex $K$ the space $\mathcal{X}(K)$ of cells of $K$ with some specific topology, and defined a weak homotopy equivalence $\mu : K \to \mathcal{X}(K)$. The space $\mathcal{X}(K)$ can be viewed as a poset. The interaction between the topological and combinatorial nature of $\mathcal{X}(K)$ allows one to develop new techniques to attack problems of homotopy theory of CW-complexes (see [1]).
In this paper we use locally finite models to describe the second homotopy group of CW-complexes. The notion of $G$-coloring of a poset allows us to classify all the regular coverings of the space $\mathcal{X}(K)$. In particular, we obtain a description of the universal cover $E$ of $\mathcal{X}(K)$ which is used to find an expression for the boundary map of a chain complex whose homology coincides with the singular homology of $E$. By the Hurewicz theorem and McCord's result, $\pi_2(K) = H_2(E)$. One of the applications of our description is the following result, which reduces to the classical Hurewicz theorem when the complex is simply-connected.

Theorem 2.3. Let $K$ be a connected regular CW-complex of dimension 2 and let $K'$ be its barycentric subdivision. Consider the full (one-dimensional) subcomplex $L$ of $K'$ spanned by the barycenters of the 1-cells and 2-cells. If the inclusion of each component of $L$ in $K'$ induces the trivial morphism between the fundamental groups, then $\pi_2(K) = \mathbb{Z}[\pi_1(K)] \otimes H_2(K)$.
We also obtain results on asphericity of 2-complexes and group presentations. Recall that a connected 2-complex $K$ is aspherical if $\pi_2(K) = 0$.

Theorem 3.1. Let $K$ be a 2-dimensional regular CW-complex and let $K'$ be its barycentric subdivision. Consider the full (one-dimensional) subcomplex $L \subseteq K'$ spanned by the barycenters of the 2-cells of $K$ and the barycenters of the 1-cells which are faces of exactly two 2-cells. Suppose that for every connected component $M$ of $L$, $i_*(\pi_1(M)) \leq \pi_1(K')$ contains an element of infinite order, where $i_* : \pi_1(M) \to \pi_1(K')$ is the map induced by the inclusion. Then $K$ is aspherical.
From this result one can deduce, for example, the well-known fact that all compact surfaces different from $S^2$ and $\mathbb{R}P^2$ are aspherical.
To put our results in perspective, one should recall that it is an open problem, originally posed by Whitehead, whether any subcomplex of an aspherical 2-dimensional CW-complex is itself aspherical. We refer the reader to [5,7,13] for more details on Whitehead's asphericity question.
In Theorem 3.5 we prove a result on asphericity of group presentations which resembles the homological description of $\pi_2$ by Reidemeister chains (see [5,10]).
Colorings and a description of the second homotopy group
A poset $X$ will be identified with a topological space with the same underlying set as $X$ and topology generated by the basis $\{U_x\}_{x \in X}$, where $U_x = \{y \in X \mid y \leq x\}$. If $X$ and $Y$ are posets, it is easy to see that a map $X \to Y$ is continuous if and only if it is order preserving. We denote by $\mathcal{K}(X)$ the simplicial complex whose simplices are the finite chains of the poset $X$ (i.e. the classifying space of $X$). A result of McCord [9, Theorem 2] shows that there is a natural weak homotopy equivalence $\mathcal{K}(X) \to X$. Given a regular CW-complex $K$, its face poset $\mathcal{X}(K)$ is the poset of cells of $K$ ordered by the face relation. Note that the classifying space of the poset $\mathcal{X}(K)$ is the barycentric subdivision of $K$ and therefore, there is a weak homotopy equivalence $\mu : K \to \mathcal{X}(K)$. In particular the homology groups of the poset $\mathcal{X}(K)$ coincide with those of $K$. If $X = \mathcal{X}(K)$ is the face poset of a regular CW-complex, we can compute its homology by computing the cellular homology of $K$ in the standard way (see [8, IX, §7]): for each $n \geq 0$ let $C_n(X)$ be the free $\mathbb{Z}$-module generated by the points $x \in X$ of height $h(x) = n$. Recall that $h(x)$ is one less than the maximum number of points in a chain with maximum $x$. Here $y \prec x$ means that $y < x$ and there is no $y < z < x$. Choose for each edge $(y, x)$ in the Hasse diagram of $X$ a number $[x : y] \in \{1, -1\}$ in such a way that for every $x \in X$ of height 1, $\sum_{y \prec x} [x : y] = 0$, and for every $x$ of height at least 2 and every $z < x$ with $h(z) = h(x) - 2$, $\sum_{z \prec y \prec x} [x : y][y : z] = 0$. The differential $d : C_n(X) \to C_{n-1}(X)$ is given by $d(x) = \sum_{y \prec x} [x : y]\, y$ on each basic element $x$. The homology of this chain complex is then the singular homology of the poset $X$ (viewed as a topological space). The number $[x : y]$ is the incidence of the cell $y$ in the cell $x$ of $K$ for certain orientations.
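To make the construction concrete, the following sketch (a toy example chosen here, not part of the argument) builds this chain complex for the face poset of a 2-simplex, with incidence numbers assigned by hand, and checks that $d \circ d = 0$:

```python
# Toy check of the chain complex described above, for the face poset of a
# 2-simplex (triangle): incidence numbers chosen by hand so that d∘d = 0.
import numpy as np

# height 0: a, b, c; height 1: ab, ac, bc; height 2: abc
incidence = {("ab", "a"): -1, ("ab", "b"): 1,
             ("ac", "a"): -1, ("ac", "c"): 1,
             ("bc", "b"): -1, ("bc", "c"): 1,
             ("abc", "ab"): 1, ("abc", "ac"): -1, ("abc", "bc"): 1}

C0, C1, C2 = ["a", "b", "c"], ["ab", "ac", "bc"], ["abc"]
d1 = np.array([[incidence.get((x, y), 0) for x in C1] for y in C0])
d2 = np.array([[incidence.get((x, y), 0) for x in C2] for y in C1])

assert (d1 @ d2 == 0).all()          # d∘d = 0
rank = np.linalg.matrix_rank
print("H_1 rank:", d1.shape[1] - rank(d1) - rank(d2))  # 0: the disk has H_1 = 0
```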
Suppose $p : E \to X = \mathcal{X}(K)$ is a topological covering ($\mathcal{X}(K)$ considered as a topological space). Using that $p$ is a local homeomorphism it is easy to prove that $E$ is also the space associated to a poset. Moreover, for each $x \in E$, $p|_{U_x} : U_x \to U_{p(x)}$ is a homeomorphism. In particular $E$ is the face poset of a regular CW-complex ([4, Proposition 4.7.23]). If $y \prec x$ in $E$, then $p(y) \prec p(x)$ in $X$. Given a choice of the incidences in $X$, we define $[x : y] = [p(x) : p(y)]$, which is a coherent choice for the incidences in $E$. Let $p : E \to \mathcal{X}(K)$ be a regular covering and let $G$ be its group of deck (covering) transformations. Since $G$ acts freely on $E$ and transitively on each fiber, $C_n(E) = \mathbb{Z}G \otimes C_n(X)$ is a free $\mathbb{Z}G$-module with basis $\{x \in X \mid h(x) = n\}$. The differential $d : C_n(E) \to C_{n-1}(E)$ is a homomorphism of $\mathbb{Z}G$-modules.
In [3] we characterized the regular coverings of locally finite posets (i.e. posets with finite $U_x$ for each $x$) in terms of colorings. We recall this result as it will be required in the description of the universal cover.
Let $X$ be a locally finite poset. We denote by $E(X)$ the set of edges in the Hasse diagram of $X$. An edge-path in $X$ is a sequence $\xi = (x_0, x_1)(x_1, x_2) \ldots (x_{k-1}, x_k)$ of edges, or opposites of edges. The set of closed edge-paths from a point $x_0 \in X$, with certain identifications and the operation given by concatenation, is a group $H(X, x_0)$ naturally isomorphic to $\pi_1(X, x_0)$ (see [2] and [3]). This construction resembles the definition of the edge-path group of a simplicial complex. Given a group $G$, a $G$-coloring of a locally finite poset $X$ is a map $c$ which assigns a color $c(y, x) \in G$ to each edge $(y, x) \in E(X)$.
The coloring $c$ is admissible if for each pair of chains $x = x_0 \prec x_1 \prec \ldots \prec x_k = y$ and $x = x'_0 \prec x'_1 \prec \ldots \prec x'_l = y$ with same origin and same end, the weights of the edge-paths induced by the chains coincide. An admissible $G$-coloring $c$ induces a homomorphism $W_c : H(X, x_0) \to G$ which maps the class of a closed edge-path to its weight. The coloring $c$ is connected if $W_c$ is an epimorphism.
Two $G$-colorings $c$ and $c'$ of $X$ are equivalent if there exist an automorphism $\varphi : G \to G$ and an element $g \in G$ relating the two colorings through $\varphi$ and conjugation by $g$.

Theorem 2.1. ([3, Corollary 3.5]) Let $X$ be a connected locally finite poset and let $G$ be a group. There exists a correspondence between the set of equivalence classes of regular coverings $p : E \to X$ of $X$ with $\mathrm{Deck}(p)$ isomorphic to $G$ and the set of equivalence classes of admissible connected $G$-colorings of $X$.
Here $\mathrm{Deck}(p)$ denotes the group of deck transformations of $p$. The covering associated to an admissible connected $G$-coloring $c$ is the covering that corresponds to the subgroup $\ker(W_c)$ of $H(X, x_0) \cong \pi_1(X, x_0)$. Theorem 3.6 of [3] tells us explicitly how to construct the covering $E(c)$ corresponding to $c$. It is the poset $E(c) = \{(x, g) \mid x \in X, g \in G\}$ with the relations $(x, g) \prec (y, g\,c(x, y))$ whenever $x \prec y$ in $X$, the covering map being the projection onto the first coordinate. The group $G$ acts on $E(c)$ by left multiplication in the second coordinate. Now, let $K$ be a regular CW-complex and suppose $c$ is any $G$-coloring of $X = \mathcal{X}(K)$ which corresponds to the universal cover, that is, $c$ is an admissible and connected $G$-coloring such that $E = E(c)$ is simply-connected or, equivalently, $W_c : H(X, x_0) \to G$ is an isomorphism. The second homotopy group of $K$ is $\pi_2(K) = \pi_2(\mathcal{X}(K)) = H_2(E)$. The homology of $E$ can be computed using the chain complex described above. In the case that $K$ is two-dimensional, this computation is easier: $E$ is a poset of height two and $C_3(E) = 0$. A chain $\alpha \in C_2(E) = \mathbb{Z}G \otimes C_2(X)$ is a finite sum of the form $\alpha = \sum_{h(x)=2} \sum_{g \in G} n^x_g\, gx$. Therefore $\pi_2(K) = \ker(d)$ has the following description: $\alpha = \sum_{h(x)=2} \sum_{g \in G} n^x_g\, gx$ lies in $\pi_2(K)$ if and only if $\sum_{x \succ w} [x : w]\, n^x_{g\,c(w,x)} = 0$ for every $w$ of height 1 and every $g \in G$. On the other hand, Theorem 4.4 and Remark 4.6 in [3] provide a concrete way to describe a coloring $\hat{c}$ which corresponds to the universal cover. Let $X$ be a locally finite poset and let $D$ be a subdiagram (= subgraph) of the Hasse diagram of $X$. Suppose that the poset which corresponds to $D$ is simply-connected and that $D$ contains all the points of $X$ (for instance, a spanning tree). Let $G$ be the group generated by the edges $e \in E(X)$ which are not in $D$, with the following relations: for each pair of chains $x = x_0 \prec x_1 \prec \ldots \prec x_k = y$ and $x = x'_0 \prec x'_1 \prec \ldots \prec x'_l = y$ with same origin and same end, we put a relation equating the products of the generators corresponding to the edges of the two chains, where an edge lying in $D$ is read as the identity. According to Theorem 4.4 in [3], $G$ is isomorphic to $\pi_1(X)$. Moreover, let $\hat{c}$ be the $G$-coloring defined by $\hat{c}(e) = \bar{e}$, the class of $e$ in $G$, for each $e \in E(X)$; if $e \in D$, then $\bar{e} = 1 \in G$. Then $W_{\hat{c}} : H(X, x_0) \to G$ is an isomorphism, so $\hat{c}$ corresponds to the universal cover of $X$. This coloring can be used in the formula above to compute $\pi_2(K)$.
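The construction of $E(c)$ can also be made concrete. The sketch below instantiates it for a toy example chosen here (not one from the text): the face poset of a circle, with two vertices and two edges, carrying a $\mathbb{Z}$-coloring whose associated covering is the universal cover.

```python
# Sketch of the covering E(c) = {(x, g)} for a toy poset: the face poset of
# a circle (two vertices a, b; two edges p, q). Coloring the cover (a, p)
# with 1 in G = Z and the rest with 0 gives the universal cover, an
# infinite zigzag line.
from itertools import product

covers = [("a", "p"), ("b", "p"), ("a", "q"), ("b", "q")]  # x ≺ y pairs
color = {("a", "p"): 1, ("b", "p"): 0, ("a", "q"): 0, ("b", "q"): 0}

def covering_relations(g_values):
    """Relations (x, g) ≺ (y, g + c(x, y)) over a finite window of G = Z."""
    rels = []
    for (x, y), g in product(covers, g_values):
        rels.append(((x, g), (y, g + color[(x, y)])))
    return rels

for rel in covering_relations(range(2)):
    print(rel)
```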
Example 2.2. Consider the regular CW-complex $K$ in Figure 1. It has three 0-cells, $a, b, c$, six 1-cells, $q, r, s, t, u, v$, and four 2-cells, $w, x, y, z$. The Hasse diagram of $\mathcal{X}(K)$ appears in Figure 2. Let $D$ be the subdiagram of the Hasse diagram given by the solid edges. It is easy to see that the space corresponding to $D$ is simply connected because it is a contractible finite space (= dismantlable poset) [11, Section 4]. The group $G$ generated by the dotted edges $e_1, e_2, e_3, e_4, e_5$ with relations $e_4 e_1 = 1$, $e_1 = e_5$, $e_2 = e_3$, $e_2 = e_5$, $e_3 = e_4$ is then isomorphic to the fundamental group of $K$. Hence $\pi_1(K) = \mathbb{Z}_2$.
For each $h \in G$ and each point of $\mathcal{X}(K)$ of height 1 we have one equation. These twelve equations describe $\pi_2(K)$. Denote by $\gamma$ the generator of $G$. Then $\bar{e}_1 = \bar{e}_2 = \gamma$. We can choose the incidences $[p_0 : p_1]$ according to the orientations of the cells in Figure 1. Therefore $\pi_2(K) = \{n(w - \gamma w - x + \gamma x + y - \gamma y - z + \gamma z) \mid n \in \mathbb{Z}\}$ is isomorphic to $\mathbb{Z}$. In fact $K$ is just the real projective plane, so the results are not surprising. However, the example shows how to carry out the computation of $\pi_2$ for arbitrary regular CW-complexes.

Theorem 2.3. Let $K$ be a connected regular CW-complex of dimension 2 and let $K'$ be its barycentric subdivision. Consider the full (one-dimensional) subcomplex $L$ of $K'$ spanned by the barycenters of the 1-cells and 2-cells. If the inclusion of each component of $L$ in $K'$ induces the trivial morphism between the fundamental groups, then $\pi_2(K) = \mathbb{Z}[\pi_1(K)] \otimes H_2(K)$.
Proof. Let $D$ be the subgraph of the Hasse diagram of $\mathcal{X}(K)$ induced by the points of height 1 and 2. Then $L$ is the classifying space $\mathcal{K}(D)$ of the poset associated to $D$ and the weak homotopy equivalence $\mu : K' \to \mathcal{X}(K)$ restricts to a weak equivalence $L \to D$. Moreover, for each component $L_i$ of $L$, $\mu|_{L_i}$ is a weak equivalence between $L_i$ and a component $D_i$ of $D$. If $L_i \hookrightarrow K'$ induces the trivial map in $\pi_1$, then so does the inclusion $D_i \hookrightarrow \mathcal{X}(K)$. By [3, Remark 4.2] each admissible $G$-coloring of $\mathcal{X}(K)$ is equivalent to another which is trivial in the edges of $D$. In particular, if $c$ is a $\pi_1(K)$-coloring which corresponds to the universal cover, then it is equivalent to a coloring $c'$ such that $c'(x, y) = 1$ for each $(x, y) \in D$. Then $\widetilde{X} = E(c')$ is also the universal cover of $\mathcal{X}(K)$ and its differential is $\mathrm{id}_{\mathbb{Z}[\pi_1(K)]} \otimes\, \delta$, where $\delta : C_2(\mathcal{X}(K)) \to C_1(\mathcal{X}(K))$ is the boundary map of the chain complex associated to $\mathcal{X}(K)$. Since $\mathbb{Z}[\pi_1(K)]$ is a free $\mathbb{Z}$-module, $\pi_2(K) = H_2(\widetilde{X}) = \mathbb{Z}[\pi_1(K)] \otimes H_2(K)$ by the Künneth formula.
When $K$ is simply-connected, the previous result reduces to the Hurewicz Theorem for dimension 2. Theorem 2.3 can be restated as follows: if every closed edge-path of $K'$ containing no vertex of $K$ is equivalent to the trivial edge-path, then $\pi_2(K) = \mathbb{Z}[\pi_1(K)] \otimes H_2(K)$.
There is an obvious generalization of Theorem 2.3 to connected regular CW-complexes with no restriction on the dimension.

Corollary 2.4. Let $K$ be a connected regular CW-complex. If every closed edge-path of $K'$ containing only vertices which are barycenters of 1-, 2- or 3-dimensional simplices is equivalent to the trivial edge-path, then $\pi_2(K) = \mathbb{Z}[\pi_1(K)] \otimes H_2(K)$.
The following is another application of our methods (compare with [12]).
Proof. Since each CW-complex is homotopy equivalent to a simplicial complex, it suffices to prove the result for face posets $X$ and $Y$ of regular CW-complexes. Here, $X \vee Y$ denotes the space whose Hasse diagram is obtained from the diagrams of $X$ and of $Y$ by identifying a minimal element of each. Let $c$ be a coloring of $X \vee Y$ corresponding to the universal cover. Then $c$ is a $G$-coloring with $G \simeq \pi_1(X \vee Y) \simeq \pi_1(X)$. Since $Y$ is simply-connected, there is an equivalent $G$-coloring $c'$ which is trivial in $Y$ (once again by Lemma 4.1 or Remark 4.2 in [3]). The restriction of $c'$ to $X$ is an admissible connected $G$-coloring. Moreover, if a closed edge-path in $X$ is in $\ker(W_{c'|_X})$, then it is in $\ker(W_{c'}) = 0$. Thus, it is trivial in $H(X \vee Y)$ and then in $H(X)$, since the inclusion $X \hookrightarrow X \vee Y$ induces an isomorphism between the fundamental groups. Therefore, $c'|_X$ corresponds to the universal cover of $X$.
Let $\widetilde{X \vee Y} = E(c')$ be the universal cover of $X \vee Y$. Note that $C_n(\widetilde{X \vee Y}) = C_n(\widetilde{X}) \oplus (\mathbb{Z}G \otimes C_n(Y))$ for $n = 1, 2$. Since $c'|_X$ corresponds to the universal cover of $X$ and $c'|_Y$ is trivial, the differential on the first summand is the differential in the chain complex associated to the universal cover of $X$, and on the second summand it is $\mathrm{id}_{\mathbb{Z}G} \otimes\, d_Y$, where $d_Y : C_2(Y) \to C_1(Y)$ is the differential in the complex associated to $Y$. By the Künneth formula, $\pi_2(X \vee Y) = \pi_2(X) \oplus \mathbb{Z}[\pi_1(X)] \otimes H_2(Y)$.
Results on asphericity
We use the methods developed above to study asphericity of two-dimensional complexes and group presentations.

Proof (of Theorem 3.1). Let $c$ be a $G$-coloring of $\mathcal{X}(K)$ which corresponds to the universal cover. We will use the equations describing $\pi_2(K)$ to show that if $\alpha = \sum_{h(x)=2} \sum_{g \in G} n^x_g\, gx \in \pi_2(K)$, then $n^x_g = 0$ for every $g \in G$ and every $x$ with $h(x) = 2$. Let $x = \tau$ be a maximal element of $\mathcal{X}(K)$. Then $W = W_c : H(\mathcal{X}(K), x) \to G$ is an isomorphism. Let $Y$ be the subspace of $\mathcal{X}(K)$ consisting of the 2-cells and the 1-cells which are faces of exactly two 2-cells. Note that $L = \mathcal{K}(Y)$, so there is a weak homotopy equivalence $L \to Y$. Since $i_*(\pi_1(L, b(\tau)))$ contains an element of infinite order and $W$ is an isomorphism, there is a closed edge-path $\xi$ at $x$ in $Y$ of weight $w(\xi) \in G$ of infinite order. We may assume that $\xi$ is an edge-path of minimum length satisfying this property. Suppose $\xi$ is the edge-path $(x_0, w_0)(w_0, x_1)(x_1, w_1) \ldots (w_{k-1}, x_k)$, where $x_0 = x_k = x$, the $x_i$ are 2-cells and the $w_i$ are 1-cells. By the minimality of $\xi$, $x_{i+1} \neq x_i$ for every $0 \leq i < k$. Since $x_i$ and $x_{i+1}$ are the unique two elements covering $w_i$, the equation corresponding to $w_i$ and an element $g \in G$ is $[x_i : w_i]\, n^{x_i}_{g\,c(w_i, x_i)} + [x_{i+1} : w_i]\, n^{x_{i+1}}_{g\,c(w_i, x_{i+1})} = 0$; in particular $n^{x_i}_{g\,c(w_i,x_i)} \neq 0$ implies $n^{x_{i+1}}_{g\,c(w_i,x_{i+1})} \neq 0$. Suppose then that $n^x_h \neq 0$ for some $h \in G$. Applying the previous assertion $k$ times we obtain that $n^x_{h\,w(\xi)} \neq 0$. Repeating this reasoning we deduce that $n^x_{h\,w(\xi)^l} \neq 0$ for every $l \geq 0$. However, $w(\xi) \in G$ has infinite order and this contradicts the fact that only finitely many $n^z_g$ can be non-zero.

Note that from the previous result one deduces the well-known fact that all compact surfaces different from $S^2$ and $\mathbb{R}P^2$ are aspherical. Any triangulation $K$ of such surfaces satisfies the hypotheses of the theorem since every edge of $K$ is a face of exactly two 2-simplices and the links of the vertices are connected.

Remark 3.3. It is well-known that the fundamental group of any 2-dimensional aspherical complex is torsion-free (see [6, Proposition 2.45]). Theorem 3.1 says that if the 2-complex $K$ has a torsion-free fundamental group and the maps $i_* : \pi_1(M) \to \pi_1(K')$ are non-trivial, then $K$ is aspherical.
We derive from Theorem 3.1 a result on asphericity of group presentations. This result resembles in some sense the homological description of $\pi_2$ using Reidemeister chains [10, Thm 3.8] (see also [5]). Given a group presentation $P$, let $K_P$ be the usual two-dimensional CW-complex associated to the presentation, which has one 0-cell, one 1-cell for each generator and one 2-cell for each relator. The presentation $P$ is called aspherical if $K_P$ is aspherical. In order to study asphericity of $P$, we will construct a digraph $D_P$ associated to $P$ together with a $G$-coloring. First note that the notion of a $G$-coloring naturally extends to directed graphs. A $G$-coloring of a digraph $D$ is a labeling of the edges of $D$ by elements in $G$. We allow loops and parallel edges which could have different colors. The color of the inverse of an edge $e$ is the inverse $c(e)^{-1}$ of the color of $e$. A $G$-coloring $c$ induces a weight map $w_c$: if $\alpha = e_0 e_1 \ldots e_n$ is a cycle in the underlying undirected graph of $D$ (for each $i$, $e_i$ is an edge of $D$ or $e_i^{-1}$ is an edge of $D$), then $w_c(\alpha) = c(e_0) c(e_1) \ldots c(e_n)$. Let $P = \langle a_1, a_2, \ldots, a_k \mid r_1, r_2, \ldots, r_s \rangle$ be a presentation of a group $G$. The vertices of the directed graph $D_P$ are the letters $a_i$ which appear in total exactly twice in the words $r_1, r_2, \ldots, r_s$. So, $a_i$ appears either with exponent 2 or $-2$ in one of the relators and does not appear in any other relator, or it appears twice (in the same relator or in two different relators) with exponent 1 or $-1$ each time. Each vertex of $D_P$ will be the source of exactly two oriented edges and the target of two directed edges. Let $r = r_j = a_{i_0}^{\epsilon_0} a_{i_1}^{\epsilon_1} \ldots a_{i_{t-1}}^{\epsilon_{t-1}}$ be one of the relators of $P$, $\epsilon_l = \pm 1$ for every $l \in \mathbb{Z}_t$. We consider $r$ as a cyclic word, so for example $a_{i_1}$ comes after $a_{i_0}$ and $a_{i_0}$ comes after $a_{i_{t-1}}$. Suppose $a_{i_l}$ is a vertex of $D_P$. We consider the first letter $a_{i_{l+m}}$ coming after $a_{i_l}$ which is a vertex of $D_P$ (i.e. the minimum $m > 0$ such that $a_{i_{l+m}} \in D_P$). It could be a letter different from $a_{i_l}$, or the same letter if $a_{i_l}$ appears twice in $r$ or if it appears once and no other $a_{i_s}$ is a vertex of $D_P$. Then $(a_{i_l}, a_{i_{l+m}})$ is a directed edge of $D_P$ and the color corresponding to that edge is the subword $a_{i_l}^{(\epsilon_l - 1)/2}\, a_{i_{l+1}}^{\epsilon_{l+1}} \cdots a_{i_{l+m-1}}^{\epsilon_{l+m-1}}\, a_{i_{l+m}}^{(\epsilon_{l+m}+1)/2}$ of $r$, read as an element of $G$. The next example illustrates the situation.

Figure 4. The digraph $D_P$ associated to $P$.
Theorem 3.5. Let $P$ be a presentation of a group $G$. Suppose that every relator in $P$ contains a letter which is a vertex of $D_P$. If each component of $D_P$ contains a cycle whose weight has infinite order in $G$, then $P$ is aspherical.

Proof. We subdivide $K_P$ barycentrically to obtain a regular CW-complex $K$ as usual. Each 1-cell corresponding to a generator $a$ in $P$ is subdivided in two 1-cells $e^a_0$ and $e^a_1$ sharing the unique vertex $v$ of $K_P$ and a new vertex $v_a$. The 2-cell $f_r$ corresponding to a relator $r$ of $P$ is subdivided in $2m$ 2-cells, where $m$ is the number of letters in $r$, adding a new 0-cell $v_r$ in the interior of the original 2-cell. Let $L$ be the 1-dimensional subcomplex of $K'$ defined as in the statement of Theorem 3.1. The vertices of $L$ are the barycenters of the 2-cells of $K$ and the barycenters of the 1-cells which are faces of exactly two 2-cells. In the interior of the cell $f_r$ there are exactly $4m$ vertices of $L$ (the barycenters of the $2m$ 2-cells and the barycenters of the $2m$ edges from $v_r$ to $v$ and to each $v_a$). This 1-dimensional complex of $4m$ vertices is a cycle that we denote $C_r$. The remaining vertices of $L$ are the barycenters $b(e^a_0)$ and $b(e^a_1)$ for each letter $a$ which is a vertex of $D_P$. We show that the hypotheses of the theorem ensure that the hypotheses of Theorem 3.1 are fulfilled. Since each relator contains a letter which is a vertex of $D_P$, the components of $D_P$ are in bijection with the components of $L$. Suppose $a$ and $c$ are vertices of $D_P$ and that there is an edge $(a, c) \in D_P$ (or $(c, a)$). Then, there is a relator $r$ of $P$ such that $a$ and $c$ are letters of $r$. Since $a, c \in D_P$, $b(e^a_1)$ and $b(e^c_1)$ are vertices of $L$ and they lie in the 2-cell $f_r$ of $K_P$ corresponding to $r$. Moreover, there is an edge in $L$ from $b(e^a_1)$ to the cycle $C_r$ and an edge from $b(e^c_1)$ to $C_r$. Therefore there is an edge-path in $L$ from $b(e^a_1)$ to $b(e^c_1)$ entirely contained in $f_r$ (see Figure 5). A cycle $\alpha$ in $D_P$ with base point $a$ has then an associated closed edge-path $\xi$ in $L$ at $b(e^a_1)$. We will show that the order of $\xi$ in the edge-path group $E(K', b(e^a_1))$ is infinite or, equivalently, that the order of $\bar{\xi} = (v, b(e^a_1))\, \xi\, (b(e^a_1), v) \in E(K', v)$ is infinite. The edge-path $\bar{\xi}'$ obtained from $\bar{\xi}$ by inserting the edge-paths $(b(e^l_1), v)(v, b(e^l_1))$ at each vertex $b(e^l_1)$ ($l$ a letter in $\alpha$) is equivalent to $\bar{\xi}$. Suppose $a = l_0, l_1, \ldots, l_k = a$ are the vertices of $\alpha$. The edge-path $\bar{\xi}'$ is a composition of closed edge-paths $\gamma_i$ in $K'$ at $v$, each of them contained in a 2-cell $f_{r_i}$. The edge-path $\gamma_i$, as an element of $\pi_1(K, v)$, is homotopic to a loop contained in the boundary of $f_{r_i}$ which is, as an element of $G$, the color of the edge $(l_i, l_{i+1})$ in $\alpha$. Thus, $\bar{\xi}' \in \pi_1(K, v) \simeq G$ coincides with the weight of $\alpha$, and the first one has infinite order provided the second one does.
In Example 3.4 there is an edge from $c$ to $d$ with color $c^{-1}d$, an edge from $a$ to $d$ with color $a^{-1}b^{-1}d$ and an edge from $a$ to $c$ with color $b^2 c$. Therefore, there is a cycle with base point $c$ whose weight is $c^{-1}d(a^{-1}b^{-1}d)^{-1}b^2c = c^{-1}bab^2c \in G$. It is easy to verify that this element has infinite order, since $a + 3b$ clearly has infinite order in the abelianization $G/[G, G]$. Since $D_P$ has a unique component and both relators of $P$ have at least one letter in $D_P$, Theorem 3.5 applies. This shows that $P$ is aspherical. | 2014-12-17T17:31:14.000Z | 2014-12-15T00:00:00.000 | {
"year": 2014,
"sha1": "fdc05f0672ab3675d1050bf49c29824b253b5c5f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fdc05f0672ab3675d1050bf49c29824b253b5c5f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
245787037 | pes2o/s2orc | v3-fos-license | Modelling Time Series Customer Preference Based on E-commerce Website
In research on customer preferences for products, sufficient customer data must first be collected before it can be analyzed to build an effective customer preference model. Even when a reasonable model can be obtained, customer preferences are dynamic and change over time. This requires a systematic analysis of time series data on customer preferences over different time periods in order to model customer preferences. This paper proposes a new time series customer preference modelling method, which can effectively process and analyze online customer review data from e-commerce websites, comprising opinion mining and a dynamic evolving neuro-fuzzy inference system (DENFIS) based on the chaos optimization algorithm (COA). Finally, a case study on a hair dryer is used to verify the effectiveness and rationality of the method. The comparison results show that the proposed COA based DENFIS approach performs better than K-means-based ANFIS, fuzzy c-means-based ANFIS, subtractive cluster-based ANFIS and DENFIS in terms of mean absolute percentage error.
INTRODUCTION
Research on customers' perception of products is of great significance, especially in product development, where it can provide strategic support for developing products that meet customers' needs. Due to volatile markets and diversified technological innovation, the preferences, needs and desires of customers always vary with respect to time. It is likely that what customers said in the past, or say now, will no longer be relevant by the time new products are launched in the future [1]. However, previous research has mostly focused on static customer preferences, under the assumption that past customer survey data can reflect customer behaviour at some future point in time.
In terms of customer surveys, interviews and questionnaires are the most basic ways to obtain customer preference data for analysis and study. However, because customer preferences continuously change, collecting time series data on customer preferences in this way is difficult, and the survey process incurs high economic and time costs. With the development of the Internet, customers' online reviews of products have become an important source of product evaluations, and they are relatively easy to obtain. Such reviews are rich in content and have high research value; they can therefore be used as data for dynamically analyzing customers' product preferences. In fact, the dynamic evolving neuro-fuzzy inference system (DENFIS) has previously been proposed for research on customer preference [2]. However, the determination of the parameters in DENFIS is difficult, which affects the accuracy of modelling time series customer preference. On this basis, this article proposes opinion mining and COA based DENFIS approaches to model time series customer preferences based on e-commerce websites. Online review data must first be collected for different time periods. The opinion mining method is then used to analyze the data of each time period and perform sentiment analysis, yielding sentiment scores for customer preferences. Based on the settings of the design attributes and the results of opinion mining, COA based DENFIS is adopted to model time series customer preferences.
RELATED WORKS
Researchers have done a great deal of work on processing online review data through opinion mining to obtain product attributes and customer preferences. In order to realize effective mining and extraction of sentence-level adaptive text, Lee [3] designed a supervised machine learning method that can accurately identify the needs of customers. Wang et al. [4] also conducted extensive research: to obtain product attributes, they developed customer-driven product design and, in the customer preference modelling stage, innovatively adopted a Bayesian linear regression method, which was verified to be highly effective. Chen et al. [5] designed an ontology-learning customer requirement representation system for the extraction of customer requirements; the ontology of the system has high semantic comprehensiveness and coverage. Zimmermann et al. [6] focused on the extraction of important hidden features in online review data and developed a framework for effective monitoring and recognition. Opinion mining and case-based reasoning were combined to achieve accurate identification of potential customer needs [7]. Zhang et al. [8] developed a new opinion mining and extraction algorithm that can be used for various analyses, such as feature-opinion relationships. Zhou et al. [9] developed a new effective model of customer preference that can also integrate customers' negative and positive opinions, so as to optimize and supplement the feature model. Kang and Zhou [10] developed a method that can extract the objective and subjective features of customers. Tuarob and Tucker [11] mined and extracted data in social media networks, automatically mining potential product features and user data, which has high research significance and value.
With the passage of time, customers' preferences for products also change greatly, showing a dynamic trend. Research on predicting future customer preferences is therefore very important, and there are many previous results. Shen et al. [12] developed a fuzzy trend analysis method that can effectively analyze the trend in the importance of customer perceptions over time in quality function deployment. Xie et al. [13] realized effective prediction of the future importance perceived by customers, using a double exponential smoothing technique. Wu et al. [14] proposed a new model, based on grey theory, which can effectively analyze the future importance and dynamics of customer preference from past, present and forecasted data. Huang et al. [15] predicted future importance based on an artificial immune system. Jiang et al. [16] adopted another method, a fuzzy time series approach, for predicting the future importance of customer preferences based on online customer reviews. DENFIS was proposed to model time series customer preferences based on online customer reviews [2]. However, the optimal parameters of DENFIS cannot easily be determined, which affects the accuracy of prediction.
PROPOSED METHODOLOGY
The proposed method comprises opinion mining and a COA based DENFIS approach to model time series customer preferences.
Opinion Mining From E-commerce Website
The sample products are first identified. Then, customer reviews of the sample products on the e-commerce website are obtained by web crawlers, divided into different periods, and put into separate Excel files. In this study, Semantria, a well-known text analysis software tool, was used for opinion mining of the online reviews. It provides text analysis through an Excel plug-in, extracts opinions along positive, neutral, and negative dimensions, and calculates the corresponding sentiment scores.
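Semantria is a commercial tool, so the following sketch substitutes an open-source analyzer (NLTK's VADER) purely to illustrate the step of turning a period's reviews into sentiment scores; the review texts are invented examples, not data from this study.

```python
# Semantria is commercial; as a rough open-source stand-in, NLTK's VADER
# analyzer returns a compound sentiment score in [-1, 1] per review.
# The reviews below are invented examples, not data from the study.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

reviews_period_1 = [
    "The dryer's performance is excellent and it heats up fast.",
    "Performance is weak, the airflow is disappointing.",
]
scores = [analyzer.polarity_scores(r)["compound"] for r in reviews_period_1]
print(scores, "mean:", sum(scores) / len(scores))
```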
3.2. Chaos Optimization Algorithm (COA)
The COA adopted in this paper can effectively determine the optimal parameter settings of DENFIS. The COA algorithm has many advantages; in particular, adding chaos to the optimization strategy can accelerate the search for the global optimal solution. Through a series of iterative calculations, the logistic model generates chaotic variables, as shown in (1):

$c_{n+1} = \mu c_n (1 - c_n), \quad \mu = 4, \; c_n \in (0, 1)$ (1)

In the formula, $c_n$ represents the nth iteration value of the chaos variable $c$.
The chaotic variable is transformed into the optimization variable by using the linear mapping formula:

$q_n = a + (b - a)\, c_n$ (2)

In the formula, $a$ and $b$ represent the lower and upper limits of the optimization variable $q$, respectively, and $q_n$ represents the optimization variable. During the iteration, the chaotic variable traverses $(0, 1)$ and the optimization variable traverses $(a, b)$.
Based on this, by evaluating the objective function at each $q_n$ and retaining the best value found, the optimal solution is obtained.
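A minimal sketch of this search loop is given below; the objective function and bounds are placeholders standing in for the DENFIS parameter tuning (e.g., the threshold Dthr), not the settings used in this paper.

```python
# Minimal chaos-search sketch per Eqs. (1)-(2): iterate the logistic map,
# map each chaotic value into [a, b], and keep the best objective value.
# The objective and bounds are placeholders for the DENFIS parameter search.
def chaos_search(objective, a, b, c0=0.31, iters=2000):
    c, best_q, best_f = c0, None, float("inf")
    for _ in range(iters):
        c = 4.0 * c * (1.0 - c)        # Eq. (1): logistic map, mu = 4
        q = a + (b - a) * c            # Eq. (2): map chaos into [a, b]
        f = objective(q)
        if f < best_f:
            best_q, best_f = q, f
    return best_q, best_f

# e.g. tuning a single DENFIS threshold Dthr in [0.05, 0.5]
best = chaos_search(lambda q: (q - 0.18) ** 2, 0.05, 0.5)
print(best)
```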
3.3. DENFIS Approach
The DENFIS method adopts the evolving clustering method (ECM) to cluster the input data,
and then relies on the clustering centers to establish the antecedents of new fuzzy rules. Based on the threshold Dthr, the number of clusters can be determined. Dthr effectively controls the maximum distance between a cluster center and its data points, and thus becomes the constraint used to update the cluster radius. Cluster $C_1$ is initialized with its radius $Ru_1$ set to zero and its center $Cc_1$ set to the first data set. When a new data set $Z_i$, $i = 2, \ldots, n$, is presented, the distances $D_{ij}$ from $Z_i$ to the existing clusters $C_j$, $j = 1, 2, \ldots, m$, are calculated by (3):

$D_{ij} = \lVert Z_i - Cc_j \rVert$ (3)

In the formula, $Cc_j$ represents the cluster center of cluster $C_j$, $m$ represents the number of existing clusters, and $n$ represents the number of data sets. In addition, the minimum distance is calculated: $D_{i\min} = \min_j D_{ij}$.
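The one-pass clustering logic can be sketched as follows. This is a simplified rendering of ECM (the absorb-and-grow step is condensed), with synthetic data standing in for the review-derived inputs.

```python
# ECM sketch: one pass over the data, creating a new cluster whenever the
# nearest center is farther than allowed by the threshold Dthr (simplified;
# the full ECM selects the cluster minimizing D + Ru before absorbing).
import numpy as np

def ecm(data, dthr):
    centers, radii = [data[0].copy()], [0.0]
    for z in data[1:]:
        d = [np.linalg.norm(z - c) for c in centers]     # Eq. (3)
        j = int(np.argmin(d))                            # minimum distance
        if d[j] <= radii[j]:
            continue                                     # inside a cluster
        if d[j] + radii[j] > 2 * dthr:
            centers.append(z.copy()); radii.append(0.0)  # create new cluster
        else:                                            # absorb: grow cluster
            radii[j] = (d[j] + radii[j]) / 2
            centers[j] = z + (centers[j] - z) * radii[j] / d[j]
    return centers, radii

rng = np.random.default_rng(1)
data = rng.random((50, 7))  # 4 attributes + 3 lagged sentiment scores
centers, radii = ecm(data, dthr=0.6)
print(len(centers), "clusters")
```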
Each fuzzy rule of DENFIS takes the Takagi-Sugeno form shown in (7):

If $x_1$ is $MF_1$ and $x_2$ is $MF_2$ and $\ldots$ and $x_q$ is $MF_q$, then $y = f(x_1, x_2, \ldots, x_q)$ (7)

In the formula, $x_i$, $i = 1, 2, \ldots, q$, represents the ith input variable of DENFIS and the number of inputs is $q$. The inputs include the sentiment scores of the customer preference in the previous periods and the settings of the design attributes; $MF_i$, $i = 1, 2, \ldots, q$, are the corresponding membership functions of the antecedent. The data sets $(x_{1l}, x_{2l}, \ldots, x_{ql}, y_l)$, $l = 1, 2, \ldots, n$, are used to obtain the regression coefficients, where $n$ is the number of data pairs. Using (10) and (11), the initial inverse matrix $P$ and the coefficient vector $\beta$ can be derived based on the weighted least squares estimation, whereby $P$ and $\beta$ are initialized. When a new data set is entered, the coefficients $\beta_{l+1}$ and inverse matrix $P_{l+1}$ at the (l+1)th iteration are updated recursively:

$P_{l+1} = \frac{1}{\lambda}\left(P_l - \frac{P_l x_{l+1} x_{l+1}^T P_l}{\lambda + x_{l+1}^T P_l x_{l+1}}\right)$

$\beta_{l+1} = \beta_l + P_{l+1} x_{l+1} \left(y_{l+1} - x_{l+1}^T \beta_l\right)$

where $\lambda$ is a forgetting factor. Through the above learning, DENFIS' lth prediction output can be calculated as the weighted average of the individual rule outputs.
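A sketch of this recursive update, with illustrative dimensions and synthetic data rather than the study's actual inputs, is given below.

```python
# Recursive weighted least-squares update for one rule's consequent, as
# sketched above (lambda is a forgetting factor; values are illustrative).
import numpy as np

def rls_update(beta, P, x_new, y_new, lam=0.98):
    x = x_new.reshape(-1, 1)
    P = (P - P @ x @ x.T @ P / (lam + float(x.T @ P @ x))) / lam
    beta = beta + (P @ x).ravel() * (y_new - float(x.T @ beta.reshape(-1, 1)))
    return beta, P

q = 7                              # 4 attributes + 3 lagged sentiment scores
beta, P = np.zeros(q), np.eye(q) * 1000.0
rng = np.random.default_rng(2)
for _ in range(100):
    x_new, y_new = rng.random(q), rng.random()
    beta, P = rls_update(beta, P, x_new, y_new)
print(beta.round(3))
```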
CASE STUDY
The method proposed in this paper requires case analysis and verification. Time series customer preference modelling is carried out on customer reviews of hair dryer products, so as to systematically verify and compare the effectiveness and rationality of the proposed method. During the research, a variety of hair dryer products were compared, and ten typical ones were selected and numbered A ~ J. Online review data were acquired for four specified time periods. Amazon.com was selected as the online platform, as it has authoritative and comprehensive online customer review data on hair dryer products. The above data were collected and summarized in an Excel file. The collected data were then analyzed and processed, with opinion mining carried out using the Semantria Excel add-in, which is convenient and reliable. In the research process, the customer preference "performance", expressed as y, is used to illustrate the proposed COA based DENFIS method. The obtained sentiment scores of "performance" are shown in Table 1. The design attributes that are closely related to the customer preference "performance" are summarized in Table 2; the four design attributes are weight x1, power x2, heat setting x3 and speed setting x4. In this study, stage 4 was treated as the future period and its sentiment scores were denoted as y(t), where t is the time period. Stages 1 ~ 3 are considered the historical periods. The sentiment scores of stages 1 ~ 3, presented as y(t-3), y(t-2), and y(t-1), respectively, as well as the settings of the four design attributes, x1, x2, x3, and x4, were used to predict the future sentiment scores of "performance" in stage 4, y(t). Based on the COA based DENFIS, the time series customer preference models can be developed. The following shows the form of the fuzzy rules generated.
If $x_1$ is $MF_1$ and $x_2$ is $MF_2$ and $x_3$ is $MF_3$ and $x_4$ is $MF_4$ and $y(t-3)$ is $MF_5$ and $y(t-2)$ is $MF_6$ and $y(t-1)$ is $MF_7$, then $y(t) = f(x_1, x_2, x_3, x_4, y(t-3), y(t-2), y(t-1))$

Through comparative analysis with other methods, the reliability and accuracy of the proposed method can be assessed. The methods used for comparative analysis are K-means-based ANFIS, fuzzy C-means-based ANFIS (FCM-ANFIS), subtractive cluster-based ANFIS (SC-ANFIS) and DENFIS. The verification results of the five methods are compared using the mean absolute percentage error (MAPE).
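For reference, MAPE can be computed as follows (the values shown are illustrative, not the scores from Table 3):

```python
# MAPE as used for the comparison (predictions/targets are illustrative):
import numpy as np

def mape(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

print(mape([0.52, 0.61, 0.48], [0.50, 0.66, 0.45]))  # percent error
```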
The results obtained from the verification are shown in Table 3. The MAPE of each of the other four methods is higher than that of the method proposed in this paper, so the proposed method has higher prediction accuracy.
CONCLUSION
Through this research, a new time series customer preference modelling method based on e-commerce websites is proposed, comprising opinion mining of online reviews and a COA based DENFIS modelling method. The proposed method is verified by a case analysis, with hair dryer products and their online reviews selected as the analysis object. By comparing the mean absolute percentage error of the proposed method with that of four other methods, it is shown that the COA based DENFIS method has more prominent advantages than K-means-based ANFIS, FCM-ANFIS, SC-ANFIS and DENFIS. The method proposed in this paper therefore has higher modelling accuracy. | 2022-01-07T16:14:06.589Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "ebdcf7959f0e21c48e3f3f90758b69df0faa6f54",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/assehr.k.211209.525",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "07de923da31966a62cce84eddeb99cf4408b8816",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
119418010 | pes2o/s2orc | v3-fos-license | Effective field theories for disordered systems from the logarithmic derivative of the wave-function
We consider a spinless particle moving in a random potential on a d-dimensional torus. Introducing the gradient of the logarithm of the wave-function transforms the time independent Schroedinger equation into a stochastic differential equation with the random potential acting as the source. Using this as our starting point we write functional integral representations for the disorder averaged density of states, the two point correlator of the absolute value of the wave-function as well as the conductivity for a d-dimensional system. We use the well studied one dimensional system with Gaussian disorder to illustrate that these quantities can be computed reliably in the current formalism by using standard approximation techniques. We also indicate the possibility of including magnetic fields.
I. INTRODUCTION
The study of the localization of non-interacting electrons in disordered media has progressed considerably since the initial work of Anderson 1 in 1958. Early work on one-dimensional (1D) disordered systems included the study of the spectral densities by Halperin 2,3,4 as well as a calculation by Berezinskii 5 using diagrammatic techniques showing that all states are localized in 1D disordered systems, although this is generally difficult to extend to higher dimensions. Abrahams et al. 6 introduced a scaling theory of localization predicting that a metal-insulator transition occurs in dimensions greater than two, although there seems to be experimental evidence for a transition in two dimensions 7 . Making use of the replica trick 8 , the problem was mapped onto a non-linear σ-model 9,10,11 , which gave quantitative confirmation of the scaling approach. Efetov's supersymmetry approach 12,13 introduced a mathematically more rigorous alternative to the replica trick, which he used to prove, amongst other things, a conjecture of Gor'kov and Eliashberg 14 that random matrix theory 15,16 can be applied to the energy level statistics of particles in disordered systems.
Notwithstanding the considerable amount of work that has gone into the investigation of the localization problem, there are still many outstanding problems, for instance the lack of an order parameter 17 to describe the second-order metal-insulator phase transition. Also, finding an analytically tractable description of the localization problem, on which there has been little progress, would lead to a better understanding of disorder-based phenomena, such as the Quantum Hall Effect 18 . For this reason, any additional approaches for studying the localization problem, possibly leading to new insights, are useful.
In general, we would like to calculate disordered averages of observables that depend on a random potential V(x). These disordered averages can be calculated when the exact dependence of the observable on the random potential is known. However, when this dependence is not known, for example, for the density of states and correlators of the wave-function, other methods of averaging these observables over the disorder are needed. Usually, the disorder averages of advanced or retarded Green's functions, G ± (E) = (E − H ± iε) −1 , are calculated, since their dependence on V(x) is known. These averages are then related to the averages of the observable. Thus, one would calculate the average of the advanced Green's function and then relate it to the density of states using ρ(E) = (1/πL^d) Im ⟨Tr G −(E)⟩, where the angle brackets denote averaging over the disorder. Both of the main field theoretic techniques for investigating disordered systems, the supersymmetry 12,13 and replica 8 methods, are based on calculating the averages of products of Green's functions using a generating function and then extracting information from the result. In this paper we would like to propose a complementary approach for calculating disorder averages. This approach entails a transformation where we change from the random potential V(x) to a new set of random variables, which can be related to the logarithmic derivative of the wave-function and energy of a particle moving in the random potential. Using this transformation allows us to calculate directly averages of the density of states and correlations of the wave-function and its absolute value.
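To make the relation between the Green's function and the density of states concrete, here is a minimal numerical sketch; the finite tight-binding matrix standing in for the continuum Hamiltonian, the disorder strength, and the broadening ε are all illustrative choices of ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Stand-in Hamiltonian: 1D tight-binding chain with Gaussian on-site disorder
H = np.diag(rng.normal(0.0, 0.5, n))
H += -np.eye(n, k=1) - np.eye(n, k=-1)

E, eps = 0.3, 1e-2
G_minus = np.linalg.inv((E - 1j * eps) * np.eye(n) - H)  # advanced Green's function
rho_green = np.trace(G_minus).imag / np.pi               # (1/pi) Im Tr G^-(E)

# Compare with the epsilon-broadened sum over eigenvalues, sum_m delta(E - E_m)
evals = np.linalg.eigvalsh(H)
rho_direct = np.sum(eps / np.pi / ((E - evals) ** 2 + eps ** 2))
print(rho_green, rho_direct)  # the two estimates agree
```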
In section II, we will introduce the formalism, both for one-dimensional systems and for higher dimensions, and show how disordered averaged observables are calculated within this framework. Since the one dimensional system with Gaussian disorder is probably the best studied disordered system, with a variety of well-known results available 3,23 , it is ideal for testing and developing approximation techniques within our formalism with the ultimate aim of extending these techniques to higher dimensions, and possibly also to the case of a magnetic field. Therefore we focus in section III on the one dimensional Gaussian disordered system in order to illustrate how the formalism can be applied, using standard approximation techniques, to recover known results for the density of states 2 , and to obtain results for the 2-point correlator of the absolute value of the wave function 23 as well as the conductivity 5 . In section III we also give a realization of the model to show how the parameters describing the Gaussian disorder can be related to microscopic quantities. In the final section, section IV, we numerically calculate and generate plots of the main results obtained in section III in order to obtain an understanding of the results.
II. FORMALISM
We consider a particle moving on a d-dimensional torus, in a periodic, random potential caused by impurities in the system. We wish to calculate observed quantities of this particle, which under the assumption of self-averaging implies averaging these observables over the different realizations of the random potential, i.e.,

⟨Ô⟩ = (1/Z̄) ∫[dV] P[V] Ô[V],   (2)

where Z̄ = ∫[dV] P[V] and P[V] is the probability distribution describing the random potential. If we assume that the impurities are quenched, then the motion of the particle is described by the time-independent Schrödinger equation. We impose periodic boundary conditions and for the moment assume that time-reversal symmetry is not broken, so that the wave function can always be chosen real.
To introduce our formalism, we consider the logarithmic derivative of the wave function 35 instead of the wave function, and correspondingly change from the Schrödinger equation to the equation of motion of this new variable. There are several advantages to this transformation, particularly from a functional integration point of view. Firstly, in contrast to the Schrödinger equation in which the random potential multiplies the wave function, the equation of motion governing the new variable is a non-linear stochastic differential equation in which the random potential simply plays the role of a random source. Secondly, the physically irrelevant normalisation of the wave function is eliminated. Lastly, as is well known 19 , the localization length cannot be extracted from the correlations of the wave function, as these are always short-ranged due to the random phase cancellations. Instead, the correlations of the absolute values of the wave function should be computed. The current formulation is ideal for this purpose, as will be illustrated later.
Although the strategy is identical, there are subtle differences in the introduction of the formalism for onedimensional systems, where we transform from the scalar wave function to a scalar variable, and higher dimensions, where the transformation is from the scalar wave function to a vector variable. For this reason, we first introduce the formalism for one-dimensional systems and then afterwards consider the more general theory in higher dimensions.
A. One dimensional formalism
In one dimension, we introduce the following real-valued field related to the logarithmic derivative of the wave function,

φ(x) = d/dx ln ψ(x) = ψ′(x)/ψ(x),   (3)

where we use the notation φ ≡ φ(x), unless the argument is specifically stated. The periodic boundary conditions on the wave function imply that φ is also periodic, but cannot be a constant function. Using φ in the Schrödinger equation, we obtain the first-order Riccati equation

φ′ + φ² = V − E,   (4)

where we work in units in which ħ²/2m = 1. Note that in these units, V and E have the dimensions (length) −2 , while φ has the dimensions (length) −1 .
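A quick numerical check of the transformation (3)-(4) can be done with any smooth, nodeless, periodic test wave function; the choice ψ = exp(sin x) and the spectral-derivative helper below are our own illustrative devices, not taken from the paper.

```python
import numpy as np

# Periodic, nodeless test wave function on [0, 2*pi): psi = exp(sin x)
L, n = 2 * np.pi, 2048
x = np.linspace(0.0, L, n, endpoint=False)
psi = np.exp(np.sin(x))

def d_dx(f):
    """Spectral derivative for periodic functions."""
    k = 2j * np.pi * np.fft.fftfreq(n, d=L / n)
    return np.real(np.fft.ifft(k * np.fft.fft(f)))

phi = d_dx(psi) / psi              # phi = psi'/psi, eq. (3)
lhs = d_dx(phi) + phi ** 2         # phi' + phi^2
rhs = d_dx(d_dx(psi)) / psi        # psi''/psi = V - E, from the Schroedinger equation
print(np.max(np.abs(lhs - rhs)))   # ~1e-10: the Riccati equation (4) holds
```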
Let us now consider (2). In principle, if we knew the functional dependence of the observableÔ on V , we could compute the desired quantities directly from (2). However, except in extremely trivial cases, we do not know this dependence and, in particular, we do not know the functional dependence of eigenvalues and eigenfunctions on V , which are the averages we would like to compute. On the other hand, the functional dependence of these, and many other observables, on the variable φ is fairly easy to determine (see section II D). It therefore seems like a good strategy to change integration variables in (2) from V to φ using the simple relation (4). Doing so will shift the complexity of (2) from the observables to the action (probability distribution) for φ, which will generally be highly non-linear, even though P [V ] may be simple, i.e. Gaussian. The latter problem is, however, more amenable to treatment through the arsenal of perturbative and non-perturbative field theoretic techniques than the original problem as stated in (2).
To facilitate the change of variables (4) in (2), we introduce the identity 36

1 = (1/N) ∫dĒ ∫[dφ] |J| δ[V − φ′ − φ² − Ē],   (5)

where the functional integral is over all possible non-constant periodic configurations of φ, N is the total number of states (dimension of the Hilbert space), and the Jacobian |J| = |det(−d/dx + 2φ)|. Note that the role of Ē under our change of variables is to replace the integration over the constant mode of V, which cannot be done with φ. It is easy to see, using the conditions imposed on φ, that the operator −d/dx + 2φ can be transformed to −d/dx through a similarity transformation, thus not affecting the determinant. Therefore, |J| is simply a multiplicative constant, which can be combined with the normalisation of the functional integral. Inserting the identity (5) into the averages of (2), and then completing the integration over the disorder, allows us to obtain a field theory, formulated in terms of the variable φ, for the disordered average of observables,

⟨Ô⟩ = (1/Z) ∫dĒ ∫[dφ] Ô[φ, Ē] P[φ′ + φ² + Ē],   (6a)

where

Z = ∫dĒ ∫[dφ] P[φ′ + φ² + Ē].   (6b)

It should be noted that although we considered only a random potential when constructing this field theory, it is also possible to obtain the field theory when there is both a random potential V(x) and a deterministic potential W(x). In this case the result is similar to (6a), except that the energy is now shifted by W(x), i.e., Ē is replaced by Ē − W(x).
B. Higher dimensions
In higher dimensions, we introduce a real-valued vector field related to the gradient of the logarithm of the wave function,

A(x) = ∇ ln ψ(x) = ∇ψ/ψ.   (8)

Since the wave functions are assumed to be of class C 2 , we note by direct computation that ∇ × A = 0. Also, periodicity of ψ demands that A is periodic and does not contain a constant mode. Using A in the higher-dimensional Schrödinger equation, we obtain the Riccati equation

∇·A + A² = V − E.   (9)

As in the one-dimensional case, we can introduce an identity to implement a change of variables, based on (9), between V(x) and the field A, constrained as described above:

1 = (1/N) ∫dĒ ∫[dA] |J| δ[V − ∇·A − A² − Ē],   (10)

where the functional integral is restricted to curl-free, non-constant periodic configurations of A, and N and |J| have the same meaning as in (5).
Using the identity (10) in (2) and integrating over the disorder gives the corresponding field theory for the disordered average of observables in higher dimensions,

⟨Ô⟩ = (1/Z) ∫dĒ ∫[dA] Ô[A, Ē] P[∇·A + A² + Ē],   (11)

with Z the corresponding normalisation and the same restrictions on the A integration as in (10). We can of course solve the constraint ∇ × A = 0 by setting A = ∇φ, so that the resulting theory can be expressed as a scalar theory. However, the form (11) is particularly useful for the inclusion of a magnetic field B, since one only has to treat A as a complex field with the constraint ∇ × A = 0 replaced by ∇ × A = ieB/c. We do not work out the details of this effective field theory in this paper, but rather focus on the one-dimensional case to illustrate the basic ideas.
C. Translational Invariance
Central to our analysis will be the translational invariance of the effective action, which stems from the assumed translational invariance of the probability distribution, P[V], and implies that translational invariance is restored after averaging over the disorder. The translational invariance of the action leads to the appearance of an implied integration over a collective coordinate in the functional integral, corresponding to integration over the moduli space associated with the translational symmetry. It is appropriate to make this integration over the collective coordinate explicit to ensure that it is correctly taken into account. To do this, we use a method inspired by the Faddeev-Popov 20 quantization method of gauge theories. For simplicity we consider here the one-dimensional case, and indicate below how to extend to higher dimensions.
We introduce the identity

1 = c⁻¹ ∫dx₀ F′[φ_{x₀}] δ(F[φ_{x₀}]),   (12)

where φ_{x₀} ≡ φ(x + x₀), F is an arbitrary functional of φ which is not translationally invariant, and F′[φ_{x₀}] denotes the derivative with respect to x₀. Unless explicitly stated, the integration is over the interval [−L/2, L/2]. The proportionality constant c can in some cases be divergent, due to the existence of the Gribov ambiguity 21 for certain choices of F. Under these conditions, one needs to be careful to extract the spurious divergent term after completing the functional integration, as this term cancels with a similar term in the normalization.
As is usually done in gauge theories, it is more convenient to implement the identity (12) after integrating both sides with c⁻¹ ∫dν f(ν), where f is an arbitrary function, so that

∫dν f(ν) = c⁻¹ ∫dx₀ f(F[φ_{x₀}]) F′[φ_{x₀}],   (13)

where the left hand side is independent of φ. Multiplying both the numerator and denominator of (6a) by the left hand side of (13) gives the gauge-fixed representation of the disordered average, (14), where the partition function Z_F is given by the corresponding gauge-fixed functional integral with Ô replaced by 1. Note that the functional F[φ] acts in a similar fashion to gauge-fixing terms in conventional gauge field theories. Since the choice of F[φ] is arbitrary, we can choose F[φ] so that our calculations are simplified. As we shall see later, this choice will depend on which observables we wish to average. Also, any proportionality constants that appear due to the use of the Faddeev-Popov quantization method cancel out, since we use the identity in both the numerator and the denominator (although, as mentioned earlier, extra care is needed if Gribov copying occurs).
To extend to higher dimensions, we introduce a vector-valued functional F[A_{x₀}] and the associated Faddeev-Popov Jacobian det(∂F_i/∂x_{0j}). The results obtained above can then be generalized to higher dimensions through the replacement of ∫dx₀ F′[φ_{x₀}] by ∫d^d x₀ det(∂F_i/∂x_{0j}), the latter determinant denoting the corresponding Jacobian.
D. Disordered averaged observables.
Instead of writing the observables as a functional of V , we would like to obtain them directly as functionals of the fields φ or A and the energy,Ē. It is possible to do this for observables like the density of states, correlators of the wave function, and the conductivity.
Density of States
The density of states at energy E is defined as

ρ(E) = (1/L^d) Σ_m δ(E − E_m),

where E_m are the eigenvalues of the Schrödinger equation, and d denotes the dimension of a system of size L. However, if we consider the identities (5,10), we see that the functional integral can be considered as a sum over all possible solutions of the Riccati equations (4,9), i.e. all possible eigenstates with all possible eigenvalues of the corresponding Schrödinger equation, and is thus the total number of states. However, when fixing the integral over the energy at E, the functional integral yields only the number of eigenstates at E, and is therefore proportional to the density of states, ρ(E), for a particular configuration of the disorder. After integrating over V, this yields the disorder-averaged density of states. Thus, we obtain via inspection the formula for the average density of states, normalised by the total number of states,

⟨ρ(E)⟩/N = (1/Z) ∫[dφ] P[φ′ + φ² + E]   (16)

for one-dimensional systems, and the analogous expression with φ′ + φ² replaced by ∇·A + A² for higher-dimensional systems.
Correlators of the wave function
To obtain the observable related to correlations of the wave function, ψ(x)ψ(y), we use the definition of the field φ, (3), to write the unnormalised wave function as

ψ(x) = exp(∫₀ˣ dx′ φ(x′)).   (17)

Up to a global phase factor, all information that can be obtained from the wave function can also be obtained from (17), including information about the phase, which we need to consider carefully when computing the localization length so as to avoid obtaining incorrect results due to random phase cancellation.
The phase of the wave function changes as the wave function changes sign. From (3), we see that φ must diverge at these points. We thus need a prescription to calculate the integral in the exponent of (17) at these points, since the result must be finite. The prescription we use is to integrate over a contour from 0 to x, where the contour avoids the positions on the real axis where there are singularities in φ(x) by moving around them in the upper complex plane with a semi-circle of radius ε.
This contour integral can be written in terms of the principal value of the integral plus a phase which depends on the number of times a singularity occurs in the interval [0, x]. Using this prescription, we are able to separate the phase from the integral over φ. Thus we obtain for the normalised wave function (the integral from 0 to x being understood as the contour described above)

ψ(x) = (−1)^{n(x)} N[φ]⁻¹ exp(P ∫₀ˣ dx′ φ(x′)),   (18)

where x_j are the positions of the singularities and n(x) counts the number of singularities x_j in the interval [0, x]. To avoid the problems associated with the random phase cancellation when computing the localization length, we need to calculate correlators between the absolute values of the normalised wave functions 19 . Using (18) with (14), we obtain an expression, (20), for calculating the disordered average of the 2-point correlator of the wave function at fixed energy E. It is easy to check that this correlator is translationally invariant and that it only depends on |x − y|.
Since F [φ x0 ] and f are arbitrary, we make a choice which simplifies the calculation of the correlator by cancelling out the normalisation factor, N [φ].
To do this, we choose f = 1 and a suitable gauge functional F, given in (21). Using (21) in (20) yields the gauge-fixed correlator (22). Note that the integrand in (22) can be written as a total derivative with respect to x₀, and thus the integral over x₀ naïvely gives a result of zero. This is, however, an artifact of the choice of the gauge in the Faddeev-Popov identity, which is zero due to Gribov copying.
To obtain the correct result for the averaged observable, it is necessary, after completing the functional integral, to extract the terms that give zero using some form of regularization, and cancel them out with similar terms that occur in the normalization. What remains is the correct result for the disordered average of the observable.
Using the periodic boundary conditions of φ, we see that we can transform (22) into a form in which the symmetries of the system are more explicit, given in (23). From (23a) we see that the correlator is translationally invariant, while (23b) shows that the correlator has a reflection symmetry around |x − y| = L/2. Thus one needs to ensure that any approximations that are made respect these symmetries.
These considerations can be generalized to higher dimensions, with the integral in the exponent of (18) being replaced by the line integral ∫₀ˣ A·ds along any path connecting 0 and x. Due to the constraint ∇ × A = 0 this integral is path independent.
We note that if the wave function changes its sign (A becomes singular) along a certain path connecting 0 and x, it must do so along any other path, which implies that the associated singularity in A must appear in all possible paths connecting 0 and x. This in turn implies that the singularity in A occurs on a surface separating 0 and x into disconnected regions. Any path connecting 0 and x may therefore cross a singularity, and a prescription to handle this singularity is required. We can do this in the same way as in the one-dimensional case: if t parameterizes the path, we can avoid the singularity by a detour in the complex plane. In this way the absolute value and random phase of the wave function can again be separated, with the principal value of the line integral determining the absolute value. The normalisation of the wave function can again be cancelled by an appropriate (non-unique) choice of F, so that the correlation in higher dimensions is given by (24).
Conductivity
The conductivity of a system of non-interacting fermions is given by the Kubo formula 22 . For our purposes it is convenient to integrate the Kubo formula by parts and use the periodic boundary conditions on the wave functions, so that the real part of the conductivity is given by (25), where the quantity Φ(E, ω) is defined in (26). Here f(E) is the Fermi function, and all energies are measured in units in which ħ²/2m = 1. Without loss of generality, we can focus on the disorder average of the quantity Φ(E, ω), which is of course just the contribution to the conductivity of a particle at energy E.
We are able to obtain the form of the observable for the disorder average of Φ by using a technique similar to the one used to obtain the expression for the density of states. The only difference is that there is a summation over two eigenvalues in (26), and so we need to introduce two identities of the form (5), where the integration over φ is replaced by integrations over φ_α and φ_β, which correspond to two solutions of the Schrödinger equation with the same random potential. We can then write the average of Φ as (27). Completing the integral over φ_β, we obtain (28), where φ̄_β(x) is a functional of φ_α determined by (29). We can now use the Faddeev-Popov method, where we introduce the same choice of gauge for the φ_α and φ̄_β fields as in the previous section, which allows us to cancel out the N[φ_α] and N[φ̄_β] normalisation factors respectively, so that the disorder average of Φ takes the form (30). In higher dimensions we use the same strategy as above to obtain (31).
III. ONE DIMENSIONAL SYSTEMS WITH GAUSSIAN DISORDER
In this section we consider one dimensional Gaussian disordered systems. We do so to illustrate how the formalism as described in the previous section can be applied, using standard approximation schemes, to recover known results for the density of states 2,23 and the conductivity 5 .
If we have Gaussian disorder, then ⟨V(x)V(y)⟩ = l⁻¹ δ(x − y) and the probability distribution P[V] is given by

P[V] ∝ exp(−(l/2) ∫dx V(x)²),   (33)

where the dimension of l is (length) 3 . Normally, as in the previous section, we are interested in observables at fixed energy, E. In this section we concentrate on these and therefore set Ô[φ, Ē] = Ô[φ] δ(Ē − E). Using (33) in (6), or its equivalent form (14), we obtain a φ⁴ field theory for calculating the disorder averages of fixed-energy observables,

⟨Ô⟩ = (1/Z) ∫[dφ] Ô[φ] e^{−S[φ]},   (34a)

where the action is given by

S[φ] = (l/2) ∫dx (φ′ + φ² + E)².   (34b)

Here Z and Ô denote the normalization and observable in a generic gauge, and we omit the subscript F of (14). It is not possible to calculate the functional integral in (34) exactly, so we use perturbative approximations in order to calculate the disordered averages. If l is large, equivalent to a weak disorder system, we expand around a saddle point in (34). For small l, or a strong disorder system, we use a Hubbard-Stratonovitch transformation to obtain a functional integral which can be approximated well in this regime.
A. Weak disorder limit
For large 37 l, we calculate (34) perturbatively to one-loop order using a saddle point approximation. Following a procedure similar to the one used in Zinn-Justin 24 , we find approximate saddle point solutions 25,26 which satisfy the saddle point equation to leading order in L⁻¹, and thus become exact in the thermodynamic limit. These saddle point solutions for positive energies are

φ_c^m(x) = −(mπ/L) tan(mπx/L),   (35a)

where m is the nearest even integer to √E L/π. For negative energies the saddle point solutions, (35b), are instanton-anti-instanton pairs built from tanh profiles, where we assume a periodic continuation of φ_c^±(x) outside [−L/2, L/2], and where the relative separation 2x₀ between the instanton and the anti-instanton is large 26 . We shall see below that the constraint that φ contains no constant mode implies 2x₀ = L/2, so that this condition is automatically fulfilled. Under these conditions of well separated instanton and anti-instanton pairs the dilute gas approximation is valid 26 .
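As a concrete check of the positive-energy saddle point: with the action reconstructed above as (34b), the profile (35a) makes the integrand φ′ + φ² + E vanish identically at the quantized energy E = (mπ/L)². A short symbolic sketch (our verification, not code from the paper):

```python
import sympy as sp

x, L = sp.symbols('x L', positive=True)
m = sp.Integer(4)                                    # an even integer, as required

phi = -(m * sp.pi / L) * sp.tan(m * sp.pi * x / L)   # saddle point (35a)
E = (m * sp.pi / L) ** 2                             # quantized energy E = (m*pi/L)^2
residual = sp.simplify(sp.diff(phi, x) + phi ** 2 + E)
print(residual)  # 0: the action density (phi' + phi^2 + E)^2 vanishes at the saddle
```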
We wish to emphasize that the two solutions obtained for E ≥ 0 and E < 0 have different physical behaviour, which can be explained by noting that the potential changes from parabolic for positive energies to a double well potential for negative energies.
For positive energies, the requirement that the solution must have periodic boundary conditions, as well as the constraint that there cannot be a constant solution, implies that the energies are quantized, leading to (35a). Also, the condition that m is an even integer is due to the periodic boundary conditions of the Schrödinger wave function.
We note that the solutions (35a) are topologically different for different m, as each solution has a different number of singularities. Since the number of singularities are related to the number of nodes of the wave function, each saddle-point solution corresponds to wave functions with a different number of nodes. To be more precise, when compactifying the real line to S 1 by identifying +∞ and −∞, we note that φ is a mapping from S 1 to S 1 , where the mappings are classified according to winding numbers. It is simple to see that the solution φ m c has winding number m.
In order to obtain a reliable approximation to the functional integral, it is necessary to sum over all the topologically different sectors. Additionally, since the fluctuations, η, are smoothly varying functions around the saddle point solutions, we see that all the information about the phase (which is determined by the number of singularities in φ) is contained in the classical solutions, so that the localization length (which is extracted from the absolute value of the wave function) is purely determined by the fluctuations, η.
There is, however, a complication in the saddle point approximation, since the approximation contains a zero mode, which is a manifestation of the translational invariance of the action (34b). To circumvent the problems associated with the zero mode, we make the translational symmetry explicit in the form of a collective coordinate 27,28,29 which we introduce by using the Faddeev-Popov method discussed earlier. Inserting the identity (12) into (34) with the choice F[φ_{x₀}] = ∫dx φ₀(x)φ(x + x₀), in order to project out the zero mode φ₀ ≡ dφ_c^m/dx, we obtain the positive-energy saddle point approximation, (36), where ΔE = E − (mπ/L)² and the propagator ∆ is determined by the fluctuation operator of the action. Here we noted that the constraint δ(∫dx φ_c^m) is trivially satisfied, so that the constraint δ(∫dx φ) in (34) simply becomes the constraint that the η integration is over all non-constant modes. Also note that in the pure limit (l → ∞) the value of m, and thus the topological sector, is fixed by the energy.
Integrating over the zero mode and making this approximation yields (37), where the accent denotes that the zero mode is excluded when calculating the functional integral. Also note that F′[φ_c^m] is a divergent constant (the zero mode is not normalizable), which may depend on m. However, requiring that the disorder average gives the correct result in the pure limit, we find that F′[φ_c^m] is a constant independent of m, which can thus be incorporated into the normalization.
In the negative energy region, the saddle point equation has a double well potential. The constraint that φ has no constant mode does not allow us to obtain constant saddle point solutions situated at the minima of the potential; thus the only other possible solutions are instanton solutions 25,26,27,28,29,30 in which tunnelling occurs from one minimum to the other. Since we must also satisfy periodicity, there must also be tunnelling back to the original minimum. This to and fro tunnelling can occur multiple times 25 , corresponding to topologically different sectors over which one has to sum, but since there is an exponential decay associated with each tunnelling process we consider only the solutions, (35b), where the tunnelling occurs once.
Once again the saddle point approximation contains a zero mode, which needs to be integrated out. Additionally, the saddle point solution (35b) allows another quasi-symmetry to exist 27 . This quasi-symmetry is due to the fact that for large system sizes, local translations are possible that only change the action by terms of order exp(−cL). To be specific, for large separations in (35b), a translation in x̄₀ has an exponentially small effect on the action, so that to leading order in 1/L it is a symmetry of the action, which in the L → ∞ limit becomes an exact symmetry. Associated with this approximate symmetry there is again an approximate zero mode.
To circumvent the problems associated with the zero modes, we make the translational symmetries explicit in the form of collective coordinates 27,28,29 , which we introduce by using the Faddeev-Popov method discussed earlier. However, since there are two collective coordinates which we wish to introduce simultaneously, we need to modify the identity in (12) to contain two gauge-fixing conditions, (38a) and (38b) 27 . In order to project out the zero modes, we choose the gauge functionals (39), where the zero mode φ₀ and the quasi-zero-mode φ̄₀ are given in the Appendix. Using (38) and (39) in (34), changing the variables φ(x + x₀) → φ(x) in the functional integral and then writing φ = φ_c^± + η, we obtain (40). Here we have handled the constraint δ(∫dx (φ_c^± + η)) by restricting the η integration to be over non-constant modes, leaving the constraint δ(∫dx φ_c^±), which can explicitly be written as δ(∫dx φ_c^±) = δ(x₀ − L/4)/|∫dx ∂φ_c^±/∂x₀|. Note that the constraint on x₀ forces the instanton and anti-instanton pair of (35b) to be separated by L/2, which for large L allows us to use the dilute gas approximation 26 and the corresponding dilute-gas form of the classical solution, (41). After integrating over x̄₀ in equation (40), and using the dilute gas results 27 , we obtain the negative-energy saddle point approximation, (42), where the notation [dη]′ denotes that the zero modes are excluded in the functional integral.
To calculate the functional integrals in (36) and (42), we need to be able to calculate the propagator, ∆, and the determinant of its inverse, det(∆⁻¹). This involves solving the eigenvalue equation of the fluctuation operator, (43), where φ_c is given by (35a) in the positive energy region, or by (41) in the negative energy region. Although it is possible to solve for the eigenvalues and eigenfunctions of (43) exactly in the different energy regions, it is not possible to obtain closed expressions, which are necessary when calculating the propagator. We thus need a consistent approximation for calculating the eigenvalues and eigenfunctions of (43) for both positive and negative energies. (These approximations, as well as comparisons to the exact results, are discussed in detail in the Appendix.) For positive energies, we note that the dominant property of the classical solution (35a) is that it contains singularities which appear periodically with period L/m. Thus, the eigenfunctions of (43) must be zero at the points where these singularities occur. Additionally, the periodicity of the classical solution leads to Bloch characteristics of the eigenfunctions, so that the eigenfunctions are also periodic over the interval L/m, which implies that we only need to solve (43) over the interval [−L/2m, L/2m]. We wish to make an approximation for the propagator that captures the essential characteristics of the original eigenfunctions. As a first approximation, we treat the tan² potential in (43) to lowest order in perturbation theory, where we capture the singularities of the potential by imposing vanishing boundary conditions at ±L/2m. We thus need to solve the free eigenvalue equation, (44), with vanishing boundary conditions at ±L/2m. We can then calculate the approximate propagator and determinant in the positive energy region, given in (45). In the negative energy region one finds from the exact solutions two zero modes (which we must eliminate) and a doubly degenerate bound state at λ = 3|E|. The rest of the spectrum consists of fourfold degenerate scattering states starting at λ = 4|E|. From the exact solution it turns out (see the Appendix) that the eigenvalues and eigenfunctions of the scattering states can be well approximated by those of a free particle. This amounts to writing tanh²(x ± x₀) = 1 − sech²(x ± x₀) in the classical solution appearing in (42b) and then treating the sech²(x ± x₀) terms to lowest order in perturbation theory. Thus the approximate eigenvalue problem that has to be solved to obtain the spectrum and eigenfunctions of the scattering states is the free equation (46), where the Ψ_n are periodic on a system of length L. Taking into account the bound states and scattering states, approximated as a free particle spectrum, we obtain the determinant in the negative energy region, (47). The contribution of the bound states to the propagator can be neglected, as it involves the product of two very well localized eigenfunctions evaluated at points which are well separated. Taking into account only the scattering states, again approximated as free particle states, we find the corresponding propagator (cf. (A15) in the Appendix). Using these approximate results for the propagator and determinant for positive energies (45) and negative energies (47) with the respective formulations for disordered averages, (36) and (42), now allows us to calculate the average value of observables to leading order in L.
As mentioned earlier, this approximation is valid when l is large in a sense determined by the other two scales, namely E and L. To find the precise criterion one has to evaluate the higher order loop corrections to (37) and (42); e.g., in the case of the density of states, one has to evaluate higher order vacuum diagrams. Doing this, one finds that these contributions can be neglected under the condition that El/L ≫ 1. There are two limits under which this condition can be fulfilled. Firstly, for a fixed energy E, we find that l ≫ L/E, which is large for large system sizes. The approximation thus holds in the weak disorder limit. Secondly, for a fixed amount of disorder, we have that E ≫ L/l. We thus find that the approximation also holds in the high energy limit. Note, however, that in the thermodynamic limit (L → ∞), the condition cannot be satisfied unless either l → ∞ or E → ∞, implying that this approximation for disordered systems only holds for finite system sizes.
Density of states
We now apply the saddle-point approximation discussed above to the disorder averaged density of states given by (16). For positive energies, this amounts to computing (37) with the observable Ô = 1. Upon integrating over the functional integral to obtain the factor det(∆⁻¹)^{−1/2}, we obtain the result (48a), where N₊ is an unknown normalization factor, independent of the energy, that needs to be fixed in some manner. For negative energies, the saddle-point approximation of (16) leads to (42) with Ô = 1. After integrating over the resulting functional integral, we obtain (48b), where N₋ is another energy-independent normalization factor. The latter result agrees with the result of Halperin 2 obtained by different means, although his result does not include the higher order corrections. It should be noted that in order to compare with the results of Lifshits et al 23 , which are given for a system of infinite size, one must take the L → ∞ limit in the results above such that the ratio El/L is fixed.
2-point correlators
If we consider the disorder average of the 2-point correlator given by (22), we note that the exponential terms can be written as in (49), where we have relabelled the x₀ integration in (22) to z. The linear terms in η in the exponential can be written as an integral over the interval [−L/2, L/2] by using a combination of step functions, where S(x, y, z|x′) can be considered as a source term for the η fields; its explicit form follows, and a periodic continuation is understood outside [−L/2, L/2]. Applying the positive energy saddle point approximation (37) to (22), and making the change of variables z → z + x₀, we obtain (50). Note that as a result of the gauge that we chose to cancel the normalization of the wave functions giving rise to (22), equation (50) contains a total derivative with respect to z, which naïvely gives a result of zero when completing the integration over z. However, since we have a ratio of total derivatives, we should obtain a non-zero result if we use a consistent regularization method. With this in mind, we integrate over z and cancel the result with a similar term in the denominator of (50). We can now integrate over the fluctuations to obtain (51), with the remaining quantities defined in (52). After using (45) for the propagator and determinant, we obtain (53), with the auxiliary functions defined in (54) and (55). In the negative energy region, the saddle point approximation (42) is obtained using the dilute gas approximation. However, we find that the approximate saddle point solution (35b) in the dilute gas approximation, (41), breaks the symmetries of the system; thus we first need to write the exponential terms in (22) in a form where the symmetries are explicit. We do this by using (23). Using this form for the exponential insertions in (22), along with the saddle point approximation for negative energies (42), and then integrating over the fluctuations, gives (56), with the corresponding definitions in (57). Note that in obtaining (56), we once again relabelled the integration variable in (22) from x₀ to z, translated z → z + x₀, and then, as above, cancelled the z dependent terms giving rise to a total derivative with respect to z. Note from the saddle point solution (41) that to leading order this correlation function decays or grows like exp(±√|E| x), confirming the result of Lifshits et al 23 that the localization length is proportional to 1/√|E| for large negative energies.
Conductivity
To calculate the disorder average of the conductivity given by (30) in a saddle point approximation, we first need to be able to solve for φ̄_β ≡ φ̄_β[φ_α] using (29). To do this, we make the ansatz that φ̄_β consists of a classical and a fluctuating term, i.e. φ̄_β = φ_β^c + χ. Using this ansatz, as well as the expansion φ_α = φ_α^c + η in (29), and neglecting the coupling terms between the classical and fluctuating terms, we find that χ = η and that φ_β^c must satisfy the saddle point equation with the energy shifted by ħω, (58), with the positive energy solution, (59), of the same form as (35a) with m replaced by q. To satisfy the periodic boundary conditions, we see that q must be the nearest integer to (m² + ħωL²/π²)^{1/2}.
Applying the saddle point approximation in the positive energy region (37) (where we relabel x₀ to z) to (30), and making the change of variables x → x + z, x̄ → x̄ + z, x₀ → x₀ + z and x̄₀ → x̄₀ + z, allows us to integrate over z (since the integrand is now independent of z) to obtain (60), where S(x′) ≡ S(x, x̄, 0|x′) and the classical solutions in the positive energy region are now denoted by the superscript m. Note that the total derivatives that appear are due to the original Faddeev-Popov method used to cancel out the normalizations of the wave functions. As before, we can cancel them with similar terms in the denominator of (60a). Introducing source terms for the η(x) and η(x̄) terms, integrating over the fluctuations and then integrating by parts, we obtain (61), with D_n given in (54).
B. Strong disorder limit
In this limit, when l is small, we use a Hubbard-Stratonovitch transformation on (34), introducing an auxiliary field Λ, so that the disorder average takes the form (62a), with the Λ-dependent observable given by (62b). We now approximate (62) by expanding (as described in Zinn-Justin 24 ) up to first order in the loop corrections. We do this by first splitting the integral over Λ into an integral over the constant mode, Λ₀, and non-constant modes η. This is achieved by inserting the identity ∫dΛ₀ ∫[dη] δ[Λ − Λ₀ − η] δ(∫dx η) into the numerator and denominator of (62a). Integrating over Λ, we obtain (63), where the observable O[E, Λ₀ + η] is given by (62b) with Λ = Λ₀ + η. We wish to approximate (63) by neglecting the φ²η coupling term that appears in O[E, Λ₀ + η]. By examining the effective action for Λ₀, expanded around the equilibrium value of Λ₀, we find that one can neglect the coupling term if El/L ≪ 1. As in the saddle point approximation, this condition can be fulfilled in two limits, namely the strong disorder limit (l ≪ L/E) or the low energy limit (E ≪ L/l). Also, in the thermodynamic limit, this condition always holds for fixed energy and disorder.
If we neglect the φ²η coupling term, we can integrate out the η integral, so that the disordered average takes the form (64a), with the observable given by (64b). The structure of the integrand allows us to integrate over positive Λ₀ in (64a) provided we take the real part of the integrand, giving (65). Here we considered only the lowest order approximation, in which we completely neglected the contribution from the φ²η term, but it is easy to extend (64) to include the contribution of the quadratic terms in η arising from the φ²η coupling, upon which the η integral can still be done to yield a determinant, which gives higher order corrections to (64).
Density of States
For the average density of states, the observable (64b) takes the form (66), which we can use in (65) to obtain the strong disorder result for the density of states, (67).
2-point Correlations
The observable used to calculate the correlator in the strong disorder (dual) region can be obtained from (22). Explicitly writing the total derivative and using the approximation in (64b), we integrate over the φ field to obtain (68), where ∆(x, x″) = (−d²/dx² + 2iΛ₀/l)⁻¹. Using this in (65), extracting the terms that are x₀-independent and then cancelling the x₀ integral with a similar term in the normalization, we obtain the strong disorder correlator, (69a)-(69c).
C. Microscopic realisation of the model
It is useful to have a microscopic realisation of the model presented in the previous sections that relates the parameters to microscopic quantities. For this purpose, we consider a one dimensional model of N Dirac-delta scatterers placed randomly on a ring, with the potential given by

V(x) = a Σ_{j=1}^{N} δ(x − x_j) − Na/L.

Here a is a dimensionful constant (units of (length) −1 ) that determines the strength of the scatterers, and the subtracted term is chosen so that ⟨V⟩ = 0. Applying our formalism, the disordered average of some observable Ô is now given by (71). Introducing a Fourier representation for the functional Dirac delta, we can write (71) as (72), with a similar expression for Z. Assuming that the scatterers are weak (a small) and the system size L is large, we expand to lowest order in a and 1/L. Performing the Λ integration then reproduces the Gaussian form of the previous sections, and we can therefore identify the disorder parameter, l, of our Gaussian model as l = L/(2Na²) = l̄/(2a²), with l̄ = L/N the mean free path length.
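A short numerical sketch of this identification, with all microscopic values chosen purely for illustration:

```python
# Identifying the Gaussian disorder parameter from microscopic quantities:
# l = L / (2 N a^2) = lbar / (2 a^2). All numbers below are illustrative.
L = 1.0e-6       # system size (m), assumed
N = 100          # number of delta scatterers, assumed
a = 2.0e3        # scatterer strength (1/m), assumed

lbar = L / N                 # mean free path length
l = lbar / (2.0 * a ** 2)    # dimensions (length)^3, as stated above
print(f"mean free path = {lbar:.2e} m, l = {l:.2e} m^3")
```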
IV. NUMERICAL RESULTS
In this section we numerically calculate the main results that we obtained in the previous sections and generate plots from these calculations in order to obtain an understanding of the results.
A. Density of States
There are three different regions that we need to consider when calculating the disorder averaged density of states. Firstly, if E ≫ L/l, then the weak disorder saddle point approximation in the positive energy region, (48a), holds. For E ≪ −L/l, the weak disorder saddle point approximation for negative energies, (48b), is used. Finally, when −L/l ≪ E ≪ L/l, we use the strong disorder approximation, (67). Thus, we find that the weak disorder saddle point approximation describes the high energy tails of the average density of states, while the strong disorder limit describes the low energy states. Note that for a fixed disorder, l, the only approximation that holds in the thermodynamic limit, L → ∞, is the strong disorder approximation, as the regions described by the saddle point approximation tend to negative and positive infinity.
We calculate the disorder averaged density of states for finite L, with various disorder values, using the approximations in their respective energy regions. These results are shown in Fig. 1 in arbitrary units. Figure 1(a) shows the plot for the positive high energy tail for various disorder values. Note that at high energies, the density of states is peaked around the discrete values that one would get in the pure limit. Also, the width of the Gaussian distribution around these discrete values increases as the disorder in the system increases. Eventually, when the disorder is large enough (or the energy low enough, as seen in the plot) the Gaussian distributions start to overlap, which leads to a change in the density of states from an almost pure behaviour to a strong disorder behaviour. Note, however, that the criterion E ≫ L/l no longer holds in this region, and that the strong disorder approximation should be used instead. Figure 1(b) shows the density of states for the low energy states. As the disorder increases, the width of the strong disorder density of states increases. In Fig. 1(c), the result for the negative energy tail is shown. In this region the density of states falls off exponentially to zero. To describe the density of states over all energies, the arbitrary normalization factors appearing in the three different regions should be fixed by requiring a continuous matching at the transitional points E = ±L/l and by imposing some global normalization condition. Here we have only imposed an arbitrary normalization within each region to exhibit the main features of the different regions.
From the plots in Fig. 1, we see that there is a crossover from the almost pure system behaviour at large positive energies to an exponential decay at negative energies. Of particular note is the crossover region at small energies, where the strong disorder approximation is valid, leading to a non-zero result for the density of states at zero energy.
Also, we note that as the parameter l decreases, the width of the density of states in the negative energy region increases due to the creation of additional bound states in the more disordered system, whereas for large l, that is, a purer system, there are fewer states at negative energies and the peak increases, leading to the E^{−1/2} singularity at zero energy for pure systems.
B. 2-point correlators
The correlation function in the weak disorder saddle point approximation is given by (51) using (53) for positive energies, and (56) for negative energies, while (69) gives the correlation function in the strong disorder approximation. Without loss of generality, we can set y = 0 and x = dL, allowing us to calculate the correlation function in the appropriate energy regions where the various approximations hold. These results are shown in Fig. 2 and Fig. 3. Figure 2 shows the result of the correlation function at a fixed value of l and L for various energies. Figure 2(a) shows the high positive energy region, where the disorder is minimal and leads to a weak modulation of the envelope. Note that the solution is periodic, with a period shorter than dL, with only one period being plotted in Fig. 2(a).
As the energy is lowered even more, the region where the strong disorder approximation (−L/l ≪ E ≪ L/l) holds is reached, as shown in Fig. 2(b). The correlation function decreases as the distance increases until d = 0.5L, after which the correlation increases again. This is of course due to our ring topology. Since the bulk of the states occur in this region, we consider the behaviour of the strong disorder region in the next figure, Fig. 3. Figure 2(c) shows the correlator for large negative energies, where the weak disorder saddle point approximation (E ≪ −L/l) once again holds. Here the results show that the correlation function decays exponentially, where the decay length is determined by the energy and is largely disorder independent. This exponential behaviour is due to the formation of bound states in the disordered potential.
In Fig. 3, we consider the behaviour of the correlator in the strong disorder approximation, when the disorder and energy are varied. In Fig. 3(a), we keep the energy fixed and vary the disorder. As can be expected, there is a stronger decay when the disorder in the system is increased (smaller l). In Fig. 3(b), the disorder parameter is kept fixed, while the energy is varied. Once again, as expected, the decay increases as the energy is lowered. Thus the strong disorder approximation provides a bridge from the almost pure behaviour at high positive energies to the strongly localized behaviour at large negative energies due to the formation of bound states.
V. CONCLUSION
In this paper we introduced a functional integration formalism for studying disorder averaged observables that provides a complementary viewpoint to the standard field theoretic techniques used at present. The formalism is based on changing variables from the random potential describing the disordered system to a new set of random variables related to the logarithmic derivative of the wave-function. This allows a more direct computation of certain disordered averages, such as the density of states or observables that explicitly depend on the wave function. In particular we showed how to calculate the disorder averages of the density of states (16), the 2-point correlators of the wave-function (22,24), as well as the real part of the conductivity (30,31).
As an illustration of how the formalism works, we considered one dimensional Gaussian disordered systems. We were able to obtain results for the weak disorder and strong disorder limits for the density of states (48, 67), and the 2-point correlators (53, 56, 69). Unfortunately we were only able to obtain results for the conductivity in the weak disorder limit (61), as there is a complication in the perturbative expansion of the strong disorder limit when using the Hubbard-Stratonovitch transformation on (30), which we have as yet been unable to resolve. The formalism reproduced the results of Halperin 2 for the density of states, and by considering the 2-point correlator we showed that in the thermodynamic limit all states in one dimension are localised.
Future developments include the addition of a deterministic potential to the formalism, allowing magnetic interactions to be included. Also, the calculation in higher dimensions needs to be investigated further to see if signs of a metal-insulator transition can be found.
The differential equation in (A2) can be solved exactly using the method of generalized ladder operators 31,32 , yielding the spectrum (A3). Unfortunately, although we have the exact solution, we do not have the eigenfunctions in a closed form. This makes the calculation of the propagator difficult. We thus wish to make an approximation to (A2) that allows us to calculate the propagator. Since the tan² potential in (A2) contains m singularities in the interval [−L/2, L/2], the eigenfunctions must be zero where these singularities occur. The approximation that we make for (A2) must preserve this global property. The approximation that we make is to ignore the tan² term in (A2) and to change the boundary conditions so that the eigenfunctions have zeros at the correct intervals. The eigenvalue equation that we must solve is thus (A4), with the boundary condition Ψ(±L/2m) = 0. Excluding the constant mode as required by (37), the solutions of (A4) are given in (A5), where C is an energy-independent constant, but not necessarily independent of m. We determine C by requiring that the density of states (48a) be the correct solution in the pure limit. Obtaining the pure solution (up to a global normalization constant) requires that (A7) be a constant independent of m. However, since E = (mπ/L)² in this limit, and C is independent of E, we find that C must also be independent of m. Thus we see that C is a constant independent of both E and m.
In the negative energy region, φ_c is given by (41), where we are using the dilute gas approximation. Thus equation (A1) becomes (A8a), where the quasi-zero-mode φ̄₀ is given by (A8b). If we integrate (A8a), and use the condition that φ₀ is orthogonal to Ψ_n, we find that the constraint that the eigenfunction cannot contain a zero mode is satisfied for all eigenfunctions except the one corresponding to λ_n = 4|E|. The eigenfunction with this eigenvalue must be explicitly checked to see if the constraint is satisfied. We first calculate the eigenfunctions and eigenvalues of (A8) in the subinterval [−L/2, 0] or [0, L/2] using the method of generalized operators 31,32 or via the solution of a hypergeometric equation 34 , and find that the eigenvalues consist of two discrete eigenvalues, λ₀ = 0 and λ₁ = 3|E|, and a continuum of eigenvalues λ_k = (k² + 4)|E|. The corresponding eigenfunctions are given in (A9). We can now calculate the eigenfunctions for the full region [−L/2, L/2] by matching the eigenfunctions in (A9) at x = 0, i.e. requiring that Ψ₊(0) = Ψ₋(0) and dΨ₊/dx(0) = dΨ₋/dx(0). The resulting bound eigenfunctions are of the form Ψ₀ ∝ ±sech²(√|E|(x ± x₀)), while the scattering states are labelled by k, where k = 0 is excluded since the corresponding eigenfunction does not satisfy the constraint that there are no constant modes. Also, the periodicity requirements on the eigenfunctions imply that k must satisfy the quantization condition (A11). Note that there are degenerate solutions for each eigenvalue, since the eigenfunctions can be constructed as a symmetric or an anti-symmetric solution. As in the positive energy region, we do not have the eigenvalues, and consequently also not the eigenfunctions, in a closed form, which makes the calculation of the propagator and determinant difficult. We thus once again wish to make an approximation that will enable us to calculate the propagator and determinant. We note that the right hand side of (A11) is approximately unity, allowing us to obtain

k = 2nπ/(L√|E|), ∀n ∈ ℤ, n ≠ 0,   (A12)

for large values of k. The eigenfunctions Ψ_k can then be approximated by plane waves, given in (A13), with corresponding eigenvalues λ_n = (2nπ/L)² + 4|E|. This implies, as is to be expected, that all the higher lying scattering states can be very well approximated by free particle states and that this only breaks down for the lowest lying scattering states, where the potential is important. Note, however, that even the spectrum of the lower lying scattering states is well approximated by a free particle spectrum, as the right hand side of (A11) is approximately unity also in this case. The eigenfunctions are, however, distorted away from plane waves due to the presence of the potential. It is now possible to calculate the determinant using the above approximation, giving (A14), where we have once again used the identity 1.143.1 from Gradshteyn and Ryzhik 33 . Note that the quadratic term in (A14) is due to the doubly degenerate bound state (from the symmetric and anti-symmetric eigenfunctions), while the product over the continuum eigenvalues is raised to the fourth power, since there is a fourfold degeneracy in the continuum eigenfunctions (from the symmetric and anti-symmetric states, as well as the right moving and left moving plane waves).
In calculating the propagator, we neglect the contribution of the bound state Ψ₁, which gives only a small contribution to the propagator as it involves the product of two very well localized eigenfunctions evaluated at points which are well separated. Taking into account only the scattering states, (A13), we find for the propagator

∆^±(x, y) = (2/L) Σ_{n=1}^{∞} cos(2nπ(x − y)/L) / [(2nπ/L)² + 4|E|].   (A15) | 2018-12-06T10:52:19.568Z | 2001-09-07T00:00:00.000 | {
"year": 2001,
"sha1": "54d0109b655e5aa5e988bcda838a511dea35c88a",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b1cf5242e90312636d4372581323f702355a5122",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
54018677 | pes2o/s2orc | v3-fos-license | An Agent Based Model of Tularemia
Francisella tularensis is a formidable intracellular pathogen. Upon inhalation it leads to systemic disease (tularemia) with a high mortality rate. We sought here to develop a computational tool for infections with this class A bio-threat microbe, for which extensive datasets are not available to infer parameters that might determine the outcome of the infection. We present a two-compartment agent based model that simulates inhalational tularemia with subsequent dissemination to the liver and incorporates experimental data and validated general parameters of host defense mechanisms. This systems approach suggests that the initial number of macrophages, the probability of dissemination, and the initial clearance rate of bacteria correlate with the outcome of infections with Francisella. These findings underline the importance of early innate immune defense mechanisms in the prevention of tularemia.
Introduction
A wide variety of individual host cell defense mechanisms and bacterial virulence factors have been described, and their singular importance during infection has been experimentally established in many cases. Computer models of infectious diseases can provide a systems biology approach for the inference of critical parameters of host-pathogen interaction. However, few such models exist, and they are particularly scarce for infectious processes of the lung. Disease models for infections with Mycobacterium tuberculosis were able to identify chemokine secretion and the efficiency with which T cells activated macrophages as important parameters for the outcome of latent versus active infection [1]. Modeling of influenza infection demonstrated the importance of the spatial structure of the initially infected cells [2]. In another model the growth rate of the parasite was found to be the most important parameter for infection with Leishmania [3]. While model-building is frequently done to integrate large and complex existing datasets, we sought here to apply computer modeling to the infection with the class A bio-threat agent Francisella tularensis, which results in the systemic disease tularemia and for which limited experimental data are available. Francisella tularensis is a category A bio-threat agent because of its high infectivity, its high case-fatality rate when untreated, and because of previous attempts to 'weaponize' this microbe [4]. The ability of Francisella tularensis to cause disease is closely correlated with its ability to enter into and survive inside of what is currently thought to be its permissive host cell, the macrophage [5][6][7]. Inhalational tularemia progresses from necrotic pulmonary foci to systemic disease with the formation of ill-defined granulomas. During the initial phase of infection with Francisella via inhalation, bacteria accumulate in macrophages or leukocytes in pulmonary alveolar ducts. From the second day on, multiple foci of necrosis can be detected with significant macrophage infiltration. After the second day of infection, lesions can also be detected in liver and spleen [8]. Francisella can be found in hepatocytes, where it rapidly recruits macrophages, which lyse the hepatocytes with subsequent release of the bacteria into the extracellular space [9].
The goal was to investigate if a systems biology approach for the case of a rare disease such as tularemia could infer critical parameters for the outcome of this infection, which could then become the focus of further targeted investigations.
Mathematical model
In our model, the immune system is modeled by cells which are treated as C++ objects, and move according to a biased random walk in the direction of the chemokine gradient. This random walk is the solution of the chemotaxis PDEs [10], in which X_C(t) is the position of the cell, φ(X,t) is the concentration of chemoattractant, ξ is a white-noise variable, and α, λ, β, and D_1 are constants.
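As a sketch only, and assuming a saturating Keller-Segel-type chemotactic drift (the exact functional form is an assumption on our part, since the equation display is not reproduced above), the biased random walk can be written in Langevin form as

\frac{dX_C}{dt} = \frac{\alpha\,\nabla\varphi(X_C,t)}{\left(\beta+\varphi(X_C,t)\right)^{2}} + \sqrt{2D_1}\,\xi(t),

where the first term is the drift up the chemokine gradient and the second term is random motility; the constant λ appears in the companion chemokine diffusion rule described next.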
The implementation of our model is based on Segovia-Juarez et al. [1]. Chemokine diffusion is implemented with the following rule: in each micro-compartment (i,j) the chemokine concentration C_{i,j} diffuses to and from the four micro-compartments in its immediate (von Neumann) neighborhood:

C_{i,j}(t+Δt) = (1 − λ) C_{i,j}(t) + (λ/4) [C_{i−1,j}(t) + C_{i+1,j}(t) + C_{i,j−1}(t) + C_{i,j+1}(t)],

where λ is a diffusion constant: λ measures the proportion of C_{i,j} that diffuses out of micro-compartment (i,j) during each time-step. We calculate a value for λ from λ = 4 D_c Δt/Δx², where D_c is the diffusion constant for chemokine molecules in the diffusion PDE, Δx is the scale of spatial discretization, and Δt is the scale of time discretization. Using Δt = 0.1 min, Δx = 10⁻⁵ m, and a value of D_c = 10⁻⁷, we obtain a value of λ = 0.6.
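For illustration, a minimal C++ sketch of this discrete diffusion step follows (the model's cells are C++ objects; names such as Grid and diffuseChemokine are our assumptions, not the authors' code):

#include <array>

constexpr int N = 100;                       // lattice dimension (100 x 100)
using Grid = std::array<std::array<double, N>, N>;

void diffuseChemokine(Grid& C, double lambda) {   // e.g. lambda = 0.6
    Grid next = C;
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < N; ++j) {
            // Sum the four von Neumann neighbors; out-of-range neighbors
            // are treated as zero concentration (an absorbing boundary,
            // which is another assumption of this sketch).
            double in = 0.0;
            if (i > 0)     in += C[i - 1][j];
            if (i < N - 1) in += C[i + 1][j];
            if (j > 0)     in += C[i][j - 1];
            if (j < N - 1) in += C[i][j + 1];
            // A proportion lambda leaves (i,j); one quarter of each
            // neighbor's outflow arrives here.
            next[i][j] = (1.0 - lambda) * C[i][j] + (lambda / 4.0) * in;
        }
    }
    C = next;
}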
Chemokine also degrades: in each time-step, a certain proportion δ decays, as in C_{i,j}(t+Δt) = (1 − δ) C_{i,j}(t). Extracellular bacteria B_E replicate with an estimated doubling time of 3-5 h. We assume an upper bound on B_E of K_BE = 400 within each micro-compartment, which is twice the number for M. tuberculosis, since the Francisella tularensis LVS bacterium is half the size of M. tuberculosis. Extracellular bacterial replication follows logistic growth with respect to this upper bound, B_E(t+Δt) = B_E(t) + α_BE B_E(t) (1 − B_E(t)/K_BE), where α_BE is the extracellular growth rate. In addition, we have each bacterium secrete 1 unit of chemokine per time-step.
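These per-compartment updates translate directly into code; a sketch under the same naming assumptions:

// Sketch of per-time-step chemokine decay and logistic growth of
// extracellular bacteria in one micro-compartment. Parameter names
// (delta, alpha_BE) follow the text; the function layout is ours.
double decayChemokine(double C, double delta) {
    return (1.0 - delta) * C;       // a proportion delta decays per step
}

double growExtracellular(double B_E, double alpha_BE) {
    const double K_BE = 400.0;      // carrying capacity per compartment
    return B_E + alpha_BE * B_E * (1.0 - B_E / K_BE);   // logistic update
}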
Macrophages are represented as discrete agents that reside on the lattice. At most one macrophage can occupy a given microcompartment. Macrophages have the following attributes: position, age, the number of intracellular bacteria (B I ), and the state of the macrophage. B I is a non-negative integer such that 0 ≤ B I < K BI where K BI is a parameter representing the average intracellular bacterial carrying capacity of macrophages. The state is defined from one of the following: resting (M R ), infected (M I ), chronically infected (M C ), or activated (M A ). Macrophages move according to a random walk biased towards neighboring micro compartments with higher concentrations of chemokine.
The rules for resting macrophages are as follows. We assume that if there are a small number N_RK = 2 of bacteria in the same micro-compartment as a resting macrophage, then the macrophage phagocytoses and kills those bacteria and remains in the resting state. If there are more than N_RK bacteria present, then there is some small probability P_K that the macrophage still succeeds in killing N_RK intracellular bacteria. If it cannot, it becomes infected (M_I) with an intracellular level of N_RK.
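A sketch of this rule as it might appear in the C++ implementation (the state enumeration and struct are illustrative assumptions):

#include <random>

enum class MacState { Resting, Infected, ChronicallyInfected, Activated };

struct Macrophage {
    MacState state = MacState::Resting;
    double B_I = 0.0;               // intracellular bacterial load
};

void restingRule(Macrophage& m, double& B_E, double P_K, std::mt19937& rng) {
    const double N_RK = 2.0;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    if (B_E <= N_RK) {
        B_E = 0.0;                  // phagocytose and kill all bacteria
    } else if (u(rng) < P_K) {
        B_E -= N_RK;                // still succeeds in killing N_RK bacteria
    } else {
        B_E -= N_RK;                // uptake without killing:
        m.state = MacState::Infected;
        m.B_I = N_RK;               // infected with intracellular level N_RK
    }
}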
The rules for infected macrophages are as follows. At every time-step, each infected macrophage releases a given amount C_I = 5000 units of chemokine into its micro-compartment; the model is invariant to changes in C_I. Intracellular bacteria replicate within infected macrophages. The discrete intracellular growth rate α_BI1 is estimated to be between 0.002 and 0.006/min, giving the update B_I(t+Δt) = B_I(t) + α_BI1 B_I(t). If the intracellular bacterial load exceeds a threshold N_C = 10, the macrophage becomes chronically infected.
The rules for chronically infected macrophages are as follows. A chronically infected macrophage secretes chemokine into its micro-compartment. We assume that within chronically infected macrophages, intracellular bacteria replicate logistically with respect to the bacterial carrying capacity of macrophages, K_BI: B_I(t+Δt) = B_I(t) + α_BI2 B_I(t) (1 − B_I(t)/K_BI). The growth rate α_BI2 is estimated to be between 0.006/min and 0.008/min. We use a value of K_BI = 50. If the intracellular bacterial load of a chronically infected macrophage exceeds the bacterial carrying capacity of macrophages (K_BI), the macrophage bursts and its intracellular bacteria are released.
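Continuing the sketch above (reusing the illustrative Macrophage type), the intracellular growth and bursting rules might look like this; thresholds follow the text (N_C = 10, K_BI = 50), while the function structure and the exact burst trigger are assumptions:

void intracellularStep(Macrophage& m, double& B_E_here,
                       double alpha_BI1, double alpha_BI2) {
    const double N_C = 10.0, K_BI = 50.0;
    if (m.state == MacState::Infected) {
        m.B_I += alpha_BI1 * m.B_I;                          // exponential
        if (m.B_I > N_C) m.state = MacState::ChronicallyInfected;
    } else if (m.state == MacState::ChronicallyInfected) {
        m.B_I += alpha_BI2 * m.B_I * (1.0 - m.B_I / K_BI);   // logistic
        if (m.B_I >= 0.99 * K_BI) {
            // Burst when the load effectively reaches capacity (logistic
            // growth only approaches K_BI; the exact trigger in the model
            // is not specified). Released bacteria become extracellular;
            // the macrophage is removed from the lattice, and a
            // per-compartment burst counter (not shown) feeds the
            // necrosis rule N_necr = 8 described below.
            B_E_here += m.B_I;
            m.B_I = 0.0;
        }
    }
}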
We model necrosis of lung tissue by counting how many times a chronically infected macrophage bursts. A micro-compartment (i,j) is declared to be necrotic if the number of burstings exceeds N_necr; we use a value of N_necr = 8.
We model dissemination of macrophages from the lung to the liver as follows. If macrophages move to a sink compartment, they will move to the liver with a probability M_diss between 1% and 20%. Because macrophages disseminate to the liver, they are replenished in both the liver and the lung according to the following rule: macrophages are recruited with a probability of M_recr · max(1, C_{i,j} + 0.001), where C_{i,j} is the concentration of chemokine at the source location with coordinates i and j at which the macrophage is recruited. We estimate M_recr for both the lung and liver to be between 0.2 and 0.7.
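A sketch of the dissemination and recruitment draws; capping the recruitment probability at 1 is our assumption, as the formula above was reconstructed from a garbled source:

#include <algorithm>
#include <random>

bool disseminates(double M_diss, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < M_diss;   // if true, switch compartment index 0 -> 1
}

bool recruitAtSource(double M_recr, double C_ij, std::mt19937& rng) {
    double p = std::min(1.0, M_recr * std::max(1.0, C_ij + 0.001));
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < p;        // place a new resting macrophage if true
}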
Because the macrophages become infected in 1 minute (10 time steps) after the inoculation, and the movement of macrophages is updated every 100 time steps, we assume that there are initially three infected macrophages which are placed 1 μm away from a sink compartment, as infected macrophages can enter the bloodstream within 1 minute.
Infection assays
Francisella tularensis subspecies holarctica vaccine strain (F. tularensis LVS, army lot 11) was generously provided to us by Dr. Karen Elkins (FDA). Francisella was grown on chocolate II agar enriched with IsoVitaleX (BD Biosciences, San Jose, CA) for 40-48 hrs at 37°C. As liquid medium we used Mueller-Hinton broth supplemented with IsoVitaleX. RAW 264.7 murine macrophages and A549 type II alveolar epithelial cells were obtained from ATCC. Dulbecco's Modification of Eagle's Medium (DMEM; Cellgro) was supplemented with 10% fetal bovine serum (Hyclone, not heat-inactivated) and penicillin (100 IU/ml) and streptomycin (100 μg/ml). When cells were used for Francisella infection assays, no antibiotics were added 24 h prior to infection. Cells were grown at 37°C and 5% CO2. Infection assays were performed as described in Tamilselvam and Daefler [11].
Agent-based model for infection with Francisella
We base our model on a tested agent-based model of mycobacterial infection of the lung [1,12]. This model, which approximates the solution of a system of Partial Differential Equations (PDE), comprises the basic innate immune defense mechanisms during infection and goes beyond the purely deterministic attempt of modeling with ordinary differential equations (ODE) [1]. We modified this agent-based model as described below and implemented it with experimental data from our laboratory and from the literature for infection with Francisella. Our model uses parameters of the human immune system and assumes infection with Francisella tularensis subsp. tularensis. When appropriate, data from infection of mice with Francisella tularensis subspecies holarctica vaccine strain (F. tularensis LVS) were incorporated.
Since infection with Francisella represents a systemic disease with dissemination to the liver, we designed a novel two-compartment agent-based model of Francisella infecting lung and liver. Recruitment of macrophages occurs at "source" micro-compartments located 400 μm away from the four corners of the lattice. Macrophages enter the lattice with a probability M_recr at these locations. These source compartments represent locations where blood vessels enter the lung tissue through which new macrophages arrive at the infection site. In order to model dissemination into the bloodstream, and then into the liver, we designate certain compartments in the lung, 200 μm away from the center of the lattice, as "sink" compartments. These sink compartments represent locations where blood vessels enter the lung tissue through which macrophages leave the infection site. Macrophages leave the lung with probability M_diss. We also introduced replenishment of macrophages in the lung and liver in response to chemokines and 'chemoattractants' elaborated by bacteria.
An agent-based model is a generalization of a "lattice-gas" cellular automaton, in which particles move about and interact in a prescribed fashion [13]. In an agent-based model, elements of the system are represented as discrete agents, which interact with each other and with the environment according to sets of rules. In our model, the immune system is modeled by cells that are treated as C++ objects and move according to a biased random walk in the direction of the chemokine gradient. This random walk is the solution of the chemotaxis PDEs (equations 1 and 2 in Materials and Methods) [10,14]. Macrophages and T-cells are modeled individually as cells in a square region of organ tissue with source and sink compartments where they enter and leave the lung. Extracellular and intracellular bacteria are modeled as continuous variables. Rules describe events such as the recruitment, infection, and bursting of macrophages, the killing of bacteria by macrophages, and the activation of macrophages by T-cells. Our model is a two-component model, with one component being the lung and the second component being the liver. Marino and Kirschner [15] have built a two-component model of M. tuberculosis infection of the lung and lymph nodes. We model this by making two identical copies of the same model with different parameters, corresponding to the parameters of the lung and liver, denoting the lung with the index 0 and the liver with the index 1. The extracellular growth rate of the bacteria in the liver and the recruitment probability of macrophages in the liver are different from those in the lung; the rest remain the same. We also model the same square area of liver and lung tissue. Dissemination is modeled by changing the component from 0 to 1 with a probability of M_diss when a macrophage arrives at a "sink" compartment. Dissemination to the liver takes one time-step, or 6 seconds. We assume there are no macrophages in the liver initially.
We model a 2 mm square of organ tissue (liver or lung) as a 100×100 lattice of micro-compartments. Each micro-compartment can simultaneously hold at most one macrophage, one T-cell, chemokine, and bacteria. The entities of the model consist of objects, which are macrophages and T-cells, and continuous variables representing chemokine and bacteria. The chemotaxis of macrophages and T-cells is modeled using a biased random walk for each cell, biased by the chemokine gradient. Infected macrophages and bacteria secrete chemokine, which is a composite of all the chemoattractants from the bacteria and cytokines from the macrophages. Chemokine diffuses and degrades according to equation 3.
Time is discrete in this model; each time-step corresponds to 6 seconds. Macrophages are modeled as objects with the attributes of position, age, the number of intracellular bacteria, and the state of the macrophage. The state of the macrophage is either resting (M_R), infected (M_I), chronically infected (M_C), or activated (M_A). Infected macrophages release chemokine at each time-step. If the intracellular bacterial load exceeds a certain threshold, the macrophage becomes chronically infected. Chronically infected macrophages continue to secrete chemokine. If the intracellular load of a chronically infected macrophage exceeds the bacterial carrying capacity, the macrophage bursts and its intracellular bacteria are released. Necrosis of lung tissue results from this bursting of macrophages. A micro-compartment (i,j) is declared to be necrotic if the number of burstings exceeds N_necr = 8, as in Segovia-Juarez et al. [1]. Intracellular bacteria also follow logistic growth, according to equation 7, where B_I is the continuous variable for intracellular bacteria and K_BI is the upper bound on the number of intracellular bacteria, which is also determined by the size of the bacterium. For Mycobacterium tuberculosis, K_BI is 20; for Francisella tularensis, K_BI is 50.
Some parameters of the immune system are fixed. The speed of macrophages has been found by Webb et al. [16] to be 1 μm/min for resting macrophages and 0.0007 μm/min for infected macrophages. The lifespan of a resting macrophage is fixed at 100 days [17]. We also assume that the number of bacteria killed by a resting macrophage is 2. Other parameters of the immune system need to be estimated. The chemokine diffusion coefficient and the chemokine degradation coefficient were estimated between 0.5 and 0.8 per time-step and between 0.000288 and 0.0011 per time-step, respectively [18]. The initial number of macrophages was estimated from Mercer et al. [19] and Stone et al. [20] to be between 40 and 400. The dissemination probability of a macrophage from the lung to another organ is estimated to be between 1% and 20%. The probability of a resting macrophage killing 2 bacteria is estimated between 1% and 10%.
Determination of Francisella's growth parameters
Growth rates for Francisella tularensis subspecies holarctica vaccine strain (F. tularensis LVS, army lot 11) were determined in a model infection of a murine macrophage cell line (RAW 264.7) and human type II alveolar epithelial cells (A549). We determined the intracellular growth rates for these models as between 0.002/min and 0.006/min for infected, and between 0.006/min and 0.008/min for chronically infected cells. While the initial rate of infection varied, no differences in growth rates were found between the RAW 264.7 and A549 cell lines.
These findings are in agreement with results described in the literature, where Francisella follows logistic growth with doubling time of 3-5 hours [9,21] for the lung and liver, which gives calculated growth constants of 0.00231/min and 0.00785/min respectively.
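The conversion from a doubling time T_d to a growth constant is the standard exponential-growth relation; for the lung value, for instance,

\alpha = \frac{\ln 2}{T_d}, \qquad T_d = 5\ \mathrm{h} = 300\ \mathrm{min} \;\Rightarrow\; \alpha \approx \frac{0.693}{300} \approx 0.00231\ \mathrm{min}^{-1}.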
Parameters that determine the outcome of infection
The model as described above was used for simulation of infection with Francisella for a period of 5 days. Nine parameters were allowed to vary in order to determine how they would correlate with the outcome of infection (measured here as an increase or decrease in the number of extracellular bacteria): the diffusion coefficient λ, the degradation coefficient δ, the intracellular growth rate for infected macrophages α_BI1, the intracellular growth rate for chronically infected macrophages α_BI2, the initial number of macrophages M_init, the recruitment probability of macrophages to the lung M_recr, the recruitment probability of macrophages to the liver M_recrL, the dissemination probability M_diss, and the probability of a resting macrophage killing bacteria P_K.
We performed Latin Hypercube Sampling [1,22,23] on these nine parameters of the model. The Partial Rank Correlation Coefficient (PRCC) with the number of extracellular bacteria in the liver and lung was calculated, as shown in Table 1. From this we see that the parameter M_init, the initial number of macrophages, is most strongly correlated with high numbers of bacteria in lung and liver. M_diss, the dissemination probability, is negatively correlated with the number of bacteria in the liver, but this seems largely due to the stochastic nature of the model, where a dissemination probability higher than 10% can result in no or little liver dissemination. With a higher dissemination probability, macrophages migrate to the liver rather than stay in the lung and hence do not become infected.
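As an illustration of the sampling scheme (not the authors' implementation), a minimal Latin Hypercube sampler in C++, with each parameter range split into n equal strata that are independently permuted per parameter:

#include <algorithm>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

std::vector<std::vector<double>> latinHypercube(
    const std::vector<std::pair<double, double>>& ranges,  // {lo, hi} per parameter
    int n, std::mt19937& rng) {
    const int k = static_cast<int>(ranges.size());
    std::vector<std::vector<double>> samples(n, std::vector<double>(k));
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (int p = 0; p < k; ++p) {
        std::vector<int> strata(n);
        std::iota(strata.begin(), strata.end(), 0);
        std::shuffle(strata.begin(), strata.end(), rng);   // permute strata
        const double width = (ranges[p].second - ranges[p].first) / n;
        for (int s = 0; s < n; ++s) {
            // one uniform draw inside the stratum assigned to sample s
            samples[s][p] = ranges[p].first + (strata[s] + u(rng)) * width;
        }
    }
    return samples;
}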
The intracellular growth rate for infected macrophages also has a significant correlation with the number of extracellular bacteria in the liver and the lung, but the intracellular growth rate for chronically infected macrophages is not as significantly correlated. This suggests that slowing the growth of Francisella in the early phases of infection could have a significant impact on the outcome of infection.
Recruitment of macrophages correlates with dissemination of infection
We further investigated the effect of the dissemination probability on the number of bacteria in liver and lung when the initial number of macrophages was kept constant. Simulations were run and each individual outcome was plotted as the number of bacteria in the lung (Figure 1A) or liver (Figure 1B) after 5 days versus dissemination probability. The number of bacteria in the lung decreases monotonically with increasing dissemination probability. This levels off at a probability of 10%.
The number of extracellular bacteria in the liver falls within a range for dissemination probabilities between 1% and 10%, and levels off at roughly a dissemination probability of 5%. Above a dissemination probability of 10% there is some noise, where there is little or no dissemination to the liver (Figure 1B). The reason for this is that at higher rates of dissemination, more macrophages leave the lung and cannot become infected. To reduce this noise, we increased the chemoattractant secreted by the bacteria to 5000 units per bacterium. Increased chemoattractant leads to more liver dissemination at M_diss above 10% (Figure 2C).
These findings suggest that the likelihood with which macrophages are recruited to the lung and leave this compartment are important factors in the pathogenesis of systemic disease. Very little quantitative data are available about the true dynamics of macrophage recruitment and release in the lung, which are very difficult, if not impossible, to measure. Our simulations suggest, however, that a balance of recruitment and release plays an important role and should be further experimentally investigated.
Initial number of macrophages is significant
Since the outcome of infection in terms of bacteria in lung and liver appears to level off at a dissemination probability of 5%, we investigated the influence of the initial number of macrophages on bacterial proliferation. At a constant dissemination probability of 5% the simulation was run 10 times for each set of parameters with the initial number of macrophages between 40 and 400.
Bacterial growth in the lung (Figure 2A) or in the liver (Figure 2B) was plotted against the initial number of macrophages in the lung for each event. The number of extracellular bacteria in liver and lung increases monotonically with the initial number of macrophages. Due to the probabilistic nature of the model, this number falls within a range of values, with error bars of size between 10109.32 and 46139 in either direction. There is no clearance of bacteria in either the lung or the liver.
We further simulated the time-course of infection over a five-day period for 40, 100, and 400 initial macrophages with a dissemination probability of 5% (Figure 3). One can observe that in spite of the stochastic nature of the model, there is not a large amount of variability. With an initial number of 40 macrophages, there are more bacteria in the lung than in the liver (Figure 3A). As the initial number of macrophages increases to 100, the number of extracellular bacteria in the liver eventually increases above the number of extracellular bacteria in the lung (Figure 3B). With 400 initial macrophages, there are more bacteria in the liver than in the lung (Figure 3C). There is also some stochastic variability in this simulation, but this variability is consistent with the fact that the numbers of extracellular bacteria in the liver and lung fall within a range of values, as described above.
Probability of clearing infection
With the initial number of macrophages and their dissemination dynamics being the most important factors determining the proliferation of Francisella, we reasoned that the probability with which a macrophage might clear its intracellular infection might be pivotal for the overall outcome. We introduced a probability of activation for resting macrophages (P_act) to see if the infection clears. P_act was varied between 1% and 100%. Each time the movement of a resting macrophage was updated, the macrophage could be activated with this probability. We then simulated the infection with 100 initial macrophages. We found that a probability greater than 4% leads to clearance within 10 days. We then set the probability of a resting macrophage killing 2 bacteria at 10% and placed 5 bacteria in the center of the grid instead of infected macrophages. The macrophages could then either kill the bacteria or become infected. We ran 10 runs for each value of the initial number of macrophages. We found that between 250 and 400 initial macrophages, there were runs in which all the bacteria were phagocytosed, and the infection cleared immediately. The probability of clearance this way was 30% with 250 initial macrophages, 10% with 300, 30% with 350, and 30% with 400. A representative simulation is shown in Figure 4.
Comparison with M. tuberculosis infection
The strong positive correlation of M_init with the number of extracellular bacteria in the liver and in the lung contrasts with infections with M. tuberculosis, where M_init is not significantly correlated with this number. To further investigate this difference, we ran our simulation with the parameters for Mycobacterium tuberculosis with a dissemination probability of 5% and found that there was no liver dissemination, and that the infection eventually cleared out of the lung (Figure 5A). To further test our model, we eliminated the replenishment of macrophages. In this case, there is also no dissemination to the liver, as the infection cannot spread to sink micro-compartments within 10 days (Figure 5B).

T-cells then arrive and activate most of the infected macrophages, which leads to a "wall" forming around the infection, where all of the macrophages surrounding the infection are either resting or activated. As the graph shows, the infection eventually clears and no infected macrophages can enter "sink" compartments to disseminate to the liver. Only activated macrophages are able to disseminate to the liver, since the macrophages surrounding the extracellular bacteria all become activated before they can get to the "sink" compartments. In the case of F. tularensis, all of the macrophages in the granuloma surrounding the extracellular bacteria would be infected.
Discussion
When Francisella is inhaled it rapidly leads to systemic disease with a high mortality rate. We present here an agent-based model that simulates inhalational tularemia with subsequent dissemination to the liver as a two-compartment system. We used statistical analysis from multiple simulations to infer factors that influence the outcome of the infection. The current model suggests that the initial number of macrophages, the probability of dissemination, and the initial clearance rate of bacteria by innate immune defense mechanisms correlate with the outcome of infection with Francisella.
Agent-based modeling of infectious diseases has been employed previously for tuberculosis and leishmaniasis [1,3]. The immunological parameters and host defense mechanisms that are the basis for our model are the same as those used in these validated agent-based models. However, when we introduce Francisella-specific attributes, we identify different parameters that are crucial for the outcome of the disease process. The model presented here points to a preeminent role for early defense mechanisms (the number of macrophages recruited and the rate of initial clearance) in thwarting a successful infection. This contrasts with infections with M. tuberculosis, where mounting a delayed specific immune response is the more important host defense and where variations in the number of macrophages recruited do not result in clearance of the infection. A novel aspect of our model is the two-compartment system, which demonstrates the importance of dissemination as a measure to evade clearance mechanisms.
The power of this modeling approach is illustrated by the fact that, by integrating pathogen-specific parameters such as growth rate, intracellular replication rates, and immuno-stimulatory effects as measured by the release of chemoattractants, disease-specific outcomes are modeled that correlate with experimental evidence. This also demonstrates how relatively simple parameters are key determinants in host-pathogen interactions and can be accurately assessed in an agent-based model. Even though parameters such as growth rate might be simple, they are the outcome of complex interactions of host and pathogen metabolism, elaboration of virulence factors, and host defense pathways. However, the power of our model is to allow variations of such summary and individual parameters, some of which are difficult or impossible to determine, and then correlate them with the outcome of infection in order to infer the dominating parameters for pathogenesis. This is of particular importance for infections with a less well studied microbe such as Francisella. Our modeling approach might thus delineate suitable parameters in lieu of extensive experimentation that can then be further investigated.
"year": 2013,
"sha1": "65e89def1a97ff3945e60dfffcdae57b069a07e0",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/an-agent-based-model-of-tularemia-2153-0602.1000125.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "440c5868699c60d732acd0d8746b661be24bcf19",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
Neo-Extractivism and Formalization of Artisanal and Small-Scale Mining—The Case of the Santurbán Moorland (Colombia)
The purpose of this paper is to analyze the negative impact of neo-extractivism as a driver of the mining industry and the components that prevent the formalization of ASM as a model of rural economic development in Vetas, California, Suratá, and Matanza, municipalities adjacent to the Santurbán moorland (Colombia). A qualitative methodology with ethnographic design was followed, implementing the NVivo software (v12) for the analysis of information. The results show that the neo-extractivist model stands in opposition to the formalization of ASM in the territory, which prevents the prioritization of rural economic development by regional and local governments. ASM is at a disadvantage with respect to the large-scale extractive industry, although this activity has been carried out in this region for centuries; it is also under threat as a result of a delimitation process that seeks the protection of the moor ecosystem, putting at risk the economic and socio-environmental stability of the communities that depend on this activity.
Introduction
Countries in the global south are characterized by having the greatest mining activity, resulting from the wealth of raw materials such as gold, nickel, coal, diamonds, bauxite, among others [1,2]. Latin America (LA) has stood out as a territory for metal exploration and exploitation, and its exports have concentrated on primary products and manufacture based on natural resources [3,4]. In this way, the 'boom of commodities' at the beginning of this century strongly influenced the economic context of LA countries, giving rise to a leading role of the state [5].
According to Gudynas [6], the term extractivism dates back to the 1970s; it originally described the evolution of mining and oil exports and has come to cover the variety of natural resources that endure systematic exploitation by transnational capitals [7]. Extractivism is defined as a "particular case of extraction of natural resources, characterized by an extraction in large volumes or under high intensity procedures, which are essentially export-oriented as raw materials or with minimal processing" [8] (p. 80). In this context, the expansion of extractivism has contributed to ecological pollution and the interest of governments in granting titles to develop extractive projects [9].
Between 2000 and 2010, LA economies increased at an average rate of 5% per year. However, between 2011 and 2015 this decreased considerably due to the reduced demand for commodities, which led to the slowdown of the economy [10], especially in countries with significant extractive sectors where, despite the crisis, governments have persisted in extractive activities as a model of development at "all costs" [5]. In extractivism, the state assumes a passive role, which restricts the guarantees for basic conditions, such as tax and environmental labor flexibility, capital movements, among others [11].

Moor ecosystems are located at heights between 3100 and 4000 m above sea level. They receive sunlight all year round in unique quality and quantity thanks to their location in the equatorial zone, which is beneficial to develop their vegetation. They are endowed with water regulation and, given their low temperatures, evaporation is reduced, and water is stored in their plants [28]. The Santurbán moorland is recognized as the birthplace of several rivers, and is home to periglacial wetlands, lagoons, and river basins [29].
The moorland is characterized by geological formations given by surface and underground water dynamics, which supply water to Bucaramanga and the province of Soto Norte (California, Charta, Matanza, Suratá, Tona, and Vetas) [30]. In turn, it includes the mining district of Vetas-California, of high economic importance for Santander due to the abundance of "gold and silver strands, associated with iron, lead, zinc, copper and sulfosalt or non-oxidized sulphur minerals" [29] (p. 42).
The Vetas-California district has stood out due to the ASM exploitation of gold since colonial times [31], an activity that has been preserved over time as an inherited and consolidated activity in family businesses [32]. ASM refers to the extraction and processing of materials with reduced technical skills, extensive labor, and it is regularly produced in informal settings [33].
Large-scale mining (LSM) is a derivation of neo-extractivism that promotes the entry of transnational capital to emerging economies; therefore, there is a political legitimation of mega-mining, legal flexibility, and reduced consequences related to its socio-environmental effects on the territories [34,35]. In this context, ASM and LSM are antagonistic, since governments have sought to eradicate the former to introduce transnational companies that generate greater fiscal profitability [36]; consequently, ASM is usually associated with poverty, human rights violations, child labor, the use of mercury and cyanide, land degradation, negative impacts on the ecosystem, and socio-environmental conflict [37].
For this research, the socio-environmental conflict linked to neo-extractivism and ASM in the territory of the Santurbán moorland is emphasized. In this regard, the various elements that make up the dynamics of the conflict are described. First, the Colombian government has bet on the immersion of foreign companies in the vicinity of the Complex, through various attempts to obtain the environmental operating license by Canadian and Arab organizations in the last ten years [32,38]. However, through various urban collective mobilizations, the projects have been revoked due to the negative environmental impacts they might cause in the moorland ecosystem and the water resource that feeds neighbouring populations [39].
Second, although the government is not interested in granting mining titles to the communities to develop ASM, it has suggested its replacement, while increasing the requirements to formalize this activity; a situation that has caused greater disagreement among the locals [40]. Third, there is a clash of interests between the urban and rural population, since for urban inhabitants the extraction of gold and the conservation of the moorland are incompatible, not considering the sustainable practices in the care of land and water that miners and peasants perform daily, from the logic of productivity and ecological culture [38,41].
The three aforementioned elements reveal a breaking point where rural communities developing ASM in the moorlands may be allowed to formalize their mining activities, facing a challenge to preserve both ecosystem sustainability and their quality of life. The purpose of this paper is to analyze the negative impact of neo-extractivism as a driver of the mining industry and the components that prevent the formalization of ASM as a model of rural economic development in Vetas, California, Suratá, and Matanza. This paper intends to provide a glimpse of the importance of the problem in question, since in Colombia the needs and perceptions of rural communities that depend on ASM in the Complex are usually ignored; therefore, alternative solutions are proposed from a socio-environmental perspective.
Neo-Extractivism in Latin America
There are different perspectives and interpretations on the implications of neo-extractivism in LA. According to Svampa [42], this model originated in the transition from the Washington Consensus to the Consensus of the Commodities; the former emerged as a referent for economic policy between the 1980s and 1990s, from a vision of globalization, growth, macroeconomic stability, and poverty reduction [43,44]. The latter is characterized by the export of primary goods to industrialized countries on a large scale, which implies the acceptance of new environmental and political inequalities by LA countries [21]. Consequently, neo-extractivism "installs a vertical dynamic that barges in the territories and as it moves in, it destructures regional economies, destroying biodiversity and exacerbating in a dangerous way the process of land grabbing, as rural communities are expelled or displaced" [45] (p. 34).
However, Gudynas [46] affirms that the state plays an active role in neo-extractivism from a progressive perspective, as an opportunity to fight poverty and promote development with the surplus generated by the extractive sector; however, the socio-environmental impacts and other contradictions of the model are dismissed [11]. In countries such as Bolivia, Ecuador, and Venezuela, neo-extractivism has been a protagonist due to the participation of the state through state-owned or mixed companies and this activity has been legitimized as a synonym for progress [47]. In the Colombian setting, investments in the primary sector have been aimed at facilitating the immersion of private capital. Colombia has witnessed an unbridled mining extractivism marked by a discourse on productivity, efficiency, and technification [48]. Thus, in Colombia, the neo-extractivist component has been adopted with foreign investment, through binational agreements and the protection of extractive companies in the execution of their projects [49].
According to Svampa [13], one of the particular features of neo-extractivism in LA is the development of capital-intensive activities, but not intensive in terms of labor; in this regard, it is evident that neo-extractivism does not substantially change the structure of accumulation and appropriation of nature, while socio-environmental conflicts and hegemonic extractivist policies prevail [50]. Therefore, it is pertinent to ask one of the questions posed by Görg et al. [51]: "why could governments not (and in many cases did not even want to) reduce, in a historically exceptional situation, the dependence on the world market and promote certain forms of industrialization and an internal market?" (p. 11). This assumes that extractivism has been widely attractive to global economies in the LA continent since the colonial era, and it has been presented as a successful development model that in reality leads to soil depletion, biodiversity decline, and water pollution [52].
Artisanal and Small-Scale Mining
Gold ASM may represent between 10 and 15% of the annual production worldwide [53]; it is also characterized by being developed in the informal sector as a precarious practice of relatively sustainable livelihood over time, implementing rudimentary techniques and low levels of technology [54]. These techniques consist of manual tools that require physical effort from miners, having an impact on their health [55]. It is generally conducted in areas with ancient mining explorations and near rivers [37]; hence, it has become a low-skilled employment source, despite being a platform for the creation of wealth; academics highlight the fact that mining groups do not have enough tools to turn those revenues into savings or investments that would gradually improve their quality of life [56]. However, for communities, ASM is valued as a way to reduce poverty and escape from scarcity [54,57].
According to Marston [58], ASM has increased significantly in recent decades; hence the need for regulation of mining activities by regional and local governments, in order to maximize its benefits and address the socio-environmental problems that lead to its existence [53]. However, as mentioned earlier, tax and legislative policies for the development of mining have favored the exploration and exploitation of gold by large companies, with reduced prioritization of projects for the formalization and regulation of ASM [59]; consequently, this type of mining poses various challenges in the framework of sustainable development, in the political, economic, social, and environmental spheres [37].
In LA countries, it is usual for ASM to be found in protected areas with insufficient regulation of its operations [60], especially in Ecuador, Perú, and Colombia, where this activity depends on the moors for water capture and regulation, with little consideration for the strategic value of these protected ecosystems, as opposed to mining expansion in the south of the LA continent [61]. In Colombia, the ombudsman's office [62] has stated that the domestic market for minerals produced with ASM is limited and the volumes that are marketed do not amount to an economically significant market; therefore, this mode of exploitation involves high costs that miners try to reduce by means of empirical production methods and a high environmental impact. In this regard, controversies arise over the formalization of ASM in Colombia, while the employment opportunities generated are not considered formal due to poor health and safety conditions, along with the fact that the state does not grant access to the fiscal resources generated by this activity [63].
It is important to note that gold mining is part of the structure and tradition inherited for generations in rural communities inhabiting the Complex. Within this framework, the formalization of mining activities in the region must be oriented by the biological and cultural value of the ecosystem, together with the environmental services it provides as the source of water resources for rural and urban areas [64]. It is necessary to recognize that in agreement with the neo-extractivist model, states deny the existence of mining traditions by favoring large corporations [59,65]; consequently, in the attempts to formalize themselves, the mining communities have found little support to implement clean technologies without the use of mercury [60].
It should be noted that when reviewing the scientific literature on the development of ASM in the Santurbán moorland, little information was found on the gold mining activities developed by mining communities in the Complex. This may be explained because, for the most part, the studies focus on the moorland ecosystem, regarding the socio-environmental conflicts that arise from LSM and the environmental impact on the water resources of rural and urban populations surrounding the Complex, as discussion axes on the disputes between gold and water [66][67][68]. Likewise, other analyses are oriented to the use of mercury and arsenic in the practice of ASM [69,70].
Method
This research refers to a qualitative methodology to recognize subjects from the immediate reality of the phenomenon to be studied, with a holistic orientation [71]. From an ethnographic design, the perspectives of the participants regarding the problem in question were understood, as they need to be interpreted from their own point of view [72]; in this way, lifestyles were perceived in the context of ASM, through the interaction of the research group and the community [73].
Population and Sample
According to the most recent data, the total population of the territory surrounding the Santurbán moorland is distributed as follows: Vetas (1762 inhabitants), California (1832 inhabitants), Suratá (3520 inhabitants), and Matanza (4499 inhabitants) [74]. In order to establish the widest number of contacts for immersion in the field of study, through fieldwork conducted by and for the people as an ethnographic principle [75], five inclusion criteria were defined to select participants (Table 1). The population corresponds to miners and peasants, without gender preference, between 18 and 60 years of age. People should be related to the ancestral exercise of the activity of ASM and/or the combination of agriculture or livestock farming to complement their income; likewise, the time of residence in any of the municipalities (Vetas, California, Suratá, or Matanza) should be in the range of 20 to 35 years, since the territory under study has a mining tradition and coexistence with the moorland ecosystem has been present for more than 200 years [76].
The fifth criterion corresponds to community participation, consolidated in collectives or representatives that favor the search for solutions to the socio-environmental conflict from proactive intervention in the process of delimitation of the Santurbán moorland, which will be discussed in the following section. When determining 'axis variables', statistical representativeness was not sought; however, elements were delimited that allowed the conformation of a homogeneous sample, characterized by shared experiences and knowledge that are also culturally validated [77]. This was also because the ethnographic design transcends representativeness to focus on the significance of the population and sample.
One of the researchers established a preliminary approach with a representative of the Community Action Board to initiate the first approaches. Later on, that participant contacted some miners who inhabit the territory. The sample was established in a chain (snowball), contacting key participants who contacted other inhabitants to have them join gradually [78]. The research team determined an intentional sample of 80 people.
Data Collection and Analysis
The following data collection techniques were used: participant observation, focus groups, and semi-structured interviews. Participant observation is distinctive of ethnographic work. It consists of the interaction between the researcher and the research subject to understand the phenomenon of analysis [79], as well as the social context that constitutes the object of study [75]. Approaches were made to the ancestral practice of ASM and to the lifestyle of the inhabitants of the moorland in order to achieve a subjective interpretation of the situation from the individual to the collective [80].
While the topic of analysis generates various debates regarding the negative impact of neo-extractivism on the practice of ASM, the focus groups facilitated individual interaction in a group setting; therefore, in each session discussions were generated on the topic of study and a shared vision of the problem emerged [81,82]. The characteristics of the participants are recorded in Table 2, according to the three focus groups that were organized. The semi-structured interviews were used as a complement to obtain specific descriptions of the interviewee regarding the conflict experienced daily by the miners and peasants who inhabit the territory [83]. Four people who have represented the community in the meetings organized around the process of delimitation of the Santurbán moorland were interviewed by using a guide of questions. An open discussion was conducted on their knowledge, experiences, and recommendations on the problem under study. Each person participated autonomously and freely after signing an informed consent.
Regarding the development of collection tools for focus groups and semi-structured interviews, the support of academics with preponderant knowledge in the area was required, especially from an ecological-political context that frames the conflict of interests in the moorland ecosystem; thus, the expert judgment validation technique allowed two researchers to provide concepts and assessments on the questions, in order to adapt it to the field of discussion and estimate its reliability [84]. When conducting the semi-structured interviews, participants were also asked to provide recommendations on the focus group guide, to be considered in future research.
The question guide for each instrument followed three key components: (i) the negative impact of neo-extractivism with respect to ASM; (ii) the formalization of ASM to guarantee the environmental protection of the moorland; (iii) ASM as a model of rural economic development.
An information analysis was performed by using the NVivo software (v12), which resulted in data encoding with node categorization. This allowed for the organization of the units, in accordance with the theoretical-conceptual constructs that guided the study [85]; in parallel, the constant comparison method was used to interpret the information collected by the three researchers, emphasizing the most significant elements [86].
Codes or topics were defined by the attributes and descriptors found in the categories determined. This allowed the research team to define a coding scheme to analyze transcripts and standardize text units [87]. By comparing the results obtained by the three researchers, bias was reduced, and reliability and validity were granted to the collection and interpretation of the data [77]. Figure 2 shows the nodes encoded in NVivo, which were determined by the criterion of information saturation. Figure 3 shows a word frequency cloud, which defines the scope of the research approach, with the following keywords: moorland, water, Santurbán, delimitation, rights, territory, protection, activities, government, environmental, and communities.
In order to grant validity and reproducibility to the study, it was essential to interpret the narratives in accordance with the daily life of the participants; however, the authenticity and validity of the analysis was aimed at understanding the problem posed from different angles, specifically, by means of three types of data: the subject protagonist of the object of study, interpretation by the researchers, and the formal methodological-conceptual elements that have been developed on that topic of interest [88]. This method of analysis is known as triangulation, based on the contrast of information obtained with different strategies or from different informants [77,89].
For the present research, a triangulation analysis was made at two times. First, with the interpretation of the data collected by the three researchers, with the three instruments used. Second, by the contrast of information between the perspective of academics who have studied the subject (literature review), the subjectivities and intersubjectivities of the participants (data derived from the collection instruments) [90], and the optics of the research team (interpretation of the analysis). Finally, the results obtained from triangulation were returned to the study participants in order to validate the research directly with the community and receive recommendations for its reproducibility.
Results and Discussion
Nine legally established companies operating in ASM of gold and other precious metals are currently based in the Vetas-California mining district; in this way, employment opportunities have been created for inhabitants of the region, in addition to sustaining a mining tradition that, according to the community, dates back to the Chitarero natives who worked in gold mining prior to the arrival of the Spaniards in colonial times.
Regarding the socio-environmental conflict developed in the Complex, through resolution 2090 of 2014, the Colombian government defined the boundaries of the Santurbán moorland and demarcated the areas in which mining could be conducted, establishing that out of the 642,473.9 acres making up the ecosystem, 320,601.9 acres would be under protection [91]. Despite this, the community filed legal actions regarding the violation of participation rights and the due process. For this reason, in 2017 through judgment T-361, the Constitutional Court of Colombia ruled that the boundaries of the moorlands should be guided by a stage of consultation with the settlers to define the terms of special protection of the ecosystem [92].
Consequently, one aspect of the delimitation process is a source of concern to the communities, namely, the replacement and conversion of mining activities, since their livelihood would be hindered if not annulled. Although the determination aims to demarcate the moorland zone to guarantee environmental protection, the moorland dwellers feel uncertain and demand that the reconversion is reflected in a guarantee for the formalization of ASM in the region. According to Hook [93], demarcation is essential for miners in extractive activity, so that their profit is based on an inalienable property right; in this regard, formalization represents a challenge, because legal permits must provide adequate working conditions and environmental protection [94].
Taking into account the neo-extractivist currents that have permeated extractive mining in the Santurbán moorland and the demands of the community regarding the formalization of ASM, the delimitation process that is carried out in the territory was identified as a starting point to guarantee and optimize the development of mining activities in the region; specifically, since the reconversion ruled by the Constitutional Court.
Scope of Neo-Extractivism in Formalization Alternatives
According to Svampa [13], the expansion of the extractive frontier in LA results from a mercantilist view of natural resources, originating in colonialism associated with large-scale dispossession and looting. Thus, Lara-Rodríguez [95] highlights that in Colombia, the mining contract includes an extensive bureaucratic process that requires legal, technical, and scientific knowledge, along with an extensive financing capacity; therefore, the structure of the mining administrative sector is designed to privilege LSM. This factor is obvious for the Complex dwellers, since they state that in the delimitation process the government will give lands to large-scale mining based on the protection of the ecosystem argument.
Pokorny et al. [96] highlight that neo-extractivist policies, as they involve redistribution processes, lead to various problems due to excessive fiscal spending on social programs. The study shows that companies interested in obtaining environmental licenses to operate open-pit mining in the Complex introduce institutional plans to benefit communities; however, for the dwellers, favoring LSM results from the inability of governments to adequately support artisanal miners [97] and the stigmatization of ASM as an economic activity driven by poverty [98]. This is where the supposed incidence of neo-extractivism as a development model is reaffirmed as a force that operates in opposition to the expansion of ASM; mainly, because besides being an economic reference for LA, neo-extractivism has a wide influence on the political and cultural dimension of development that ultimately converges in domination [10].
In this order of ideas, neo-extractivism has become a contemporary version of development in LA [50], which is why the participants stress the importance of highlighting the positive scope of ASM in their territory, as a mode of resistance against the strategic actions of ruling groups and classes that exacerbate conflict on the use of resources [51]. In the African context, one study [99] states that this model has been evaluated and implemented in the political agendas of Tanzania, with an agenda similar to the interests that guide the neo-extractivist model and its influence in LA. In Africa, the scope of extractivism shows similarities with what is happening in the Santurbán moorland.
Regarding the above, just like the dwellers of the Complex, participants in studies carried out in sub-Saharan Africa reveal that when confronted with LSM, licensing attempts are reduced, and formalization is discouraged. They also recognize that governments justify the lack of resources and ignore the contribution of ASM to community development [100]. The moorland communities believe that the consultation process for the delimitation did not have enough guarantees of participation and it was observed that there is distrust with respect to the final decisions of the national and regional government. In this regard, Hook [93] found that for artisanal miners in Guyana, the state perceives ASM as disorganized, it criminalizes its action and places them as an accomplice in corruption activities that exacerbate socio-environmental damage.
Formalization as an Alternative Solution
While neo-extractivism strengthens extractive activity in LA and consolidates LSM through capital investment, ASM demands special attention and faces great challenges, insofar as it is integrated into the rural economy while maintaining connections with agriculture [98]. It is therefore necessary to attend to the dwellers' discontent with the management of the sector's socio-environmental and economic problems, and with certain actions or omissions by the national government, given that their voices began to be heard in the moorland delimitation process only four years ago. As they state: We do low-impact mining; if we are going to talk about impact, we are small mining companies, and all our lives we have done mining in the same area (Group 1).
Through small-scale mining we want to transform the future of Colombia, and we are convinced that it is responsible mining. We have the strength, intelligence, and knowledge to get mining to be productive (Group 5).
The interests of the international countries in the Santurbán moorland are not only about the water, but also about the mineral that is here (Group 7).
It should be mentioned that article 344 of the Mining Code sets out the functions of the Mining Policy and Regulations Advisory Council: (i) "it recommends to the National Government the policy and mechanisms for coordinating the activities of all public and private entities and bodies whose functions affect or may affect the mining industry", (ii) "it makes recommendations to ensure sustainable development in the extraction, processing and use of mineral resources" [101]. In this way, the political functions of the government are determined in the face of the sustainability of mining development and related entities surrounding the mining industry.
It is worth clarifying that Africa presents the most precise references in terms of formalization; thus, in sub-Saharan Africa, the debates on this process started between the 1980s and 1990s [98]. In Kenya, there was silence regarding the situation until 2016, and Tanzania has the most advanced government formalization policies in the region [102]. In this study, the participants urgently demand the regulation of ASM because the National Policy for the Formalization of Mining created in 2014 is insufficient and limited for the moorland context.
When analyzing the information collected, several similarities were found with the African context and with other countries in the global south, making it possible to identify the components that favor the formalization of ASM based on real experiences and difficulties. First, the miners regard the government's economic interests as contrary to improving their quality of life [103]; it is therefore considered necessary to overcome the confrontation with, or denial of, ASM, a stance that does not help to solve the problem and exacerbates the economic crisis [104].
Second, the frustration of the inhabitants with the lack of precision in government decisions is evident, since officials claim that institutional efforts are aimed at increasing fiscal control of the sector [105]; however, they do not have a defined path to formalize ASM [98]. In this sense, according to the testimonies of the moorland miners, financial and technological assistance should begin with a change in the narratives related to ASM and its operators [97].
Third, the development of mining companies in the territory became possible after decades of entrepreneurship, combined with work performed for LSM organizations. In this regard, Verbrugge [106] found that in the Philippines the formalization of ASM is split between dominant companies, which benefit from the persistence of informality by taking advantage of an excluded and exploited workforce, and groups of miners who experience inequality and poor guarantees. Accordingly, access to formalization in the Complex implies the improvement and innovation of extractive activities. The fourth point, therefore, refers to the limitations of assistance and the prioritization of the formalization process, mainly due to legal requirements that are difficult to interpret [107].
In this sense, Hilson et al. [108] point out that miners face obstacles in the formalization process owing to factors such as bureaucracy, excessive registration costs, and competition for land against large mining companies; hence, systematic exclusion patterns prevent miners with few resources (the majority) from accessing land [103]. A determining element in the Santurbán moorland case is the communities' commitment to the protection of the ecosystem. According to the perception of some interviewees, there is awareness of the importance of eliminating mercury; it is therefore essential for the inhabitants that the knowledge built up through generations is sustained over time and contributes to the exercise of responsible mining.
In this way, the fifth component relates to the environmental protection of ecosystems, in order to mitigate the environmental impact resulting from extractive activities. In this respect, as revealed by the settler-miners, they have a sense of belonging and ownership over the territory that has become their main source of livelihood; it is thus essential that ASM be formalized [104]. According to Kinyondo and Huggins [109], within the framework of sustainable development (SD), ASM should have significant relevance, especially in the design of environmental management plans built through formalization. This shows the interest of miners in the Complex in improving their environmental management through training sessions, monitoring, and financial support.
The five components mentioned are complementary, and they have been evaluated in local contexts in which ASM has social, environmental, and economic implications [96]. In this sense, the emphasis on SD is linked to the ways in which rural and urban communities related to the moorland ecosystem express concerns about the environmental impact of LSM and ASM. In this regard, Huntington and Marple-Cantrell [110] note that concerns about the consequences of extractive activities are stronger in areas with traditional rules that require government regulation. For this reason, it is essential to formalize ASM in the Complex, supported by the delimitation process and, therefore, by the reconversion of extractive activities.
Conclusions
The purpose of this paper was to analyze the negative impact of neo-extractivism in boosting the mining industry and the components that prevent the formalization of ASM as a model of rural economic development in Vetas, California, Suratá, and Matanza. It is concluded that the neo-extractivist model operates in opposition to ASM, especially because of the economic interests that mediate access to land and the traditional administrative structure that privileges the multinational companies that run LSM.
The communities of the municipalities surrounding the Santurbán moorland have made great efforts to prioritize the formalization of ASM in the territory, mainly in pursuit of the environmental protection of the Complex. However, despite the efforts to generate alternative solutions to the socio-environmental conflict exacerbated by the delimitation of the moorland ecosystem, local, regional, and national authorities have minimized the problem and reduced the conversation to the exchange of natural resources for capital.
It is important to understand that the negative impact of neo-extractivism extends to limiting or preventing the formalization of ASM, mainly through the stigmatization of artisanal work, a mining-sector structure that privileges LSM by granting licenses in exchange for favors, the wide influence of the political dimension, and disregard for the contribution of ASM to community development. Consequently, it is essential that the delimitation process end with a demarcation of the territory that guarantees the continuity of a centuries-old artisanal mining tradition in the region.
This solution involves several challenges, since the elimination of ASM, justified as environmental protection, does not guarantee that the large-scale, open-pit mining industry will ensure the conservation of the Complex or the economic stability of the communities that depend on ASM. In this regard, it is essential to legitimize the participation of the inhabitants, and their knowledge and experiences of territorial life, with the purpose of dispelling stereotypes and the stigmatization of their economic activity.
Amid the conflict of interests between the political sectors that seek to promote neo-extractivism as the only alternative for the development of the mining industry and the communities surrounding the Complex that favor the formalization of ASM, it is a priority to interpret the economic, social, and environmental reality of the territory and to implement actions that guarantee rural economic development through the recognition of the artisanal-traditional sector, the creation of companies and associations, financing, training, and access to marketing and technology [104]. In addition, it is necessary to achieve a formalization pattern that adapts to the arrangements and systems that the settlers have historically established [111].
"year": 2023,
"sha1": "e54bd82816d55e9cb1810cb053e872b85fe9e670",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/15/15/11733/pdf?version=1690684476",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d9734ed44289750f7b363835a1efef1ff8840317",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
The Exploration of Disease Pattern, Zheng, for Differentiation of Allergic Rhinitis in Traditional Chinese Medicine Practice
Pattern, or "zheng," differentiation is the essential guide to treatment with traditional Chinese medicine (TCM). However, the considerable variability between TCM patterns complicates evaluations of TCM treatment effectiveness. The aim of this study was to explore and characterize the relationship between patterns and the core patterns of allergic rhinitis. We summarized 23 clinical trials of allergic rhinitis that mentioned pattern differentiation; association rule mining was used to analyze the TCM patterns of allergic rhinitis. A total of 205 allergic rhinitis patients seen at Chang Gung Memorial Hospital from March to June 2005 were included for comparison. Among the 23 clinical trials evaluated, lung qi deficiency and spleen qi deficiency were the core patterns of allergic rhinitis, accounting for 29.50% and 28.98% of all patterns, respectively. A higher prevalence of lung or spleen qi deficiency (93.7%) was found in Taiwan. Additionally, patients with lung or spleen qi deficiency were younger than patients with kidney qi deficiency (27.99 ± 12.94 versus 58.54 ± 12.96 years) and their nasal stuffiness was more severe (1.35 ± 0.89 versus 0.62 ± 0.65; P < 0.05). Lung and spleen qi deficiencies are the core patterns of allergic rhinitis, and determining the severity of nasal stuffiness is helpful in differentiating the TCM patterns.
Introduction
Traditional Chinese medicine (TCM) has been used for centuries in China and more recently has been widely studied and applied throughout the world [1,2]. "Pattern differentiation and treatment" has an important role in TCM treatment. With this approach, a diagnosis is established through four examinations: visual inspection, smelling and listening, inquiry, and palpation, followed by TCM interventions such as use of herbal medicine, acupuncture, moxibustion, and massage [3,4].
Pattern differentiation, or "zheng," is a unique TCM concept that summarizes the nature, location, and pattern of diseases corresponding to the World Health Organization's definition [4]. According to each individual pattern, a specific TCM treatment can be prescribed precisely to maximize its effectiveness [5][6][7]. However, successful use of pattern differentiation depends primarily on TCM doctors' subjective judgment, which is based upon classical TCM principles, education, and clinical experience. Thus, the practice of pattern differentiation can vary considerably among individual physicians [8]. In addition, there is little agreement between textbook guidelines for TCM pattern differentiation and its actual use in practice [9]. Finding ways to incorporate TCM knowledge into clinical practice and to eliminate variability is an important issue in evidence-based investigations [9].
Due to the considerable variability in individual practices, it can be difficult to summarize TCM clinical data by conventional statistical techniques, and thus a number of data mining methods, such as association rule mining (ARM) and cluster analysis, are used to acquire TCM knowledge from large-scale clinical data [3,10,11]. ARM is a modern data mining tool developed to explore the relationships between a wide range of factors, and it is widely applied to TCM prescription analysis [10,12]. Moreover, ARM can effectively pinpoint the core TCM formula from a large prescription database by analyzing the relationship between TCM formulas [11]. In addition to TCM prescription, ARM is also used to analyze disease comorbidities and TCM patterns, and the advantages in reducing the complexity of TCM patterns have been well demonstrated [13,14].
Allergic rhinitis, a common immunologic disorder, affects 10% to 20% of the world's population [15]. It involves type 2 CD4 T lymphocyte activation with cytokine secretion, producing an increased number of eosinophils and mast cells. Certain drugs used in Western medicine (WM), such as H1-antihistamines, leukotriene receptor antagonists, intranasal corticosteroids, and even short-term oral corticosteroids, have been used to block disease progression and relieve symptoms [15]. In Taiwan, allergic rhinitis is one of the most common reasons for TCM visits, due to concern about side effects from long-term use of Western medications and the prospect of fewer side effects with TCM treatment [2,16].
Several TCM treatments have been beneficial for allergic rhinitis, and the results of many studies have outlined the possible mechanisms for suppressing allergic reactions [17][18][19][20][21]. Nonetheless, the effectiveness of different TCM treatments is still unclear because no large-scale survey on TCM pattern differentiation of allergic rhinitis has yet been done.
The aim of this study was to explore the core TCM patterns of allergic rhinitis by using ARM and to compare these results with a hospital-based database to identify crucial factors to differentiate the patterns of allergic rhinitis. Depending upon the results of this study, future studies could focus on the most important TCM patterns, and different treatments could then be designated for specific TCM patterns.
Construction of the Clinical Trial Database
First, we conducted an extensive search of several databases, including PubMed, MEDLINE, Web of Science, Scopus, and the China Academic Journals Full-Text Database (CJFD). Keywords searched included "allergic rhinitis," "bi qiu," "chronic rhinitis," "pattern differentiation," "syndrome differentiation," "zheng," and "clinical trials." "Bi qiu" is the TCM disease corresponding to allergic rhinitis in WM. The full text of the search results was accumulated and critiqued by all authors of this study, and disagreements were resolved by consensus. After critical appraisal, the essential elements, including case number, gender, age, diagnostic criteria, and distribution of TCM patterns, were extracted manually from the eligible clinical trials. All these elements were entered into the computerized database.
Association Rule Mining (ARM)
ARM, a data mining technique developed in the 1990s, has been widely used in medical research to explore the relationships among TCM prescriptions, disease comorbidities, and TCM patterns [13,14]. The detailed algorithm has been thoroughly described in previous studies, and IBM DB2 Intelligent Miner 9.1 software (IBM Corporation, Armonk, NY) was used to perform ARM of the clinical trials database [22]. Two decisive factors, support and confidence, were used to demonstrate relationships between patterns. Support was defined as the prevalence of a given relationship in the whole database, and confidence as the conditional probability that pattern B co-occurs given pattern A. A relationship between patterns A and B was deemed significant when it exceeded the thresholds formed by both the support and confidence factors. Deciding the proper values of the support and confidence factors was an iterative process; in this study, they were set to 1% and 20%, respectively. These values were agreed upon by all authors of this study. Additionally, a diagram of the associations between all patterns was drawn to clarify the relationships between TCM patterns and the core patterns of allergic rhinitis.
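To make the two measures concrete, the following is a minimal Python sketch of how support and confidence can be computed for pattern pairs. It is illustrative only, not the IBM DB2 Intelligent Miner implementation used in the study, and the pattern records, function names, and thresholds below are hypothetical.

```python
from itertools import combinations
from collections import Counter

def mine_pairs(transactions, min_support=0.01, min_confidence=0.20):
    """Find pattern pairs meeting support/confidence thresholds.

    transactions: list of sets, one set of TCM pattern elements per patient.
    """
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for t in transactions:
        for item in t:
            item_counts[item] += 1
        for pair in combinations(sorted(t), 2):
            pair_counts[pair] += 1

    rules = []
    for (a, b), count in pair_counts.items():
        support = count / n  # prevalence of the pair in the whole database
        for ante, cons in ((a, b), (b, a)):
            confidence = count / item_counts[ante]  # P(cons | ante)
            if support >= min_support and confidence >= min_confidence:
                rules.append((ante, cons, support, confidence))
    return rules

# toy usage with hypothetical pattern records
records = [
    {"lung", "qi deficiency"},
    {"lung", "spleen", "qi deficiency"},
    {"kidney", "qi deficiency"},
    {"qi stagnation", "blood stasis"},
]
for ante, cons, s, c in mine_pairs(records):
    print(f"{ante} -> {cons}: support={s:.2f}, confidence={c:.2f}")
```

With records like these, a rule can reach high confidence even at very low support, which is the behavior reported below for combinations such as "heat with lung" and "qi stagnation with blood stasis."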
Hospital-Based Clinical Data Acquisition.
To compare ARM results from the clinical trials database and practical clinical data, we used an established database of allergic rhinitis patients in the TCM outpatient service at CGMH. The definitive diagnosis of allergic rhinitis and TCM patterns was confirmed by Dr. Yang. Detailed data, including TCM patterns, age, gender, parents' health history, patients' personal health history, residence, serum IgE levels, results of MAST (Multiple Allergen Simultaneous Test panel) tests, and symptom severity, were recorded in this database. All data were collected with informed consent, and the records from March to June 2005 were extracted for further analysis. The process of data collection and analysis was approved by the Institutional Review Board (IRB) of CGMH.
Statistical Analysis of Characteristics of TCM Patterns.
To examine the differences in characteristics among TCM patterns, Student's t-test and one-way analysis of variance (ANOVA) were used for numerical data, and chi-square statistics were applied to categorical data. Only results with a P value of less than 0.05 were deemed significant.
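As an illustration of the three tests named above, the following Python sketch uses SciPy (the study does not state which statistical software was used); all group sizes and values below are synthetic stand-ins, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for a numerical characteristic (age in years)
lung_qi = rng.normal(28, 13, 120)     # lung qi deficiency group
lung_spleen = rng.normal(29, 12, 60)  # dual deficiency of lung-spleen qi
kidney_qi = rng.normal(58, 13, 25)    # kidney qi deficiency group

# Student's t-test: two-group comparison of a numerical variable
t_stat, p_t = stats.ttest_ind(lung_qi, kidney_qi)

# One-way ANOVA: three-group comparison of a numerical variable
f_stat, p_f = stats.f_oneway(lung_qi, lung_spleen, kidney_qi)

# Chi-square test: categorical data, e.g. hypothetical gender counts per group
counts = np.array([[48, 72], [25, 35], [10, 15]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(counts)

for name, p in [("t-test", p_t), ("ANOVA", p_f), ("chi-square", p_chi2)]:
    print(f"{name}: p = {p:.4f}, significant = {p < 0.05}")
```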
Description of Clinical Trials of TCM Patterns.
A total of 114 studies were found by the search strategy, and after detailed appraisal, 23 studies were eligible for inclusion in the study. All 23 studies were done in China and had been published in Chinese. Studies with English titles are listed as examples in the Appendix. From the 23 eligible studies, 2589 patients were identified, and a patient-pooled database was constructed. Fifteen patterns composed of one or more organs and the nature of disease were identified. Lung qi deficiency was the most common pattern (23.95%), followed by spleen qi deficiency (22.75%), and lung yang deficiency with wind-cold assailing the lung (14.75%). More than half of patients were classified into the qi deficiency pattern in these trials. In contrast, blood stasis, dual deficiencies of qi and yin of lung, and lung-spleen yang deficiency were the least-recognized patterns, and all had a prevalence of less than 1% (Table 1).
ARM of TCM Patterns.
After applying ARM, we identified the 10 most common relationships between the locations and nature of disease patterns (Table 2). The lung, followed by the spleen, was the most common site of disease, whereas qi deficiency was the most common nature of disease. More than half (58.48%) of all pattern combinations were composed of lung or spleen qi deficiency. Nearly all locations or cases of allergic rhinitis were connected to the lung, spleen, and qi deficiency, and strong interactions were also found. The central role of the lung and spleen can be seen in a diagram of the relationships between patterns (Figure 1). Additionally, high confidence, that is, high conditional probability, was found for three combinations: "heat with lung," "phlegm-dampness with lung," and "kidney and spleen with qi deficiency." It can thus be assumed that, for patients with allergic rhinitis, once heat or phlegm-dampness was found, it was always combined with the lung, forming a pattern. More interestingly, qi stagnation and blood stasis were strongly associated with each other, and neither had any relationship with the major organs, such as the lung, spleen, or even kidney. Although this group's prevalence was only 0.19%, it may represent special mechanisms or manifestations of allergic rhinitis.
Pattern Analysis in Hospital-Based Surveillance.
Using the well-established allergic rhinitis patient database at CGMH, TCM pattern analysis showed these patients could be divided into 3 groups: those with lung qi deficiency, dual deficiency of lung-spleen qi, and kidney qi deficiency (Table 3). Similar to the results of the clinical study reviews done in China, 93.7% of patients had patterns composed of lung, spleen, and qi deficiency, a percentage higher than in the clinical trials. Among all the patients' characteristics, patients diagnosed with kidney deficiencies were significantly older than the other two groups (57.37 years versus 27.99 years), whereas no differences were found in serum IgE levels, results of MAST allergy tests, or other factors (Table 3).
Relations between TCM Patterns and Symptoms.
TCM pattern differentiation was mainly based on clinical symptoms, and therefore analysis of patients' symptom severity provided decisional information for pattern differentiation.
Higher symptom severity scores, equivalent to more severe symptoms, were noted in the lung qi deficiency group and dual deficiency of the lung-spleen qi group, compared to the kidney qi deficiency group, although this was not statistically significant (Table 3). Nevertheless, the differences in symptom severity became more obvious when lung and spleen qi deficiency were combined due to symptom similarity, and compared with the kidney qi deficiency group (Table 4). Moreover, "stuffiness," one of the most bothersome effects of allergic rhinitis, was found to be more severe in the lung or spleen qi deficiency group than in the kidney qi deficiency group (Table 4).
Discussion
To the best of our knowledge, this is the first study to investigate the TCM patterns of clinical trials and to provide comparisons of clinical hospital-based data and severity of symptoms. The use of TCM has become much more widespread in recent years and many more interventions guided by TCM theory are being integrated into modern medicine [1,2,9]. TCM treatments, including herbal medicine, acupuncture, moxibustion, and massage, are administered according to TCM patterns, or "zheng" [23]. TCM patterns are composed of the cause, nature, and location of diseases, and differentiation of patterns is largely dependent upon clinical symptoms [3,24]. Because of the complexity and plurality of clinical symptoms, and the nature and location of diseases, such as the Chinese medicine theory of five viscera and six bowels, the variability of pattern differentiation is extremely high. Thus, agreement on patterns of the same disease is usually low [8,9]. From the viewpoint of evidence-based medicine, in future studies, it will be particularly important to summarize TCM patterns and to explore core patterns of disease. ARM is an appropriate statistical method for summarizing disease patterns and exploring core patterns and the nature and locations of diseases because it examines not only the prevalence of a pattern but also the strength of relations between and within patterns [14]. In this study, combinations of lung, spleen, and qi deficiencies were found to be the most crucial part of TCM patterns of allergic rhinitis. The results are consistent among clinical trials and hospital-based clinical data, and disclose valuable, evidence-based information for further investigation.
Qi deficiency has been proved to be crucial to allergic rhinitis in previous studies, and two famous qi-tonifying Chinese herbal products, Bu-zhong-yi-qi-tang and Xiangsha-liu-jun-zi-tang, have had marked therapeutic effects on allergic rhinitis, even without pattern differentiation [18][19][20]. The mechanisms of immunomodulation by qi-tonifying agents include decreasing serum IgE, interleukin-4 (IL-4), interleukin-5 (IL-5), and gamma interferon (IFN-γ), increasing interleukin-10 (IL-10), and suppressing cyclooxygenase 2 mRNA expression [18][19][20]. As a result, the imbalance of type 1 and type 2 helper T lymphocytes is reversed and allergic rhinitis symptoms are alleviated [18,20]. IL-4 and IL-5, together with the switch of helper T lymphocytes from type 1 to type 2 and subsequent high IgE secretion, have been proved to be the cardinal pathogenesis of the allergic reaction [25][26][27]. The effective reversal of the activation of an allergic reaction by qi-tonifying agents shows the possible relationship between qi deficiency and serum cytokine levels and, perhaps, the pathogenesis of qi deficiency in allergic rhinitis.
Lung and spleen are the two important locations of diseases and are highly related to qi deficiency, forming TCM patterns. The function of lung, from the viewpoint of TCM, includes control of respiration, qi domination, and fluid regulation, and these functions are highly related to the nose and skin [4]. The most common symptoms of allergic rhinitis, such as sneezing, runny nose, and stuffiness, and possible subsequent critical illness in the form of asthma have been shown to be associated with the nose and entire respiratory tract and share the similar pathogenesis [15]. Moreover, immunomodulation of allergic diseases by lung-tonifying agents such as Astragalus membranaceus and Cordyceps militaris has been widely reported [28,29]. Owing to the remarkably similar disease behavior and pathogenesis, the lung, rather than other organs, represents the most important organ in pattern differentiation of allergic rhinitis.
The spleen, from the viewpoint of TCM, dominates transformation of food to energy, similar to WM's view of the gastrointestinal tract's function [4]. The gastrointestinal system has been thought to be associated with allergic diseases and the underlying mechanism may be related to activation of eosinophils and type 2 helper T lymphocytes, with increasing IgE levels [30,31]. Thus, by modifying intestinal bacterial flora and subsequent systemic immunomodulation, symptoms of allergic rhinitis may be relieved [32]. Additionally, a spleen-tonifying TCM formula has been found to be effective for alleviating allergic rhinitis symptoms [33]. These facts reveal the close relationship between spleen deficiency and allergic reactions, and through modulating gastrointestinal function by TCM herbal products, allergic disorders may be alleviated.
Yin and yang deficiencies are less commonly identified than qi deficiency in clinical trials, and they were also absent in the surveillance at our hospital. Yin deficiency was a specialized TCM pattern characterized by decreased body fluids, and it was diagnosed when patients complained about dryness of the mouth, throat, and nasal passages, or constipation. Additionally, a reddish tongue with scanty coating and a fine, rapid pulse were commonly seen among such patients. Moreover, symptoms of yang deficiency among allergic rhinitis patients included manifestations of qi deficiency with prominent fear of cold, cold extremities, clear nasal discharge, pale face, and an enlarged tongue with a white, slick coating. Both lung yin and yang deficiencies were noted in the late stage of the clinical course of allergic rhinitis, and they usually developed when qi deficiency, the early stage of allergic rhinitis, was not properly treated. Therefore, it is reasonable that combinations of qi and yin deficiency or yang deficiency were less frequently found among allergic rhinitis patients.
Additionally, combination of qi stagnation and blood stasis was a special pattern in this study. Although the prevalence was low, about 1.78%, a strong association with allergic rhinitis was found (Tables 1 and 2). Also, this group of patients seemed to be isolated from other patients (Figure 1). In other words, once qi stagnation was diagnosed, blood stasis was always also diagnosed, and vice versa. Qi stagnation and blood stasis among allergic rhinitis patients had a chronic course, and patients had a purplish or purple-spotted tongue and a stringy, choppy pulse. Due to the unusual characteristics, a different pathogenesis was suspected among these patients and therefore further studies were warranted.
The severity of nasal stuffiness, one of the common symptoms of allergic rhinitis, clearly differed between the lung or spleen qi deficiency and kidney qi deficiency groups. In this study, the patients in the kidney qi deficiency group were older than those in the lung or spleen qi deficiency group. This finding is similar to those of previous studies. Currently, nasal stuffiness is thought to be caused by eosinophil and mast cell infiltration with subsequent airway remodeling. It is believed to be related to certain neuropeptides, and its severity decreases with aging [34]. From TCM's viewpoint, the metabolism and transport of body fluids largely depend on the lung and spleen [4]; therefore, nasal stuffiness, caused by edema and swelling of the nasal cavity mucosa due to the allergic reaction, is easily found in patients with lung and spleen qi deficiency and disturbed body fluid transport. Additionally, the prominent immunologic disorder found among lung and spleen qi deficiency patients may also be the cause of severe nasal stuffiness. Based on this significantly different symptom between the two groups, nasal stuffiness can be used as an inclusion or exclusion criterion for patient selection, and different treatment plans can be provided individually for specific groups.
Though the clinical data are closely comparable to the summarized results of clinical trials for allergic rhinitis, there are still some limitations to this study. First, the quality of the clinical trials is heterogeneous. Some population characteristics, such as gender, age, or detailed manifestations of allergic rhinitis, are not provided in every trial, and therefore selection bias may exist. To effectively eliminate this bias, only the most representative trials of allergic rhinitis were included in this study after strict evaluation. Although the number of cases was considerably reduced, the result of ARM is highly reliable, since the trials enrolled in this study firmly focus on TCM patterns of allergic rhinitis. Second, the definition of TCM patterns is not exactly the same among these studies, and the bases of pattern differentiation include the Chinese expert consensus statements on allergic rhinitis of 1997 and 2004 and a textbook of TCM otolaryngology. This disadvantage was largely overcome by examining the descriptions of patterns in every trial and validating them by TCM doctors.
Conclusion
Core TCM patterns were explored in this study by applying ARM to clinical trials of allergic rhinitis, and the summarized result is comparable to hospital-based data. A younger patient population and greater severity of nasal stuffiness were associated with the most important patterns, lung or spleen qi deficiency. Future investigations of TCM treatment for allergic rhinitis can be designed on the basis of these results and may help define a specific TCM pattern.
"year": 2012,
"sha1": "865d2cd3b3f5afe39003e8de9aef17db2d1e9e2e",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ecam/2012/521780.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1f4707ec79db6e22503f19224006c5932e027a7b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Difference and Variance in Nutrient Intake by Age for Older Adults Living Alone in Japan: Comparison of Dietary Reference Intakes for the Japanese Population
This study aimed to estimate the distribution of usual intakes of protein, sodium, potassium, and calcium by age group and to assess whether deficiencies/excesses of each nutrient would occur more at older ages via a comparison with the dietary reference intakes for the Japanese population (DRIs_J). A cross-sectional analysis was conducted using a database of the 2-day nutrient intake of 361 Japanese people aged 65–90 years. The AGEVAR MODE was used to estimate usual intake. Percentile curves using the estimated distribution by sex and age and usual nutrient intake were compared to those of the DRIs_J. The usual intakes of protein (male and female) and of potassium and calcium (female) were lower at older ages. The within-individual variances of protein in female (p = 0.037) and calcium in male (p = 0.008) subjects were considerably lower at older ages. The proportions of deficiencies in protein (male and female), potassium (female), and calcium (female) were higher at older ages. However, the proportion of people with excess salt (converted from sodium; male and female) did not differ by age. The variances found herein could be important for enhancing the understanding of differences in dietary intake by age.
Introduction
In older adults, physiological problems related to aging, such as declines in motor and cognitive function, influence dietary intake. These problems also increase individual differences in the usual intake of nutrients by age and can lead to frailty [1,2].
A previous study, based on a comparison between 30-49- and 50-69-year-old females and between 30-49- and 50-76-year-old males, reported that the within- and between-individual variances of dietary intake differed by sex, age, and nutrient [3].
However, only a few studies have identified differences in nutrient intake by age group, particularly among older adults. Owing to the small sample size in each age group [4], the standard errors of the differences in nutrient intake by age and of the inter-individual variance would be large, which has made the measurement of differences difficult [5].
The AGEVAR MODE [6] is a model that can express the average value of nutrient intake and the within- and between-individual variances by age. The method helped overcome these limitations, demonstrating smaller standard errors than other models and less bias despite small sample sizes, using data from dietary surveys lasting a minimum of 2 days.
The current study used the AGEVAR MODE method and assessed differences in the usual intake and in the within- and between-individual variances of important nutrients, such as protein, sodium (converted to salt equivalent), potassium, and calcium, for older adults by age. In addition, the data were compared with the DRIs_J, and differences in the proportion of individuals with deficiencies in or excess of nutrients by age were identified.
The study hypothesized that old-old adults (75 years and over) would have higher deficiencies of protein, potassium, and calcium, and excesses of sodium, compared with young-old adults (65-74 years). It has been reported that old-old adults prefer the traditional Japanese diet, which contains a high amount of salt [7].
The proportion of older adults living alone has increased in Japan. Therefore, this study may contribute to the assessment of recommended nutrient intake and appropriate countermeasures for older adults.
Materials and Methods
A cross-sectional analysis was conducted using the existing database of a cross-sectional nationwide survey in Japan. The original study aimed to clarify the relationship between health and nutritional status, diet, geographical environment, and food access of older adults living alone, and was conducted in 2013 and 2014. The relationships between health and physical status [8,9], food accessibility [10], and eating behaviors [11], as well as the distribution of the usual intake of nutrients, have been reported. However, differences in nutrient intake by age were not identified [8].
Study Population and Procedure
Figure S1 provides the study population and flowchart of the study. First, the researchers sought research approval from the mayor of each town or city. The subjects were recruited according to geographical factors through municipal support. Data were obtained from the basic residents' register on older citizens (individuals aged between 65 and 90 years) who lived alone. Then, the distance from the home of each eligible respondent to the nearest supermarket (by road) was evaluated and mapped using a geographic information system (ArcGIS 10.2; ESRI Inc., Redlands, CA, USA). A list of supermarkets and their geographic locations was obtained from a telephone directory published in October 2010 (Telepoint Pack!; Zenrin Inc., Tokyo, Japan). Based on previous studies [10], potential respondents were categorized into the following groups: (a) those living within 500 m, (b) those living between 500 m and 1 km, and (c) those living >1 km away from the nearest supermarket. An equal number of prospective participants was selected in a random stratified manner from the urban central, rural, and mountainous areas of each city or town. In total, 534 participants responded to the 2-day weighed dietary record survey. The researchers obtained written informed consent from all participants. The study excluded respondents who failed to answer the items necessary for analysis, yielding data on 109 male and 252 female participants. The participants answered all items, including nutrient intake, nutritional status, frailty, and the other variables presented in the self-administered questionnaire.
Nutrient Intake by 2-Day Weighed Dietary Records
Dietary records were obtained using the methodology of the National Health and Nutrition Survey in Japan (NHNS_J) [12]. To calculate highly reliable nutrient intakes from a 2-day dietary record, trained registered dietitians conducted the survey from October 2012 to October 2013. Each participant received two recording papers and instructions on how to record information. The participants weighed the food consumed and enumerated each dish and ingredient and the quantity consumed. The registered dietitians or trained staff visited the homes of all participants at least once during the survey and confirmed and completed the dietary records through interviews [8]. Using the nutrition analysis software Excel Eiyoukun version 8.0 (corresponding to the Standard Tables of Food Composition in Japan 2015), energy and nutrient intakes were evaluated according to the 2015 Standard Tables of Food Composition in Japan [13]. Sodium was converted into salt equivalent for comparison with the DRIs_J.
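For illustration, the sketch below shows how per-day nutrient totals and the sodium-to-salt-equivalent conversion can be computed; the conversion factor salt (g) = sodium (mg) × 58.5/23/1000 ≈ sodium (mg) × 2.54/1000 follows from the molar masses of NaCl and Na. The food names and composition values are hypothetical stand-ins, not entries from the Standard Tables, and the actual analyses used Excel Eiyoukun.

```python
# Hypothetical per-100 g composition values (protein g, sodium mg,
# potassium mg, calcium mg); real analyses used the 2015 Standard Tables.
composition = {
    "rice, cooked": (2.5, 1, 29, 3),
    "miso soup":    (1.5, 570, 55, 14),
    "grilled fish": (20.0, 75, 350, 25),
}

def intake(record):
    """Sum nutrients over (food, grams consumed) entries of one day's record."""
    totals = [0.0, 0.0, 0.0, 0.0]
    for food, grams in record:
        per100 = composition[food]
        for i, value in enumerate(per100):
            totals[i] += value * grams / 100.0
    return totals

day1 = [("rice, cooked", 300), ("miso soup", 200), ("grilled fish", 80)]
protein_g, sodium_mg, potassium_mg, calcium_mg = intake(day1)

# Salt equivalent (g) from sodium (mg)
salt_g = sodium_mg * 2.54 / 1000
print(round(protein_g, 1), "g protein,", round(salt_g, 2), "g salt equivalent")
```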
The study presents the results for protein, salt equivalent, potassium, and calcium. These nutrients were selected from the perspective of hypertension, osteoporosis, and frailty, which are serious health problems concerning Japanese older adults. Specifically, salt and potassium are associated with hypertension; calcium and protein are associated with osteoporosis and frailty, respectively.
Usual Nutrient Intake Estimated Using the AGEVAR MODE
Dietary surveys usually require 3-7 days to estimate usual intake. However, the study refrained from following the typical timeframe because it tends to increase the burden on the target older person [8]. Therefore, the researchers reviewed several methods that estimate the distribution of usual nutrient intake in a given population from a multi-day dietary survey [14]. The most recognized of such methods are those of the National Research Council/Institute of Medicine [15], Iowa State University [16], and the National Cancer Institute [17]. Yokomichi and Yokoyama, members of the present study, proposed a mixed-effect model named the AGEVAR MODE, with age-dependent mean and within- and between-individual variances; the model can express the average value of nutrient intake and the within- and between-individual variances by age [6]. Furthermore, the method demonstrated smaller standard errors, especially for age-dependent analyses, compared to other models, and less bias even for small sample sizes, using data from a minimum of 2 days of dietary surveys [6]. Therefore, the method was considered appropriate for the estimation of usual intake by age in this study.
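In outline (with notation ours rather than the authors'; the exact functional forms are given in [6]), the model for the Box-Cox-transformed daily intake of individual i, of age a_i, on survey day j can be written as:

```latex
Y_{ij} = \mu(a_i) + b_i + \varepsilon_{ij}, \qquad
b_i \sim N\!\left(0,\ \sigma_b^2(a_i)\right), \qquad
\varepsilon_{ij} \sim N\!\left(0,\ \sigma_w^2(a_i)\right),
```

where the mean function combines polynomial and logarithmic terms of age, and the between-individual variance and within-individual variance are modeled as monotone exponential functions of age. The distribution of usual intake at age a is then normal with mean \mu(a) and variance \sigma_b^2(a) in the transformed scale, which is back-transformed to the original scale.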
Body Measurements
The height measurements of the participants were derived from their most recent health examination. Height and weight were measured once before the morning meal. If this information was absent, the researchers measured height using a portable measuring device. Weight was measured using a digital scale. Body mass index (BMI) was calculated for each participant as weight in kilograms divided by the square of height in meters (kg/m²).
Frailty and Other Variables
A self-administered questionnaire was used to identify the characteristics of older adults by collecting data on socioeconomic factors, such as age, sex, annual income, highest level of education attained, special nursing needs, and diseases. The frailty index was then measured using a method previously developed by the Tokyo Metropolitan Institute. The index is a validated checklist for preventive care used for screening older Japanese people for frailty [18].
The checklist comprises 15 self-administered items, such as being homebound; communication with friends, neighbors, and family members or relatives; physical conditions, such as capability to walk continuously for 1 km; comfortable vision; chewing ability; nutritional status; and a history of falls. The index score ranges from 0 to 15 with a high score suggesting high levels of frailty. Respondents with a score of 4 were classified as frail [18]. Lastly, the requirement for long-term nursing care was assessed.
Statistics
Data on 109 male and 252 female adults who answered all necessary items for the study were analyzed. The distribution of the usual intake for each sex and age class of older adults was calculated by following steps 1-7 listed below as per the AGEVAR MODE [6]. Furthermore, the proportion of usual nutrient intake was estimated under the estimated average requirement (EAR) or tentative dietary goals (DGs) for preventing lifestyle-related diseases of DRIs_J for each sex and age class by following steps 8-9 listed below:
1. Amounts of daily nutrient intake, surveyed over 2 days, were transformed to be approximately normally distributed using the Box-Cox transformation [19].
2. The mean of the transformed data was modeled by polynomial and logarithmic functions of age, and the within- and between-individual variances of the data by monotone exponential functions of age. In this step, the transformed scale of the nutrient was assumed; daily nutritional intakes were assumed to be normally distributed around the estimated mean of each individual, and individual means around the estimated mean of each age group.
3. Within-individual variance was then omitted from the estimated nutrient distribution for each age group, given that the estimated mean and between-individual variance establish the distribution of usual nutritional intake in the transformed scale.
4. From the estimated distribution of usual nutritional intake in the transformed scale, percentiles were estimated using the mean and between-individual variance for each age group.
5. The distribution of usual nutrient intake in the original scale was obtained by a reverse transformation of the usual intake distribution in the Box-Cox transformed scale; in this step, the estimated percentiles were back-transformed.
6. Previous studies reported that bias is mathematically induced by the reverse transformation [14]. Therefore, the bias was reduced as per the corresponding equation, and the final distribution of usual intake in the original scale was obtained. In other words, the within- and between-individual variations reported in the study are those observed before eliminating within-person variance to estimate usual intake.
7. The estimated distribution was then graphically presented as percentile curves representing usual nutrient intake in the original scale and daily intake in the transformed scale, together with curves representing the within- and between-individual variances and the ratio of within-/between-individual variance in the transformed scale. The shaded areas of the 2.5%, 50%, and 97.5% percentile curves indicate the standard errors of the estimated intake distribution, demonstrating the magnitude of the estimation error owing to the sample size.
8. Furthermore, the usual intake of nutrients was assessed by comparing the current observations with the 2020 DRIs_J. Although the survey was conducted between 2012 and 2013, the 2020 DRIs_J were used for analysis because they were developed on the basis of newer evidence and set reference values separately for the 65-74 and ≥75 year age groups [20]. For the EAR of protein (g/day), the DRIs_J indicate that men and women (in both age groups) require ≥50 and ≥40 g/day, respectively. For the DG (tentative dietary goal for preventing lifestyle-related diseases) of salt equivalent (g/day), men and women need to consume <7.5 and <6.5 g/day, respectively. For the DG of potassium (mg/day), men and women should consume ≥3000 and ≥2600 mg/day, respectively. For the EAR of calcium (mg/day), ≥600 mg/day is set for men and ≥550 mg/day for women aged 65-74 years (500 mg/day for women aged 75 years or more).
9. The graphs display changes with age in the estimated percentage of individuals under the EAR or exceeding the DG of the DRIs_J.
10. Statistical analyses were performed using SAS software version 9.4 (SAS Institute, Inc., Cary, NC, USA). A p-value of <0.05 was considered statistically significant.
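As a rough illustration of steps 1-5, the following Python sketch is ours, not the authors' implementation: it fits only a quadratic age-dependent mean, treats the variance components as constant over age, and omits the bias correction of step 6. All function and variable names, and the demonstration data, are hypothetical.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

def usual_intake_percentiles(ages, intakes, eval_ages, probs=(2.5, 50, 97.5)):
    """Simplified AGEVAR-style pipeline.

    ages:    (n,) age of each participant
    intakes: (n, 2) two daily intake records per participant (positive values)
    """
    # Step 1: Box-Cox transform toward normality
    z_flat, lam = stats.boxcox(intakes.ravel())
    z = z_flat.reshape(intakes.shape)

    # Step 2 (simplified): quadratic age-dependent mean fitted to person means
    person_mean = z.mean(axis=1)
    mu = np.poly1d(np.polyfit(ages, person_mean, deg=2))

    # Variance components: Var(2-day mean) = between + within / 2
    within_var = np.mean(z.var(axis=1, ddof=1))   # day-to-day variance
    between_var = max(person_mean.var(ddof=1) - within_var / 2, 1e-9)

    # Steps 3-5: usual intake ~ N(mu(age), between_var) in the transformed
    # scale; percentiles are back-transformed to the original scale
    q = np.array(probs) / 100
    return {a: inv_boxcox(stats.norm.ppf(q, mu(a), np.sqrt(between_var)), lam)
            for a in eval_ages}

# Synthetic demonstration data (not the survey data)
rng = np.random.default_rng(1)
ages = rng.uniform(65, 90, 200)
true_usual = 70 - 0.3 * (ages - 65)               # declining with age
obs = rng.lognormal(np.log(true_usual)[:, None], 0.15, (200, 2))
print(usual_intake_percentiles(ages, obs, eval_ages=[65, 75, 85]))
```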
Results
Table 1 lists the characteristics of young-old (aged 65-74 years) and old-old (aged 75 years or more) adults. In males, no association was observed between age category (young-old versus old-old) and residential area, disease (not shown in the table), or frailty score. The annual income of old-old males was significantly higher than that of young-old males (p = 0.007). Although many old-old males were elementary school graduates, several young-old males had attained university education (p = 0.001). Moreover, among the young-old males, nutritional status based on height, weight, and BMI tended to be lower than among old-old males. In females, no relationship was observed between age category and residential area; however, annual income, highest educational qualification, nutritional status, disease, nursing care needs, and frailty status were significantly worse among old-old females than among young-old females. In addition, the proportion of old-old females with a frailty score of >4 points was significantly higher than that of young-old females (p = 0.0001).
In Figures 1-4, panel (a) presents the percentile curves of usual intake (g/day), panel (b) the within- and between-individual variances, and panel (c) their ratio in the Box-Cox transformed scale. The percentile curves in (a) (i.e., 2.5%, 10%, 25%, 50%, 75%, 90%, and 97.5%) of intake for each nutrient are presented separately for males and females. The shaded areas of the 2.5%, 50%, and 97.5% curves indicate the standard error, corresponding to a 68% confidence interval, whereas the bold dotted line presents the 2020 DRIs_J values.
The results revealed that the usual intakes of protein (male and female), salt (male), potassium (female), and calcium (female) were lower with the increase in age. However, the within- and between-individual variances of each nutrient differed according to sex and age.
Usual Nutrient Intake
Proteins were selected as indicators of muscle weakness in undernutrition. Salt and potassium were selected as indicators of hypertension, which is the main nutritional problem affecting the Japanese population. Finally, calcium was selected as an indicator of osteoporosis. Figure 1A(a) illustrates the differences in the percentile curves of the usual intake of protein by age in males. Protein intake decreased with increasing age. The proportion of individuals who ingested less than the EAR (50 g) of the DRIs_J was low and was marginally observed in those aged 80 years or more. Figure 1B(a) illustrates the differences in the percentile curves of usual protein intake by age in females. The usual protein intake decreased with an increase in age. However, a few women ingested less than the EAR (40 g) even in their old age. Figure 2A(a) illustrates the differences in the percentile curves for the usual intake of salt by age in males. The median salt intake was nearly constant at high levels (~10 g) but became more dispersed with increases in age. Therefore, >80% of the participants in all age groups consumed more than the DG (7.5 g). Figure 2B(a) shows that approximately 90% of the female participants ingested salt beyond the upper limit (6.5 g) of the DG at all age groups. In addition, the between-individual variance of salt intake for females did not exhibit any substantial changes by age (Figure 2B(b)). Figure 3A(a) demonstrates changes in the percentile curves for the usual intake of potassium by age in males. The median potassium intake was higher, whereas the percentile curves were more dispersed, with the increase in age. Consequently, the proportion of participants who consumed less than the DG (3000 mg) was greater than 25% at all age groups. In Figure 3B(a), the usual intake among females decreased with the increase in age. Therefore, the proportion that ingested potassium below the DG (2600 mg) slightly increased and reached ~25% at 85 years.
Finally, Figure 4A(a) shows the differences in the percentile curves for the usual intake of calcium by age in males. Calcium intake increased with an increase in age. At 65 years, the proportion who consumed less than the EAR (600 mg) was ~50%; however, this rate decreased with an increase in age. In Figure 4B(a), calcium intake was lower among females, and it gradually increased with an increase in age. At 65 years, the proportion who ingested less than the EAR (550 mg) was ~25%. However, because the EAR for individuals aged 75 years or more was set lower (500 mg), the proportion with less than the required amount did not differ considerably.
Within- and between-Individual Variances of Nutrients
Figures 1-4 further display the within- and between-individual variances of nutrient intake. For both males and females, there were several nutrients whose within-individual variances were large compared with their between-individual variances. In particular, the within-individual variances of protein for females (p = 0.037) (Figure 1B(b)), salt for females (p = 0.063) (Figure 2B(b)), and calcium for males (p = 0.008) (Figure 4A(b)) decreased with the increase in age. In other words, the day-to-day variations in the intakes of these nutrients were lower for old-old adults. Furthermore, with regard to protein (male and female) and calcium (male) intakes, the between-individual variance was less than the within-individual variance until the age of 70-75 years; above 70-75 years, however, the between-individual variance was higher than the within-individual variance.
Proportion of Individuals with a Usual Intake Less or More Than the Estimated Average Requirement (EAR) or Tentative Dietary Goals for Preventing Lifestyle-Related Diseases
Figure 5 presents the proportion of individuals with a usual intake less than the EAR or more than the DG for protein, salt equivalent, potassium, and calcium. The proportion of individuals with deficiencies in protein (male and female), potassium (female), and calcium (female) increased with the increase in age. However, the proportion of individuals with excess salt (male and female) did not differ much by age.
Discussion
The study estimated the usual intakes of protein, sodium, potassium, and calcium by age group, and the proportions of deficiencies/excesses in intake were assessed by comparing the results with the Dietary Reference Intakes for the Japanese population. The results confirmed differences in the usual intakes of nutrients by age. Among the nutrients, protein, one of the vital nutrients for older adults, exhibited a decrease in intake with the increase in age. Individuals whose intake was under the EAR appeared at age 70-75 years, and the percentage of those with consumption under the EAR increased with the increase in age (male and female). A previous study reported an association between aging, frailty, and protein intake, including total protein [21].
The within-individual variance of protein for the female participants was remarkably reduced with the increase in age. However, the proportion of old-old adults who consumed less than the EAR was low. Many studies have confirmed low levels of food diversity among males, and both males and females may have lower meat intake with aging [8,22]. In the future, investigating the relationship between nutrients and food types is imperative [23], with the coding of food groups as a basis [24].
With regard to salt and potassium intakes among females, no difference was observed in salt intake by age, whereas potassium intake decreased with the increase in age. This finding could be attributed to a preference for strong, salty seasoning. In the study, 56% of the females were hypertensive, which was consistent with the results of a previous study conducted by the researchers [8] and of another study on elderly Chinese individuals [25]. In addition, Villela et al. [26] reported that older adults with hypertension prefer salty foods more than those without hypertension. For old-old adults in particular, methods to support appropriate dietary intake need to be sought.
The usual calcium intake among males increased with the increase in age. One reason for this result is that the target population comprised older adults living alone; that is, they could live autonomously without using elderly care facilities, and such individuals tended to have nutrient intakes above the EAR. Such individuals are also likely to be careful about their calcium intake. The proportion of individuals below the EAR decreased with the increase in age. Studies conducted in China and Germany reported similar results; in other words, the dietary intake of older adults living independently, including super-aged individuals, was adequate for the majority of the evaluated nutrients [25,27].
The features of nutritional status in the older population emerged clearly in terms of percentiles of usual intake as well as within- and between-individual variances when the AGEVAR MODE was applied to the 2-day dietary survey data. The variance of 1-day dietary intakes in a population comprises between- and within-individual variances [5,28]. Many studies have encouraged non-consecutive 2-day surveys as short-term dietary surveys for estimating usual nutrient intake [29]. Using the AGEVAR MODE, the present study indicated the feasibility and applicability of non-consecutive 2-day surveys of older adults to assess usual dietary intake. In fact, depending on the nutrient, the variance of usual intake and the within- and between-individual variances differed [30]. Therefore, the AGEVAR MODE successfully estimated the percentiles of usual nutrient intakes and the within- and between-individual variances of interest, and it enabled the assessment of differences in nutritional status by age in an elderly population. However, the model can only address dietary data that are normally distributed after appropriate transformation and, therefore, may require further improvement so that it can be successfully applied to data with large numbers of null amounts or to food-item data [31]. Although the AGEVAR MODE yields smaller standard errors of age-dependent estimates compared with other models [6], the sample size of the current study may not be sufficient, especially for males. To address this issue, we conducted post hoc power analyses to detect a significant (p < 0.05) change in within- and between-individual variances and their ratio according to age, given the effect size (the percent change per 10-year increment of age) observed in the current study. Among the 4 nutrients (protein, salt equivalent, potassium, and calcium), the median effect size of within- and between-individual variances and their ratio (and post hoc power) was −27% (0.30), +42% (0.18), and +91% (0.32) in men, respectively, and −19% (0.30), +6% (0.04), and +31% (0.13) in women, respectively. Thus, the non-significant changes according to age may be due to the small sample size in men and the small effect sizes in women.
Although the model accommodates various median amounts of usual intake, its estimation is based on the assumption that within- and between-individual variances change monotonically with age. We consider this assumption natural in real-world settings because a previous study suggested that, with increasing age, older adults tend to have lower within-individual variance and higher between-individual variance [6,25]. However, the variances may change over time among young and middle-aged adults, some of whose dietary lives change drastically [32]: they may live alone far from their hometowns for work, marry and raise children, or even divorce. Employment status and family-member data may need to be considered in future studies aimed at improving such statistical models for estimating the distribution of usual intake.
The present study has certain limitations that should be addressed in the future. First, the researchers omitted weekends and holidays from the dietary survey because many of the subjects were retired. However, dietary behavior and meals during weekends may differ from those during weekdays because meals may be shared with relatives, children, or grandchildren. In addition, the survey was conducted between October and November, which is the autumn season. Further studies should account for seasonal effects and changes in food types and nutrients [33]. Second, the participants included only those who were living alone. In this regard, comparing the results between older adults who live alone and those who live with others is necessary. Third, the number of male participants in the survey was smaller than that of female participants. The researchers faced difficulties in obtaining consent for study cooperation from male subjects. In addition, this study analyzed the data of 361 individuals (males = 109; females = 252; 173 persons were excluded) and identified significant differences between the characteristics of the included and excluded participants. The proportion of individuals who required special nursing care was significantly higher among the excluded participants. However, this proportion was considered too small to influence the results because the study targeted individuals who could eat independently. Fourth, future studies should consider that the target population in the central area is smaller than the real population because equal numbers of prospective participants were selected on the basis of the distance from home to the supermarket. Fifth, the study was a cross-sectional analysis and therefore could not examine changes over time; longitudinal research is required in the future.
Despite these limitations, the study demonstrated differences in the usual intake and within- and between-individual variances of nutrients according to the age of older adults living alone.
The proportion of old-old adults living alone has increased in Japan. Therefore, this study contributes to the assessment of recommended nutrient intakes and appropriate countermeasures for older adults, taking aging into consideration.
"year": 2021,
"sha1": "837058c54a5ebfcd24d50790f1d71efabd365998",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/13/5/1431/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e79a9d76fd741e3a468149b099e85c0b994e5e96",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Multi-Agent Actor-Critic with Hierarchical Graph Attention Network
Most previous studies on multi-agent reinforcement learning focus on deriving decentralized and cooperative policies to maximize a common reward and rarely consider the transferability of trained policies to new tasks. This prevents such policies from being applied to more complex multi-agent tasks. To resolve these limitations, we propose a model that conducts both representation learning for multiple agents using hierarchical graph attention network and policy learning using multi-agent actor-critic. The hierarchical graph attention network is specially designed to model the hierarchical relationships among multiple agents that either cooperate or compete with each other to derive more advanced strategic policies. Two attention networks, the inter-agent and inter-group attention layers, are used to effectively model individual and group level interactions, respectively. The two attention networks have been proven to facilitate the transfer of learned policies to new tasks with different agent compositions and allow one to interpret the learned strategies. Empirically, we demonstrate that the proposed model outperforms existing methods in several mixed cooperative and competitive tasks.
Introduction
In nature, the battle for dominance rights or desirable territories between individuals is a typical phenomenon (Smith and Price 1973). Occasionally, individuals in the same group cooperate to compete against enemy groups. They can gain stronger immunity against predators (Ugelvig and Cremer 2007) or powerful forces to overpower prey (Powell and Clark 2004). The cooperation and competition among agents are also important modeling paradigms in various engineering systems, such as smart grids (Dall'Anese, Zhu, and Giannakis 2013), logistics (Ying and Dayong 2005; Cao et al. 2013), and distributed vehicles/robots (Corke, Peterson, and Rus 2005; Fax and Murray 2004; Matignon et al. 2012). To control such complex systems composed of many interacting components, researchers have studied multi-agent reinforcement learning (MARL) for a long time.
Although a variety of MARL algorithms have been proposed, unresolved issues still exist when MARL is applied to realistic environments. First, in terms of the generality of the modeling framework, most models focus on deriving pure cooperation or competition among multiple agents by forcing all agents to seek a shared common reward, rather than modeling the relationships among heterogeneous agents in a mixed cooperative-competitive task (an issue of modeling flexibility). In addition, the models are limited in modeling a large number of agents owing to the curse of dimensionality in modeling centralized critics (an issue of scalability). This is connected with a more fundamental limitation, in which the trained model cannot be transferred to different tasks with different numbers of agents having different goals (an issue of transferability).
We herein propose a model, called Hierarchical graph Attention-based Multi-Agent actor-critic (HAMA), that conducts both representation learning for multi-agent systems and policy learning using multi-agent actor-critic in end-to-end learning. HAMA employs a hierarchical graph neural network to effectively model the inter-agent relationships in each group of agents and the inter-group relationships among groups. HAMA additionally employs inter-agent and inter-group attentions to adaptively extract the state-dependent relationships among multiple agents, which is proven to be effective for helping policies to adjust their high-level strategies (e.g., cooperate or compete). The combination of hierarchical graph neural networks with two distinct attention layers, which we refer to as a Hierarchical Graph Attention neTwork (HGAT), effectively processes the local observation of each agent into a single embedding vector, an information-condensed and contextualized state representation for each individual agent. HAMA sequentially uses the embedding vector for each agent to compute the individual critic and actor for deriving decentralized policies.
We empirically demonstrate that HAMA outperforms existing MARL algorithms in four different game scenarios. Furthermore, we demonstrate that the policies trained by HAMA in a small-scale game with a small number of agents can be applied directly to control a large number of agents in a new game. Finally, we demonstrate that inter-agent and inter-group attentions can be used to interpret the derived policy and decision-making process.
Related Work
We categorize existing MARL studies into two categories, the learning-for-consensus approach and the learning-to-communicate approach, depending on how the consensus among multiple agents is derived.
Learning-for-Consensus Approaches. Learning-for-consensus approaches in MARL focus on deriving decentralized policies (actors) for agents, each of which maps a local observation for an agent to an individual action for it. To make such individually chosen actions coordinated for collaborative tasks, these approaches first construct a centralized critic for either a global reward or individual rewards and use the centralized critic to derive the decentralized actors. MADDPG (Lowe et al. 2017) has extended DDPG (Lillicrap et al. 2015) to multi-agent settings for mixed cooperative-competitive environments. COMA (Foerster et al. 2018) constructs a centralized critic and computes an agent-specific advantage function to derive a decentralized actor. FDMARL (Zhang et al. 2018) has proposed a distributed learning approach in which each agent learns a global critic using its local reward and the critic parameters transferred from networked neighboring agents. Because these models directly use the state or observation in constructing critic or actor networks, it is difficult to apply them to a large-scale environment or transfer them to new environments. As a way to resolve the scalability issue, the concepts of graph neural networks (Gori, Monfardini, and Scarselli 2005; Scarselli et al. 2009; Battaglia et al. 2018) and attention networks have been employed to effectively represent the global state and, accordingly, centralized critics. For example, MAAC (Iqbal and Sha 2019) employs the attention network and graph neural network to model a centralized critic, from which decentralized actors are derived using soft actor-critic (Haarnoja et al. 2018). In addition, DGN (Jiang, Dun, and Lu 2018) applies a graph convolutional network to model a centralized Q-function for each agent using a deep Q-network (Mnih et al. 2015).
Learning-to-Communicate Approaches. Another method to achieve consensus among decentralized policies in cooperative environments is communication among agents. In this framework, each agent learns how to transmit messages to other agents and how to process the messages received from other agents to determine an individual action. During centralized training, such message generating and processing procedures are learned to induce cooperation among agents. During the execution phase, agents exchange messages to determine their actions. CommNet (Sukhbaatar, Fergus, and others 2016) uses a large single neural network to globally process all the messages transmitted by all agents, and the processed message is used to guide all agents to cooperate. BiCNet (Peng et al. 2017) has proposed a communication channel in the form of a bi-directional recurrent network to accommodate messages from any number of agents, thus resolving the scalability issue. To effectively specify the communication structure while considering the relative relationships among agents, ATOC has proposed a communication channel in the form of a bi-directional LSTM with an attention layer. The attention layer in ATOC enables each agent to process the messages from other agents differently depending on their state-dependent importance. Similar to ATOC, IC3Net (Singh, Jain, and Sukhbaatar 2019) has been proposed to actively select and mask messages from other agents during communication by applying a gating function in the message aggregation step. TarMAC (Das et al. 2019) has proposed a targeted communication protocol to determine whom to communicate with and what messages to transmit using attention networks. Such communication-based methods use attention networks to learn the communication structure/protocol effectively. Although communication helps each agent to use extensive information in its individual decision making, this approach requires a separate communication channel and a well-established communication environment for message exchange. Furthermore, some communication-based methods can be applied only to cooperative tasks because it does not make sense to receive messages from competitive agents while playing a game.
Novelties of HAMA. As introduced, graph neural network and attention network structures have been widely employed (1) to model a critic for scalable learning in learning-for-consensus approaches, and (2) to model the communication structure in learning-to-communicate approaches. HAMA, our proposed model, employs the HGAT to embrace the merits of a graph representation in both learning-for-consensus and learning-to-communicate approaches. The proposed HGAT, capturing enhanced relative inductive biases (Battaglia et al. 2018), enables HAMA to model both a centralized critic and decentralized actors (1) that can be scalable and transferable and (2) that can effectively utilize the contextualized state representation. In particular, because each agent learns how to represent its partial observation through graph embedding, the approach differs from communication approaches that utilize messages processed by other agents. Therefore, our approach is considered a kind of learning-for-consensus approach and can easily be applied to mixed cooperative and competitive environments.
Background
Partially Observable Markov Game (POMG). A POMG is an extension of the partially observable Markov decision process to a game with multiple agents. A POMG for N agents is defined as follows: s ∈ S denotes the global state of the game; o_i ∈ O_i denotes a local observation that agent i can acquire; a_i ∈ A_i is an action for agent i. The reward for agent i is computed as a function of state s and joint action a as r_i : S × A_1 × · · · × A_N → R. The state evolves to the next state according to the state transition model T : S × A_1 × · · · × A_N → S.
Multi-Agent Deep Deterministic Policy Gradient (MADDPG).
Deterministic policy gradient (DPG) (Silver et al. 2014) aims to directly derive a deterministic policy, a = μ(s; θ), that maximizes the expected return J(θ) = E_{s∼D}[Q(s, μ(s; θ))], where D is an experience replay buffer that stores (s, a, r, s′) samples. Deep deterministic policy gradient (DDPG), an actor-critic model based on DPG, uses deep neural networks to approximate the critic and actor of each agent.
MADDPG is a multi-agent extension of DDPG for deriving decentralized policies for the POMG. MADDPG comprises an individual Q-network and policy network for each agent. The Q-network for agent i is learned by minimizing the loss L(θ_i) = E[(Q_i(o, a) − y_i)²], where y_i = r_i + γQ′_i(o′, a′) is the target value computed with target networks, and o = (o_1, . . . , o_N) and a = (a_1, . . . , a_N) are, respectively, the observations and actions of all agents.
Graph Attention Network (GAT). GAT (Veličković et al. 2017) is an effective model to process structured data represented as a graph. GAT computes the node-embedding vector of a graph node by aggregating the node embeddings h_j from neighboring nodes {j ∈ N_i} that are connected to the target node i as h_i = σ(Σ_{j∈N_i} α_ij W h_j). The attention weight α_ij = softmax_j(e_ij), where e_ij = a(W h_i, W h_j), quantifies the importance of node j to node i in computing the node-embedding value h_i.
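To make the GAT aggregation concrete, here is a minimal NumPy sketch of one attention-weighted aggregation step. The scoring function a is taken as a learned vector applied to concatenated embeddings and the nonlinearity σ as tanh; both are simplifying assumptions (the original GAT uses LeakyReLU attention scores and multi-head variants).

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def gat_embed(h_i, neighbors, W, a):
    """One GAT-style step: h_i' = sigma(sum_j alpha_ij * W h_j)."""
    wh_i = W @ h_i
    wh_js = [W @ h_j for h_j in neighbors]
    # e_ij = a(W h_i, W h_j): a learned vector scoring the concatenation.
    e = np.array([a @ np.concatenate([wh_i, wh_j]) for wh_j in wh_js])
    alpha = softmax(e)  # attention of node i over its neighbors
    return np.tanh(sum(al * wh for al, wh in zip(alpha, wh_js)))

rng = np.random.default_rng(1)
d_in, d_out = 4, 8
W = rng.normal(size=(d_out, d_in))
a = rng.normal(size=2 * d_out)
feats = [rng.normal(size=d_in) for _ in range(4)]
print(gat_embed(feats[0], feats[1:], W, a).shape)  # (8,)
```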
Methods
HAMA comprises a representation learning framework for processing the state represented as a graph and a multi-agent actor-critic network for deriving decentralized policies for the agents. As shown in Figure 1, HAMA represents the game state as a graph and computes for each agent the node-embedding vector that compactly summarizes each agent's status in relation to other groups of agents and the environment. The computed node-embedding vector for each agent is subsequently used to compute the Q-value and action in an actor-critic framework.
State Representation Using HGAT
We propose HGAT, a network that stacks multiple GATs hierarchically and processes each agent's local observation into a high-dimensional node-embedding vector representing the hierarchical inter-agent and inter-group relationships of each agent.
Agent Clustering. The first step in representation learning is to cluster all the agents into distinct groups C_k using prior knowledge or data. For pure cooperative tasks, all the agents can be categorized into a single group. If the target task involves competition between two groups, we can cluster the agents into two groups. In addition, we can cluster into a group the agents that do not execute any actions but participate in the game (e.g., terrain components or obstacles). In this study, we assume that the agents can be easily clustered into K groups using prior knowledge about the agents, which implies that HAMA utilizes enhanced relative inductive biases regarding the group relationships.
Node-Embedding Using GAT in Each Cluster. Agent i has the local observation o_i = {s_j | j ∈ V(i)}, where s_j is the local state of agent j, and V(i) specifies the visual range of agent i. The visual range can be specified depending on the environment settings so that agent i can observe the agents within a certain distance. Thus, our agent can observe nearby agents as a partial observation. Agent i computes different node-embedding vectors h̄^k_i for the different groups k = 1, ..., K to summarize the individual relationships between agent i and the agents from each group. To compute h̄^k_i, the pairwise embeddings h^k_ij of the agents j in group k observed by agent i are aggregated with inter-agent attention as h̄^k_i = σ(Σ_j α^k_ij h^k_ij). The inter-agent attention weight α^k_ij quantifies the importance of the embedding h^k_ij from agent j to agent i. The inter-agent attention weight is computed as softmax_j(e^k_ij), analogously to the GAT attention above. The attention can be extended to multiple attention heads (Vaswani et al. 2017), but the current study employs only plain and classical attention networks. It is noteworthy that agent i computes the embedding h^k_ij by processing its own observation of other agents; therefore, the other agents are not required to send messages to agent i, unlike learning-to-communicate approaches that require agents to exchange their hidden vectors.
Hierarchical State Representation Using Multi-Graph Attention. This step aggregates the group-level node-embedding vectors h̄^1_i, . . . , h̄^K_i of agent i into the information-condensed and contextualized state representation of agent i as h_i = Σ_{k=1}^{K} β^k_i h̄^k_i, while considering the relationships between agent i and the groups of other agents. The inter-group attention weight β^k_i guides which group agent i should focus more on to achieve its objective. For example, if β^k_i is large for the group to which agent i belongs, it implies that agent i focuses on cooperating with the agents in its own group. Otherwise, agent i focuses more on competing with agents from different groups. The inter-group attention weight is computed as softmax_k(e^k_i), analogously to the inter-agent attention. The hierarchical state representation is particularly useful when considering mixed cooperative-competitive games where each agent or group possesses its own objectives, as will be empirically shown by various experimental results in this study. The embedding and attention functions in this study comprise a two-layered MLP with 256 units and ReLUs.
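Reusing the softmax and gat_embed helpers from the previous sketch, the two-level HGAT aggregation can be sketched as follows. The inter-group scorer v, the parameters shared across groups, and all shapes are assumptions for illustration; in practice each group-level GAT could use its own parameters.

```python
import numpy as np  # softmax and gat_embed as defined in the previous sketch

def hgat_embed(h_i, groups, W, a, v):
    """groups: K lists of observed neighbor features, one list per agent group."""
    # Level 1: one inter-agent-attention embedding per group (the h-bar^k_i above).
    group_vecs = [gat_embed(h_i, members, W, a) for members in groups]
    # Level 2: inter-group attention beta over the K group embeddings.
    beta = softmax(np.array([v @ g for g in group_vecs]))
    return sum(b * g for b, g in zip(beta, group_vecs)), beta

rng = np.random.default_rng(2)
d_in, d_out = 4, 8
W, a, v = rng.normal(size=(d_out, d_in)), rng.normal(size=2 * d_out), rng.normal(size=d_out)
me = rng.normal(size=d_in)
groups = [[rng.normal(size=d_in) for _ in range(3)] for _ in range(2)]  # K = 2 groups
h, beta = hgat_embed(me, groups, W, a, v)
print(h.shape, beta)  # (8,) plus the two inter-group weights
```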
Multi-Agent Actor-Critic
The proposed method uses the embedding vectors h^C_i and h^A_i as the inputs of the shared critic and actor networks, respectively, where c(i) is the group to which agent i belongs. Note that the embedding vectors h^C_i and h^A_i are computed separately using two different HGATs; computing h^C_i requires the joint action a in the training phase under CTDE (centralized training with decentralized execution). Additionally, agents in the same group share the actor and critic networks for generalization.
Compared to using raw observations as inputs for the critic and actor networks (Lowe et al. 2017), using node-embedding vectors computed from HGAT as inputs offers the following advantages: (1) a node-embedding vector can be computed by considering the hierarchical relationships among agents, i.e., relative inductive biases, thus providing a contextualized state representation; (2) it is scalable to a large number of agents, as the dimension of a node-embedding vector does not change with the number of agents; and (3) HGAT enables the learned policy to be used in environments of any agent or group size, i.e., the property that transfer learning aims to achieve.
The training of HAMA is similar to that of MADDPG. The shared critic Q_k for agent i in group k is trained to minimize the loss L(θ_k) = E[(Q_k(h^C_i) − y_i)²], where y_i = r_i + γQ^{μ′}_k(h′^C_i) is the target value, and Q^{μ′} and μ′ are, respectively, the target critic and actor networks with delayed parameters for stable learning (Lillicrap et al. 2015). In the CTDE framework, the joint observation and action are assumed to be available for training. The shared actor μ_k for agent i in group k is then trained using a gradient ascent algorithm with the gradient computed as ∇_{θ_k} J = E[∇_{θ_k} μ_k(h^A_i) ∇_{a_i} Q_k(h^C_i) |_{a_i = μ_k(h^A_i)}].
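The update just described is MADDPG-style. The following single-step PyTorch sketch shows the two optimizations; raw vectors stand in for the HGAT embeddings h^C and h^A, and the shapes, target value y, and agent index are all placeholders rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, n = 8, 2, 3  # assumed sizes: 3 agents
critic = nn.Sequential(nn.Linear(n * (obs_dim + act_dim), 64), nn.ReLU(), nn.Linear(64, 1))
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
a_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)

o = torch.randn(32, n, obs_dim)    # batch of joint observations from the buffer
a = torch.randn(32, n, act_dim)    # joint actions from the buffer
r = torch.randn(32, 1)             # rewards for agent i
y = r + 0.95 * torch.randn(32, 1)  # placeholder for the target-network value

# Critic step: minimize (Q(o, a) - y)^2 over the joint observation-action.
q = critic(torch.cat([o, a], dim=-1).flatten(1))
c_loss = ((q - y) ** 2).mean()
c_opt.zero_grad(); c_loss.backward(); c_opt.step()

# Actor step: ascend dQ/da through agent i's own action (index 0 here);
# gradients also reach the critic, but only the actor optimizer steps.
a_i = actor(o[:, 0])
a_joint = torch.cat([a_i.unsqueeze(1), a[:, 1:]], dim=1)
a_loss = -critic(torch.cat([o, a_joint], dim=-1).flatten(1)).mean()
a_opt.zero_grad(); a_loss.backward(); a_opt.step()
```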
[Figure 2: the game environments, including a) Cooperative Navigation, b) 3 vs. 1 Predator-Prey, c) 3 vs. 3 Predator-Prey, and d) The More-The Stronger.]
Experiments
We evaluate HAMA in environments from prior work (Jiang, Dun, and Lu 2018) as well as in mixed cooperative-competitive environments extended from well-known environments. As baseline algorithms for comparing performance, we consider MADDPG and MAAC because they belong to learning-for-consensus approaches and are designed to process only local observations during the execution phase, as HAMA does. These algorithms are more general for mixed cooperative-competitive games where communication is not always possible. For cooperative navigation, we additionally consider ATOC, one of the learning-to-communicate approaches, as a baseline because this game is fully cooperative; thus, a communication-based method can be naturally considered. All the performance measures are obtained by executing the trained policies with 3 different random seeds on 200 episodes. Regarding the visual range of the agents, we assume that each agent observes up to the three nearest neighboring agents per group, with relative positions and velocities, in all the experiment settings.
Cooperative Navigation
First, the proposed model is validated in cooperative navigation, where only cooperation among agents exists. In the game, all the agents, which are homogeneous, are required to reach one of the landmarks without colliding with each other. Each episode starts with n randomly generated agents and landmarks and ends after 25 timesteps. During an episode, each agent receives −d, the distance to the nearest landmark, as a reward. In addition, each agent receives an additional reward of −1 whenever it collides with another agent during navigation. The optimal strategy is for each agent to occupy its own distinct landmark. Figure 3 compares the normalized mean penalties of four different MARL algorithms (instead of representing the results with rewards, which are negative, we present them in terms of penalty, the negative of the reward, since this is more intuitive for this game). A smaller mean penalty implies being closer to the nearest landmark and fewer collisions with other agents. The penalty is averaged over every 10,000 steps until reaching 3 million steps.
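A minimal sketch of the per-step reward just described; the 2-D positions and the collision radius are illustrative assumptions.

```python
import numpy as np

def nav_reward(agent_pos, other_pos, landmarks, collide_dist=0.1):
    """-d to the nearest landmark, minus 1 per collision with another agent."""
    d = min(np.linalg.norm(agent_pos - lm) for lm in landmarks)
    collisions = sum(np.linalg.norm(agent_pos - p) < collide_dist for p in other_pos)
    return -d - collisions

agent = np.array([0.0, 0.0])
others = [np.array([0.05, 0.0]), np.array([1.0, 1.0])]
lms = [np.array([0.3, 0.4]), np.array([-2.0, 0.0])]
print(nav_reward(agent, others, lms))  # -0.5 for distance, -1 for one collision
```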
In training, as shown in Figure 3, HAMA converges quickly to the lowest value. This is possible because HAMA effectively represents the state of each agent by considering their relative positions and velocities through HGAT. Table 1 compares the normalized mean penalties that the agents obtain during testing over 200 episodes with the trained models. ATOC achieves a smaller mean penalty than MADDPG and MAAC. Because ATOC employs an active communication scheme based on an attention network, it can effectively derive cooperative behavior among agents. Our model has a lower mean penalty than ATOC. This indicates that the cooperative strategies trained by HAMA can effectively induce coordination among agents even without active communication.
Due to the use of a shared actor with an efficient state representation, the policies trained by HAMA and ATOC can be applied to the cooperative game with any number of agents/landmarks, whereas the policies trained by MADDPG and MAAC cannot be transferred. The performance of transfer learning is also summarized in Table 1. When the policies trained with 3 agents are used to play the game with 50 and 100 agents, HAMA has a lower average penalty and higher percentages of landmark occupation (provided in parentheses in the table) by the participating agents. Note that when we conduct the transfer learning experiments, we reduce the size of the agents (25 times smaller) to fit a large number of agents in the same environment, where each agent can observe the three nearest agents and three landmarks.
3 vs. 1 Predator-Prey
A predator-prey game consists of two groups of agents competing with each other, along with obstacles that participate in the game but do not take actions. The goal of the three homogeneous predators is to capture one prey, while the goal of the prey is to escape from the predators. For the predators to capture the prey, they need to cooperate with each other because of their slower speed and acceleration compared with those of the prey. Each predator gets a positive reward, +10, when it catches the prey, and the prey receives a negative reward, −10, when it is caught by a predator. When the prey leaves a certain zone, it receives a negative reward to prevent it from moving farther away. It is noteworthy that each agent seeks to maximize its own accumulated reward, which results in the competition between predators and prey. We compare the performance of HAMA with two baseline models: MADDPG and MAAC. Each model is trained via self-play (i.e., predators and prey are trained with the same model), and the trained policies are validated by having them compete against policies trained by the other models. Table 2 summarizes the average score that a predator can obtain per step in an episode. The results indicate that the predators trained by HAMA achieve higher or similar scores compared with MADDPG and MAAC when competing with the prey trained by the other models. Similarly, the prey trained by HAMA yields the lowest score when competing with the predators trained by the other models. Note that when HAMA plays the role of the single prey, where no cooperation is required, it still performs best in defending itself from the predators because it effectively configures the relationships with the predators and obstacles by using HGAT.
3 vs. 3 Predator-Prey
The next game we consider is the 3 vs. 3 predator-prey game, a variant of the original 3 vs. 1 predator-prey game. The game rules are identical to those of the 3 vs. 1 predator-prey game. In this game, if a predator recaptures a prey that has already been captured, no reward occurs. Instead, each predator receives an additional reward, +10 × t_r, when the predators capture all the prey, where t_r is the number of remaining timesteps in the episode, and the game ends. Although the game is similar to the original predator-prey game, the optimal strategy of the agents is no longer clear because more diverse and complex strategic interactions occur between the two groups of agents. For example, a predator can choose either to cooperate with other predators to chase a prey or to capture a prey individually if the prey is nearby. In addition to MADDPG and MAAC, we consider two heuristic strategies for the predators. In Heuristic 1, all the predators chase the same prey that has not yet been captured. In Heuristic 2, each predator chases the prey closest to it. Table 3 compares the results of the game when the predators and prey, each of which is trained by self-play, compete against each other. As shown in the table, the predators trained by HAMA achieve the highest scores against the prey trained by all other algorithms. Similarly, the prey trained by HAMA defend best against the predators trained by all other algorithms, including the two heuristic strategies. The performance of HAMA is incomparably superior to those of the other methods in both the predator and prey roles. This is remarkable in that HAMA and MAAC achieve similar performance in the 3 vs. 1 predator-prey game, where the only strategy of the predators is to cooperate to capture a unique prey. Meanwhile, in the 3 vs. 3 predator-prey game, each predator can choose from various strategies, such as cooperating with other predators or chasing prey individually. When chasing prey, a predator can also choose which prey to chase. The superior performance of HAMA is possible because it learns to better represent the hierarchical relationships among agents in the dynamic game owing to the relative inductive biases imposed by HGAT.
We validate our hypothesis on the success of HAMA's strategy by conducting an ablation study, the results of which are summarized in Table 4. Note that the SG-IAA has a similar architecture to MAAC. Compared with the SG-IAA, a hierarchical graph attention architecture always scores higher, regardless of whether attention is used. We attribute this effectiveness to the use of enhanced relative inductive biases regarding both the agent-level and the group-level interactions. In comparing the roles of the attentions, when both attentions are considered, the HG-IAGA outperforms the others. The combination of the hierarchical graph structure and the specially designed attentions is a key factor that induces the superior performance of HAMA.
Transfer Learning. In general, when the number of predators is large and the number of prey is small, the predators have a higher chance of winning the game (i.e., capturing all the prey within a single episode). As shown in Figure 4, this general trend is well realized when the predator and prey policies trained by HAMA in the 3 vs. 3 predator-prey game are transferred to play an m vs. n predator-prey game. The figure shows that the success rate of the predators is close to 1 when m (the number of predators) > n (the number of prey).
Interpreting Strategies. We explain why a certain action of an agent is induced at a certain state by analyzing and interpreting the inter-agent and inter-group attention weights in HAMA. In Figure 5, the blue, red, and gray circles represent the predators, prey, and obstacles, respectively. The plots in each row show how each predator agent, represented by the blue circle with a black outline, attends to other agents in the same and different groups over time. The width of an arrow indicates the magnitude of the attention weight α^k_ij on the agent the arrow points to in each group. The blue and red bars at the top of each figure indicate the magnitudes of the inter-group attention weights β^k_i to the predator (k = 1) and prey (k = 2) groups, respectively. The black arrow indicates the agent's action (i.e., direction and speed). From a predator's perspective, the attention to predators and prey can be interpreted as attention to cooperation and competition, respectively. Figure 5 depicts a situation where predators 1 and 3 increase their cooperative attention (i.e., attention to the same group) over time to jointly chase the prey into a corner of the box. Meanwhile, predator 2 attempts to catch one prey, a strategy indicated by its attention to competition (i.e., attention to the different group).
The More-The Stronger
The More-The Stronger game keeps the framework of the 3 vs. 3 predator-prey game. The additional rule in this game is that when the prey are clustered together, only a group of predators whose size is equal to or larger than that of the clustered prey can capture them. For example, one predator can capture one prey by itself, but three gathered predators are required to capture three gathered prey. HAMA outperforms the other models in this game.
Conclusions
We herein proposed a multi-agent actor-critic model based on state representation by a hierarchical graph attention network. Empirically, we demonstrated that the learned model outperformed other MARL models on a variety of cooperative and competitive multi-agent environments. In addition, the proposed model has been proven to facilitate the transfer of learned policies to new tasks with different agent compositions and allow one to interpret the learned strategies.
"year": 2019,
"sha1": "4f6a187dea2d9aad945d76332aa73e7060a28702",
"oa_license": null,
"oa_url": "https://ojs.aaai.org/index.php/AAAI/article/download/6214/6070",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "8980fcedfb8777e64ba224deff901b46191a169f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Dyslexia and Stuttering: An Overview of Processing Deficits and the Relationship Between Them
Stuttering and dyslexia are two processing deficits that have an impact on a person's social and academic life, especially as they usually affect the pediatric population more than adults. Even though they affect different domains, they have similar characteristics in their pathogenesis, epidemiology, and impact on life. Both disorders affect a considerable percentage of the population worldwide and locally in Saudi Arabia, and they have similar epidemiological trends. Family history, genetic factors, early fetal and neonatal factors, and environmental factors have all been identified as risk factors for both conditions. Moreover, it has been established that both disorders share a common genetic and anatomical basis, along with a mutual disruption of diadochokinetic skills. While rehabilitative techniques can be used in both conditions, stuttering could also benefit from pharmacological interventions. This review emphasizes that extensive research should be done to explore both conditions, which impact different areas of a person's life, and the relationship between them, to better understand their pathophysiological origins.
Introduction And Background
Speech and language disorders are classified into many different categories, including stuttering, speech-sound disorder (SSD), specific language impairment (SLI), and developmental dyslexia (DD) [1].
The term "dyslexia" is drawn from the Greek words dys (deficiency or lack of) and lexicon (word).Dyslexia occurs when an individual has significant difficulties reading, writing, and understanding a text.Unlike other learning disabilities, intelligence is not affected, so people with dyslexia have adequate intelligence and typical schooling, and it is not essentially an all-or-nothing condition; the patient can exhibit a different degree of severity [2,3].
Stuttering is a speech disorder characterized by involuntary, audible, or silent repetitions or prolongations of sounds or syllables, along with disturbances in the fluency of verbal phrases.It is not easily controllable and may co-occur with other movements as well as negative emotions like fear, embarrassment, or agitation, and it is associated with higher levels of social anxiety [4,5].Being attentive is important for making a correct diagnosis in children, as it is believed that early therapy is critical.Stuttering in adults could be associated with significant psychosocial morbidities, such as social anxiety and poor quality of life [6].
As much as the two sound very different, it has been noted that the two disorders share commonalities in their anatomical basis, pathogenesis, and epidemiological features.Both of which were found to have an underlying difficulty in processing speech sounds in a word, which is why they are collectively termed processing deficits [7].
According to the review of literature we made, there are just a few studies that have discussed dyslexia and stuttering and whether or not a relationship between them exists.So, our review article's main aim is to provide an overview of the global and local epidemiological features of stuttering and dyslexia, along with their risk factors, clinical characteristics, and available methods of management, and explore the relationship between them.We searched for articles to be reviewed in Google Scholar and PubMed.The most recent articles that were published on a certain topic with the best study design (according to the hierarchy of evidence) were included in this study.Therefore, articles with topics that were explored in more recent studies or with a more reliable study design were excluded.
Review
Local and global epidemiological trends of stuttering and dyslexia
Stuttering is not an uncommon problem in children. The prevalence of stuttering varies significantly; however, it has been estimated at around 0.72% of the general population [8]. Approximately 5% of all children experience a period of stuttering lasting six months or longer. Later in childhood, about 75% of children who stutter will start to recover, and only 1% of them will have long-term consequences [9].
Locally, a study on social knowledge of stuttering in the Saudi population discussed the prevalence of stuttering in Saudi Arabia. According to the findings, stuttering has a higher incidence at young ages, and IQ scores do not correlate with the prevalence of stuttering in the Saudi community. Furthermore, stuttering affects more than 6% of the Saudi population, and it is dramatically more frequent in males than in females. Information about the effects of stuttering, on the other hand, was rarely reported. As a result, future studies with well-planned public education and health interventions for stuttering are encouraged [6].
Regarding the age of onset, stuttering is known to predominantly affect children, and different studies have explored the age of onset and the people at risk of developing stuttering. Studies showed that most stuttering cases start in early childhood, sometimes even before 18 months of age; only a few cases of stuttering starting during the teens have been reported [10]. In a study on the onset of stuttering in children, most children were noticed by their parents to have begun stuttering over periods of two months or less. One report presents data from six studies in the United Kingdom, Denmark, the USA, and Australia on the onset of stuttering among school and preschool children (up to the age of six). There is a remarkable difference between studies in the reported age of onset and the percentage of affected children. Moreover, regarding the patterns of stuttering, most parents said that syllable and word repetitions (especially short ones) were common in their children's early stuttering, with syllables repeated three to five times. Sound prolongations, silent intervals, and blocks in short-syllable and single-syllable word repetitions were seen in a lower percentage of children; all of these are symptoms of children who stutter [8].
As for dyslexia, it is one of the most common learning disorders and the most common cause of difficulties with reading, writing, and spelling. The global prevalence of dyslexia is estimated to be between 5% and 10% of the population, but it may be as high as 17% [11].
Among the Saudi population, a cross-sectional study conducted in Riyadh, Saudi Arabia, in 2015 at public and private girls' schools, which included a random sample of 720 students from the 1st to the 6th grade, showed that 172 of the 720 students (23.89%) had some form of learning difficulty. The most common types of learning disability were dyslexia and dysgraphia, representing 31.4% and 27.3%, respectively, as shown in Table 1. Academic performance is lower for students with learning difficulties than for other students. The results showed that dyslexia was the most common learning difficulty among the 720 students, at 31.4%, which corresponds to 7.5% of the studied sample [7].
Risk factors of stuttering and dyslexia
In the case of stuttering, the presence of monosyllabic word repetitions and sound prolongations is related to emotional stress, according to an analytical cross-sectional study conducted at the Fluency Studies Laboratory of a public university's Department of Speech and Hearing Disorders. Also, having relatives who stutter, a history of delayed childhood development, and a late start to speech or learning were mentioned as factors that increase the risk of speech disorders. Physical stress and inappropriate family attitudes have also been implicated in the development of stuttering [12,13]. A secondary analysis of data done in 2010 identified being fidgety and restless as a risk factor, which might point towards the coexistence of attention deficit hyperactivity disorder (ADHD). This study also found that troubles during birth and parental alcohol abuse are possible factors that may cause stuttering [14]. A family history of stuttering, unfavorable performance in phonological assessment, more frequent stuttering-like dysfluencies, and poorer nonword repetition task performance were linked to the persistence of childhood stuttering rather than its development [15].
As for dyslexia, risk factors may include premature birth and prenatal alcohol and nicotine exposure, which may alter brain development in the fetus [16]. In 2012, a study identified maternal smoking during pregnancy, birth weight, and socioeconomic status as possible environmental risk factors for developmental dyslexia, as they are thought to increase the genetic liability to developing dyslexia [17]. Moreover, a study by the same research group in 2013 identified the risk of miscarriage, maternal and paternal age at childbirth, and the educational level of both parents during the child's first three years as potential risk factors [18]. Also, a cross-sectional study investigating the prevalence and risk factors of dyslexia in China proposed that a child's degree of engagement in active learning may be a factor contributing to the development of dyslexia [19]. The frequency of literacy-related activities, time spent on electronic devices, and restrictions placed on children's use of electronic devices have also been implicated in the development of this condition [20].
As for genetic risk factors, a systematic review by Becker et al., 2017, found that many genes have been discussed for their possible association with the development of dyslexia (i.e., DYX1C1, DCDC2, DYX9, and DYX2). However, another study found no evidence that the proposed genes play a role [21].
Despite that, a study approved by the University of York's Psychology Department Ethics Committee and the NHS Research Ethics Committee found that having a family history of dyslexia or other learning disorders is one of the main risk factors. Dyslexia has a hereditary aspect, as it runs in families; this has been known for many years, and there is considerable evidence of its association with candidate genes. However, these hereditary (familial) effects are not very clear because learning disabilities are influenced by both genetic and environmental factors. Even if other risk factors were controlled, people with dyslexia would still not learn like people without it, which is explained by gene-environment correlation (rGE). Because a parent's genotype is related to both the child's genotype (here, a genetic basis for dyslexia) and the child's environment (for example, poor-quality education), this represents a passive rGE, while an active rGE refers to children actively selecting their environment for genetically influenced reasons, for example, a child who has a genetic predisposition and also avoids studying [22].
Clinical presentation of stuttering and dyslexia
Stuttering is a speech disorder in which the flow of speech is disrupted by repetitions or prolongations of words, syllables, and sounds. Patients with stuttering might also have difficulties starting words. Short pauses or silences between syllables or sentences or within a word (broken word), and excessive tenseness, tightness, or movement of the face or upper body while pronouncing a word, are also features of the disorder. Moreover, it may be accompanied by excessive eye blinking and jaw or lip tremors. Symptoms of stuttering tend to be exacerbated by public speaking or anxiety; they are often better when singing or speaking alone [13].
Dyslexia symptoms are often hard to detect before school age, when the child is noted to have learning difficulties. Symptoms before school often include late speech, slow learning of new sentences and phrases, issues with forming sentences, and difficulty memorizing names, colors, or letters. Once the child starts school, the signs and symptoms become more obvious and easier to notice; they may include a learning level below the normal level for the child's age, difficulty recognizing (and hearing) similarities and differences in letters and words, difficulty pronouncing new and unfamiliar words, spelling issues, and spending a longer time on assignments that require reading or writing. In teens and adults, dyslexia may manifest as effortful and sluggish reading and writing, avoidance of reading and writing activities, issues with the pronunciation of names or words, problems retrieving words, difficulty learning a new language, and problems with memorization [16].
Pathogenesis of stuttering and dyslexia
Is it possible that stuttering-related candidate genes play a role in dyslexia pathogenesis? Mutations were discovered in the GNPTAB, GNPTG, and NAGPA genes in a study that began the analysis of candidate genes [23]. In another stuttering study, researchers found a mutation in the GNPTAB gene, as well as in the two related genes GNPTG and NAGPA, in large families and sporadic patients, confirming their relationship with stuttering [24-26]. Children with motor deficiencies or learning disabilities are more likely to stutter, suggesting that speech disorders and learning disorders are genetically related. As a result, these three genes (GNPTAB, GNPTG, and NAGPA) could predispose people to stuttering and may also be risk factors for other learning and speech disorders. The genetic architecture, which includes a variety of molecular mechanisms, is complex. Rare protein-coding mutations in the forkhead box P2 (FOXP2) transcription factor, for example, cause serious deficits in producing speech sound sequences, while common small-effect genetic risk variants in genes like CNTNAP2, ATP2C2, and CMIP are related to typical forms of language impairment [27].
Using functional neuroimaging techniques, it has been suggested that dyslexia originates from abnormal activation of the language networks of the left hemisphere, particularly the temporoparietal region, which is believed to be important for phonological processing, and the occipitotemporal region, which is important for visual word recognition. A decrease in gray matter has been reported in these regions [28].
Stuttering was once thought to be a mainly psychogenic condition, and the severity of stuttering is affected by arousal, nervousness, and other factors. As a result, two-factor models of stuttering have emerged. The first factor, most likely a structural or functional central nervous system (CNS) abnormality, is thought to be the cause of the condition. The second factor is avoidance learning, which reinforces the first [29]. Differences in brain anatomy, function, and the regulation of dopamine levels have been associated with stuttering and are thought to be attributable to genetic factors [6]. On the other hand, it is also hypothesized that stuttering results from genetic, environmental, and epigenetic interactions that shape the anatomy and function of the child's brain. It has been noticed that stuttering results from defective interarticulatory coordination patterns, which are well-functioning in fluent adults. This is hypothetically attributed to the inability of the central nervous system to develop well-functioning muscle synergies and coordinated motor programs. These unstable patterns are suggested to originate from abnormal left speech motor and premotor areas and abnormal connections between motor, language, and auditory areas.
Recently, a delay in the development of speech-motor control is thought to play a role in early dyslexia, which contributes to its persistence [30].
Management of stuttering and dyslexia
Stuttering can be treated in different ways, according to a review by Maguire et al., 2020. First, pharmacological therapy includes first-generation dopamine-blocking medications and a variety of second-generation dopamine-blocking medications; the first generation has more serious side effects than the second. Even though numerous studies have demonstrated the efficacy of other pharmacological treatments in reducing disease burden, such as alpha receptor agonists, GABA agonists, and calcium channel blockers, no drugs have been approved by the FDA to date. Recently, two active drugs, ecopipam and valbenazine, have been undergoing clinical trials. The second option is speech therapy, which has shown improvement in patients with stuttering. Finally, cognitive behavioral therapy is a psychotherapeutic method that can improve outcomes for patients with stuttering; it is also associated with reduced anxiety, and patients become more confident about speaking in public. According to a clinical study of cognitive and behavioral therapy (CBT) combined with speech therapy, stuttering is similar to other neuropsychiatric disorders like depression, in which "talk" therapy combined with medication is the best treatment option. Moreover, CBT is useful in cases of stuttering, as it often coexists with social anxiety and other anxiety disorders. Because of their experience with both psychotherapy and psychotropics, psychiatrists, along with phoniatric doctors, speech-language pathologists, and psychologists, play a significant role in the medical team helping to improve stuttering outcomes, and psychiatrists should collaborate with these clinicians. Transcranial direct current stimulation is also under investigation for its ability to improve disfluency, as some evidence of its efficacy in improving fluency is available [31].
The Lidcombe program (LP) is also used for the treatment of stuttering and has shown promising evidence. In the LP, the parents of a child who stutters are instructed to verbally describe a situation to the child and ask him or her to say a sentence similar in both length and complexity. The LP is considered a direct method of treatment, as it engages the child in an activity that improves his or her speech. The Lidcombe program has been shown to improve speech fluency in children younger than six years of age. Several other methods have shown promising results in improving stuttering, such as the RESTART-DCM program, which aims to improve stuttering indirectly by balancing the communication demand and the child's ability, and the parent-child interaction (PCI) program, which aims to improve stuttering through interactive strategies. In adolescents and adults, speech restructuring therapy, often used in conjunction with transcranial direct current stimulation, has been shown to be effective in reducing stuttering and improving fluency, but with no effect on other aspects of stuttering, like social anxiety [32].
Technology-based therapy is an emerging technique in the management of stuttering. It includes virtual reality-based interventions, video self-modeling, telehealth technology, biofeedback, software programs, and other forms. Technology-based stuttering interventions have been shown to be well-tolerated and effective in reducing stuttering, according to a systematic review [33].
As for dyslexia, several treatment options exist, including reading programs in which children with dyslexia who have trouble matching letters to sounds and words to meanings are given additional reading and writing assistance. The children work with a reading teacher; they learn how to pronounce letters and words (phonics), read more quickly, and comprehend what they are reading.
A few reading programs are designed specifically for dyslexic children.For instance, Orton-Gillingham is a step-by-step method for teaching children how to match letters to sounds and identify letters in words.
Multisensory instruction teaches children to learn new skills using all of their senses (touch, sight, sound, smell, and movement). To learn how to read different words, the children may run their fingers over sandpaper letters. An individualized educational program (IEP), designed for children with learning disabilities such as dyslexia, outlines the child's requirements and how the school can assist in meeting them. Every year, the plan is updated by the parents and the school based on the child's improvement. A learning specialist may provide special education to the child in one-on-one or group sessions, either in the classroom or in a different room in the school. Accommodations outlined in an IEP describe special provisions for a child's needs that make learning easier: for instance, audiobooks, additional time to complete tests, or text-to-speech, a technology that reads words out loud from a device or document. A child's education should be a continuous process that is not limited to the classroom. Reading with children and assisting them in sounding out words they are having difficulty with is a good idea.
Moreover, some suggestions have been shown to help children and adults with dyslexia, like reading in a quiet environment, listening to books on CD or another device and reading along with the narration, asking teachers and colleagues for assistance when needed, joining a dyslexia support group for kids or adults, and getting enough sleep along with eating a well-balanced diet [34].
Furthermore, technology-based interventions have also been shown to improve phonological skills in dyslexic children, according to a meta-analysis of four studies [35].
Relationship between dyslexia and stuttering
In terms of epidemiological characteristics, one study found that 34% of dyslexic adults had had a speech disorder (stuttering) in childhood. This varied with the severity of dyslexia: people with moderate dyslexia had a lower incidence of childhood stuttering (15%), while those with severe dyslexia had a higher incidence (47%). Furthermore, 50% of adults with stuttering had had dyslexia as children. Phonological working memory, perception, and retrieval were equally decreased in adults with dyslexia and adults with stuttering.
According to these findings, stuttering and dyslexia may share a phonological deficit [10]. Moreover, both diseases show a large male bias (a 2:1 ratio for dyslexia and a 3.7:1 ratio for stuttering) [36].
Regarding the pathogenesis of both conditions, several studies have suggested that speech and language disorders may share a genetic basis. For example, the forkhead box P2 (FOXP2) gene has a significant role in the pathogenesis of stuttering and dyslexia [1,37,38]. Although stuttering and dyslexia seem to be quite different diseases, they have some significant similarities. Both disorders have been shown to share the same genetic factors (i.e., DRD2, GNPTAB, and NAGPTA) [24,39]. In both disorders, phonological processing is impaired (e.g., phonological awareness [40] and phonological working memory [41]). Furthermore, one study discussed the common pathogenesis from a different angle, namely diadochokinetic skill. The sample of that study included dyslexic, stuttering, and normal children. Diadochokinetic skill is the amount of time needed to receive and interpret motor gestures (usually short, repetitive, rhythmic movements closely tied to prosody in verbal speech) to produce precise and repeated syllables over time. Verbal diadochokinetic skill is the time required for the oral repetition of monosyllabic and multisyllabic verbal structures, usually measured using the maximum repetition rate paradigm.
Most studies indicate that children's diadochokinetic skills improve as the motor system matures, and by the age of 9-15 years, they should be comparable to an adult's. The study in question started with 120 children: 40 with stuttering, 40 with dyslexia, and 40 who were normal. Age, gender, and bilingualism were evenly distributed among the three groups. The inclusion criteria were children 6-11 years old with a diagnosis of stuttering for the stuttering group and dyslexia for the dyslexic group. The diadochokinetic skills of the selected children were assessed separately using a diadochokinetic task. Children were asked to repeat the monosyllables (/pa/, /ta/, and /ka/), which were presented one by one orally, in a quick, accurate, and fluent way. The examiner used a chronometer (an instrument for measuring time accurately despite motion or variations in temperature, humidity, and air pressure) to measure the time each child needed for 15 repetitions. To assess the children's ability to pronounce a long syllable sequence (pataka), the same method was used, and the time spent on 15 repetitions was recorded. The control group underwent a similar procedure. Finally, the study's findings show that children with stuttering and children with dyslexia have inefficient movement of the tongue, particularly in terms of diadochokinetic skills. These deficits indicate that the children's motor and speech functions were affected. Because of the similarity between the two groups, the study suggests that both disorders originate in the tongue. Furthermore, the motor deficits of both disorders can be explained by their shared neural basis; the malfunction may occur in a similar region of the brain in both disorders. To identify this dysfunction, different parts of the children's brains were examined with fMRI while they performed diadochokinetic tasks [42].
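As a toy illustration of this measurement, the diadochokinetic rate is simply the number of repetitions divided by the elapsed time; the timings below are invented, not the study's data.

```python
# Hypothetical timings (seconds) for 15 monosyllable repetitions.
def ddk_rate(repetitions, seconds):
    return repetitions / seconds  # syllables per second

for group, t in [("control", 3.4), ("stuttering", 5.1), ("dyslexic", 4.8)]:
    print(group, round(ddk_rate(15, t), 2))
```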
Conclusions
We conclude that stuttering and dyslexia arise from a complex interplay between genetic predisposition, anatomical abnormalities, and defective neurophysiological mechanisms. Moreover, stuttering and dyslexia appear to be related in several respects: both disorders have been shown to have a similar genetic basis, common anatomical involvement, and comparable pathophysiological processes and neural grounding, and they share similar epidemiological characteristics. Further research is therefore needed to establish the relationship between dyslexia and stuttering, which may inform treatment and prevention methods in the future. Furthermore, interventional modalities such as technology-based interventions have promising evidence of potential benefit in managing both dyslexia and stuttering. While speech therapy and cognitive-behavioral therapy remain the most commonly used methods for managing stuttering, other interventional programs, such as the Lidcombe program, the RESTART-DCM program, and the parent-child interaction program, have shown encouraging preliminary results in the management of stuttering. However, data on all existing interventions require further exploration to ascertain their value in the management of both conditions, as sufficient evidence has not yet been established.
"year": 2023,
"sha1": "c49aff98d3a558379a4414015efd099f1d894fe8",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/review_article/pdf/177737/20231015-19814-8txffd.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d9ec2b094f219efc1f3b13a14d5a7098c9ad3fbe",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
A Prognostic Model for Patients with Triple-Negative Breast Cancer: Importance of the Modified Nottingham Prognostic Index and Age
Purpose: Considering the distinctive biology of triple-negative breast cancer (TNBC), this study aimed to identify TNBC-specific prognostic factors and determine the prognostic value of the Nottingham Prognostic Index (NPI) and its variant indices. Methods: A total of 233 patients with newly diagnosed stage I to III TNBC from 2003 to 2012 were reviewed. We retrospectively analyzed the patients' demographics, clinicopathologic parameters, treatment, and survival outcomes. The NPI was calculated as follows: tumor size (cm) × 0.2 + node status + Scarff-Bloom-Richardson (SBR) grade. The modified NPI (MNPI) was obtained by adding the modified SBR grade rather than the SBR grade. Results: The median follow-up was 67.8 months. Five-year disease-free survival (DFS) and overall survival (OS) were 81.4% and 89.9%, respectively. Multivariate analyses showed that the MNPI was the most significant and common prognostic factor of DFS (p=0.001) and OS (p=0.019). Young age (≤35 years) was also correlated with poor DFS (p=0.006). A recursive partitioning for establishing the prognostic model for DFS was performed based on the results of multivariate analysis. Patients with a low MNPI (≤6.5) were stratified into the low-risk group (p<0.001), and patients with a high MNPI (>6.5) were subdivided into the intermediate (>35 years) and high-risk (≤35 years) groups. Age was not a prognostic factor in patients with a low MNPI, whereas in patients with a high MNPI, it was the second key factor in subdividing patients according to prognosis (p=0.023). Conclusion: The MNPI could be used to stratify patients with stage I to III TNBC according to prognosis. It was the most important prognosticator for both DFS and OS. The prognostic significance of young age for DFS differed by MNPI.
INTRODUCTION
Triple-negative breast cancer (TNBC), an intrinsic subtype of breast cancer, is defined as a tumor that does not express the estrogen receptor (ER), progesterone receptor (PR), or human epidermal growth factor receptor 2 (HER2) [1]. Despite racial disparity, the overall incidence of TNBC is estimated at 15% to 20% of all diagnosed invasive breast cancers [2,3]. TNBC has distinctive characteristics and natural history; it is usually associated with African-American ethnicity, young age at diagnosis, advanced disease, and poor outcome [2,4]. Metastasis is characterized by an early peak of recurrence and a high incidence of visceral metastasis, particularly to the lungs and brain [1,2].
The Nottingham Prognostic Index (NPI), a value composed of tumor size, lymph node (LN) status, and histologic grade, was suggested for discriminating patients according to prognosis in the adjuvant setting of primary operable breast cancer [5]. Use of the NPI has subsequently been validated in long-term follow-up data [6,7] and extensive multicenter studies from the Danish Breast Cancer Cooperative Group [8]. After preoperative systemic therapy (PST) was widely accepted in clinical settings, it was shown that residual tumor burden in the breast and/or LNs, as defined by the NPI, is an independent prognostic factor. However, Chollet et al. [9] showed that the modified breast grading index (MBGI), which scores histologic grade using the French modified Scarff-Bloom-Richardson (MSBR) grading system rather than the traditional Scarff-Bloom-Richardson (SBR) system, has a higher prognostic influence than the conventional NPI after induction chemotherapy.
Although some studies have reported an attenuated relationship between tumor size or stage and the probability of survival in TNBC [1,2], a smaller-scale study showed that the NPI seems able to stratify and predict the prognosis of patients with TNBC in the adjuvant setting [10].
Considering the distinctive tumor biology of TNBC, there is a need to identify TNBC-specific prognostic factors. In addition, the NPI and its modified indicators have not yet been evaluated in patients with TNBC in a clinical practice setting including PST. Therefore, we sought to evaluate the prognostic influence of various clinicopathologic factors including the NPI and other indices.
Patient identification
A total of 233 patients newly diagnosed with stage I to III TNBC at Seoul National University Bundang Hospital from March 2003 to December 2012 were reviewed. To identify the patients with TNBC, we reviewed initial histopathologic parameters, including ER, PR, and HER2 status. TNBC was defined as the subtype showing no expression of ER, PR, or HER2 according to the 2013 St. Gallen Consensus. HER2 negativity was defined as a negative or 1+ score for c-erbB-2 by immunohistochemistry, or no amplification of HER2 by fluorescence in situ hybridization. After obtaining approval from our Institutional Review Board (B-1505/298-116), we retrospectively reviewed the patients' medical charts to collect data on demographics, clinicopathologic parameters, treatment, and survival outcomes.
All patients were staged according to the American Joint Committee on Cancer staging system, seventh edition. For the analysis, initial clinical stage was used for patients treated with PST, and pathologic stage was used for patients who were not treated with PST. Baseline Ki-67 and cyclooxygenase 2 (COX-2) were recorded based on the results of initial immunohistochemistry. COX-2 was considered positive with a staining score of 3+, as previously described [11]. Pathologic factors, including histology, histologic grade, extracapsular extension (ECE), lymphovascular invasion (LVI), and multiplicity, were based on the pathologic report of the curative surgical specimen. Node ratio (NR) was defined as the ratio of positive to excised nodes. The NPI was calculated as follows [6]: tumor size (cm) × 0.2 + node status (1, node negative; 2, 1-3 positive LNs; 3, ≥4 positive LNs) + SBR grade (1, grade I; 2, grade II; 3, grade III). The modified NPI (MNPI) was obtained by adding the MSBR grade [12] instead of the SBR grade. The breast grading index (BGI) and MBGI were also calculated as the sum of tumor size (cm) × 0.2 and the SBR or MSBR grade, respectively [9].
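Since the four indices are simple arithmetic over three inputs, they can be written down directly. The sketch below encodes the formulas exactly as defined in this paragraph; it is an illustration, not code from the study.

```python
def node_status_score(positive_lns: int) -> int:
    """Node status term of the NPI: 1 = node negative; 2 = 1-3 positive LNs;
    3 = four or more positive LNs."""
    if positive_lns == 0:
        return 1
    return 2 if positive_lns <= 3 else 3

def npi(size_cm: float, positive_lns: int, sbr_grade: int) -> float:
    """NPI = tumor size (cm) x 0.2 + node status + SBR grade (1-3)."""
    return size_cm * 0.2 + node_status_score(positive_lns) + sbr_grade

def mnpi(size_cm: float, positive_lns: int, msbr_grade: int) -> float:
    """Modified NPI: the MSBR grade (1-5) replaces the SBR grade."""
    return size_cm * 0.2 + node_status_score(positive_lns) + msbr_grade

def bgi(size_cm: float, grade: int) -> float:
    """BGI/MBGI: the size term plus the SBR or MSBR grade, with no nodal term."""
    return size_cm * 0.2 + grade

# Example: a 2.2 cm tumor with 2 positive nodes, SBR grade 3, MSBR grade 5
print(npi(2.2, 2, 3))   # 5.44
print(mnpi(2.2, 2, 5))  # 7.44 -> above the 6.5 cutoff used later in the paper
```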
Clinical endpoint and statistical analyses
Disease-free survival (DFS) was defined as the duration from the date of initiating treatment to the first failure or last follow-up. Overall survival (OS) was calculated from the date of initiating any treatment to the date of death from any cause or the last follow-up. Survival data were collected through inquiries to the Resident Registration of the Ministry of Security and Public Administration of the Republic of Korea. In terms of treatment failure, locoregional failure (LRF) was defined as a failure occurring in the ipsilateral breast/chest wall or the ipsilateral regional LNs (including the axillary, supra/infraclavicular, and internal mammary LNs), while distant failure (DF) was defined as any failure that did not qualify as LRF, including contralateral breast events. Locoregional failure-free survival (LRFS) and distant metastasis-free survival (DMFS) were defined as the duration from the date of initiating treatment to the date of last follow-up or failure (LRF and DF, respectively).
The actuarial survival curves were estimated using the Kaplan-Meier method, and the effects of each variable on survival were evaluated by log-rank test. For multivariate analysis, we fitted a Cox regression model with the forward stepwise selection method, after confirming that the assumption of proportional hazards was met for the entered variables. A conditional inference tree was used to estimate a regression relationship by binary recursive partitioning. Statistical analyses were performed using STATA version 13 (Stata Corp., College Station, USA) and R version 3.2.2 (R Foundation for Statistical Computing, Vienna, Austria). A p-value below 0.05 was considered statistically significant.
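For readers who want to reproduce this style of analysis, the sketch below shows the same three steps (Kaplan-Meier estimation, log-rank comparison, Cox regression) in Python with the lifelines package rather than the STATA/R used by the authors. The data frame, its column names, and all values are hypothetical stand-ins, not the study data, and the forward stepwise selection step is omitted.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Toy cohort for illustration only
df = pd.DataFrame({
    "months":    [12, 60, 34, 80, 7, 45, 90, 22],   # time to event/censoring
    "event":     [1, 1, 1, 0, 1, 0, 0, 1],          # 1 = failure, 0 = censored
    "mnpi_high": [1, 0, 1, 0, 1, 1, 0, 1],          # MNPI > 6.5
    "age_le35":  [1, 0, 0, 0, 1, 0, 1, 0],          # age <= 35 years
})

# Kaplan-Meier estimate for one group and a log-rank comparison by MNPI group
hi, lo = df[df.mnpi_high == 1], df[df.mnpi_high == 0]
km = KaplanMeierFitter()
km.fit(hi["months"], hi["event"], label="MNPI > 6.5")
print(logrank_test(hi["months"], lo["months"], hi["event"], lo["event"]).p_value)

# Cox proportional hazards model over the two covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratios and p-values
```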
Patient and tumor characteristics
Patient and tumor characteristics are summarized in Tables 1 and 2. The median patient age at diagnosis was 48 years (range, 20-89 years). The most common tumor histology was infiltrating ductal carcinoma (83.3%), with metaplastic carcinoma the second most common (8.6%). Of the 57 patients who received PST, the pathologic complete response (pCR) rate was 26.3%. The median number of harvested LNs was 9, increasing to 20 in patients with an NR > 0.2 (8.6%). The median NPI and MNPI were 4.44 (range, 2.60-7.30) and 6.38 (range, 3.04-9.30), respectively. Immunostaining of Ki-67 was performed in all but three patients. The median baseline Ki-67 value was 40%. COX-2 expression was available in 112 patients, and 23.2% of patients were positive for COX-2.
Survival outcomes and patterns of failure
The median follow-up for all patients was 67.8 months (range, 0.7-147.7 months). Five-year DFS and OS were 81.4% and 89.9%, respectively. During the follow-up period, 45 patients experienced failure (crude failure rate, 19.3%). DF occurred in 38 patients, and the lung was the most common site (47.4%) of the first DF. Of the 38 patients with DF, 18 experienced both DF and LRF. Isolated LRF occurred in seven patients. Five-year DMFS and LRFS were 85.2% and 88.6%, respectively.
Univariate analysis
Patients less than 35 years of age at diagnosis had a significantly shorter DFS (p = 0.002). Both LRF and DF occurred more frequently in this group. However, this did not translate to compromised OS. Initial T stage and N stage affected both DFS and DMFS. Histologic grade according to the SBR sys-
Multivariate analysis
After the identification of significant variables in univariate analyses, we performed multivariate analyses to adjust for interactions between factors. As shown in Table 4, multivariate analysis showed that the MNPI was the most significant and common prognostic factor of DFS (p = 0.001) and OS (p = 0.019). Patients with high MNPIs had a threefold increased risk. In a detailed analysis with respect to failure type, the MNPI retained its significance for DMFS (p = 0.002; HR, 3.37).
Figure 1 shows survival according to MNPI. Young age (≤35 years) also correlated with poor DFS (p = 0.006) (Figure 2). With all other variables considered, pathologic tumor size, LN status, ECE, multiplicity, LVI, and the NPI lost their statistical significance for DFS.
Prognostic model for DFS
Based on the results of multivariate analysis, we performed recursive partitioning to establish a prognostic model for DFS. As shown in Figure 3, the patients were divided into three risk groups. Patients with a low MNPI (≤6.5) were stratified into the low-risk group (p < 0.001), and patients with a high MNPI (>6.5) were subdivided into the intermediate-risk (>35 years) and high-risk (≤35 years) groups. Age was not an important factor in patients with a low MNPI, whereas young age was the second key factor in subdividing patients with a high MNPI according to prognosis (p = 0.023). The 5-year DFS of patients with high risk (high MNPI and age ≤35 years) was estimated as 41.7%, in contrast with 72.7% for patients with intermediate risk (high MNPI and age >35 years).
For validation of the impact of age on DFS, multivariate analysis of subgroups according to MNPI was performed. This also demonstrated that young age was a significant factor in the high-MNPI group (p = 0.034; HR, 2.65), while its significance disappeared in the low-MNPI group.
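The three-group model described above amounts to a two-level decision rule. As a minimal sketch (encoding the published cutoffs and survival estimates, not a re-fitted model):

```python
def dfs_risk_group(mnpi: float, age_years: float) -> str:
    """Risk group from the recursive-partitioning model: first split on
    MNPI at 6.5, then split high-MNPI patients on age at 35 years."""
    if mnpi <= 6.5:
        return "low risk"
    return ("high risk (5-yr DFS ~41.7%)" if age_years <= 35
            else "intermediate risk (5-yr DFS ~72.7%)")

print(dfs_risk_group(5.4, 30))  # low risk: age is not used below the cutoff
print(dfs_risk_group(7.4, 30))  # high risk
print(dfs_risk_group(7.4, 50))  # intermediate risk
```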
DISCUSSION
This study identified the prognostic value of the MNPI in TNBC. A high MNPI was found to contribute to decreased DFS and OS. Patients less than 35 years of age had decreased DFS, though age was not correlated with decreased OS. Interestingly, however, the influence of age varied between the risk groups according to MNPI. In patients with a low MNPI, survival outcome was not affected by age, but DFS was significantly decreased in patients less than 35 years of age with a high MNPI.
To the best of our knowledge, no previous study has evaluated the prognostic value of the MNPI in TNBC cases including a non-PST group, although the NPI has been validated in a large-scale study without a PST group [6-8]. In a study of 168 patients who did not receive PST, Albergaria et al. [10] showed that the NPI is able to predict prognosis in TNBC patients. In that study, the MNPI, including MSBR grade, was not analyzed. The first study evaluating the MNPI and other NPI-related indices showed that the MBGI was the only prognosticator for DFS in multivariate analysis [9]. However, a subsequent study including 710 patients demonstrated a high prognostic significance of the MSBR and MNPI [13]. Unlike our study, these studies were conducted only with patients, regardless of intrinsic subtype, who had been treated with PST.
The reason why the MNPI is more closely related to prognosis than the NPI can be ascribed to differences in the histologic grading system. The MSBR scores histologic grade based on nuclear pleomorphism and mitosis without considering tubule formation. Thus, it allows the grading of all tumors, including non-invasive ductal carcinoma, unlike the SBR. Also, unlike the SBR's three groups, the MSBR categorizes tumors into five groups [12]. In this study, most of the patients were SBR grade 3 (78.92%). However, when applying the MSBR, the patients with SBR grade 3 were classified into two categories (MSBR grade 4, 23.6%; MSBR grade 5, 76.4%), and patients with SBR grade 2 were separated into MSBR grades 1 to 4 (Supplementary Figure 1, available online). This finer stratification translated into greater prognostic influence (MSBR vs. SBR: DFS, p = 0.07 vs. p = 0.90; DMFS, p = 0.03 vs. p = 0.55). These findings are consistent with those reported in earlier studies [12,13].
It is well known that TNBC is associated with young age at diagnosis [2,14,15]. However, the prognostic value of young age in TNBC remains controversial. Lee et al. [14] showed that, although age had more influence on survival, age < 35 years was not a prognosticator in TNBC. Fayaz et al. [15] also reported that young age did not correlate with other prognostic factors, such as histologic grade, T stage, N stage, LVI, and Ki-67 positivity, and did not negatively impact any survival outcomes of TNBC patients. Meanwhile, our study showed that young age was significantly associated with decreased DFS, but not OS. Ovcaricek et al. [16] observed that patients younger than 65 years had a higher risk of relapse compared to older patients; however, as with our results, younger age did not result in decreased OS in multivariate analysis, despite the discrepancy in cutoff value. Taken together, it remains unclear whether younger patients are at a higher risk of death from TNBC than older patients. However, an impact of young age on recurrence cannot be completely ruled out. As shown in this study, the prognostic significance of young age might be enhanced in the high-risk subgroup. This inconsistency between studies may be because previous studies did not analyze the prognostic impact of young age according to other risk factors, such as the MNPI. Thus, further investigation is warranted.
TNBC has a high propensity for DF, despite an LRF rate similar to the luminal subtypes [17]. Therefore, it is important to identify the patients at risk of distant metastasis. The shortened DFS observed in patients with a high MNPI and young age seems to stem mainly from a high rate of DF in our study. In addition to high MNPI and age, an NR > 0.2 was also significantly associated not only with poor DMFS, but also with poor OS.
The number of positive axillary LNs has been considered the most important prognostic factor in breast cancer [11,13,18]. Thus, N stage is determined solely by the number of positive LNs. However, the possibility of underestimating LN status by incomplete axillary LN dissection has been repeatedly raised, since a more extensive axillary LN dissection increases the chances of finding positive LNs [18]. Furthermore, the growing use of sentinel LN biopsy leads to a less extensive axillary evaluation. It has been suggested that NR could adjust for the extent of dissection.
NR as a prognosticator of breast cancer has been validated in our previous studies, as well as large-scale studies [11,18]. Meta-analysis has also shown that NR significantly correlates with OS, DFS, and breast cancer-specific survival [19]. However, in most studies of TNBC patients, the prognostic significance of nodal status was evaluated by the number of positive LNs, not NR. These studies demonstrated that patients with fewer positive LNs had significantly more favorable DFS and OS [20,21]. Recently, Solak et al. [22] reported a clearer prognostic separation for TNBC using NR compared to pN staging. These results are consistent with our findings, but the prognostic value of NR in TNBC requires further validation in a large study.
In addition to the abovementioned traditional factors, there have recently been attempts to identify biomolecular markers for more individualized prediction. Ki-67, which reflects the proliferation rate of various malignant tumors, has been suggested as a prognostic or predictive factor in breast cancer [23,24]. In our study, the median Ki-67 value was 40%. This was similar to a previous report showing that TNBC had a higher Ki-67 index (median, 50%) than other breast cancer subtypes [24]. The study by Nishimura et al. [24], which included 356 patients with TNBC, revealed that tumor size and nodal status, but not Ki-67 (cutoff value, 20%), significantly affected DFS in multivariate analysis. They also showed that a higher Ki-67 index was closely related to the following clinicopathologic factors in their whole cohort (including 2,638 breast cancer patients): young age, large tumor size, positive LNs, a high nuclear grade, ER/PR negativity, p53 overexpression, and HER2 positivity. Similarly, in our study, despite no association between prognosis and Ki-67, the Ki-67 index was significantly higher in patients of young age (p = 0.021) and with a higher MNPI (p = 0.020). However, high Ki-67 expression was not correlated with the various survival outcomes. Keam et al. [25] reported that TNBCs with high Ki-67 (≥10%) expression were associated with a higher pCR rate than TNBCs with low Ki-67 expression, and that high Ki-67 was, paradoxically, correlated with poor DFS and OS in 105 TNBC patients treated with PST. Several recent retrospective studies have consistently suggested that Ki-67 may be a prognosticator in TNBC patients, as well as in patients with luminal type A breast cancer [20,26]. However, further study is needed to examine whether the Ki-67 index can be used to subdivide TNBC.
COX-2 is a key enzyme for the production of prostaglandins, and elevated prostaglandins can enhance angiogenesis, cell proliferation, and tumor cell invasion. In breast cancer, COX-2 has been observed to be expressed more frequently in TNBC than in other subtypes [27]. However, there are few studies addressing the relationship between COX-2 expression and prognosis in TNBC patients. Kim et al. [28] reported that COX-2 expression translated into shortened relapse-free survival in ER-negative breast cancer, but they did not report any association with TNBC. Chikman et al. [29] showed that TNBC patients with COX-2 expression had significantly decreased DFS. However, that study included only a small number of TNBC patients (67 patients). In this study, we found no relation between COX-2 expression and survival outcome. It remains difficult to establish the clinical relevance of COX-2 expression in TNBC, considering the scarcity of data.
Some studies have reported that the relationship between stage and survival outcomes is not clear in TNBC, unlike other intrinsic subtypes [1,2]. Nevertheless, many researchers have observed that tumor size and LN status have a significant association with DFS or OS in TNBC patients [14,16,20,21]. Thus, the MNPI, which is calculated with input from tumor size, LN stage, and modified histologic grade, appears theoretically appropriate for predicting the prognosis of patients with TNBC, and our study revealed that the MNPI is the most significant prognosticator for DFS, DMFS, and OS. With the increasing use of PST, the prognostic value of the MNPI based on surgical specimens after PST has already been determined [9,13], and our subgroup analysis yielded identical results (DFS and OS, p < 0.001 for both in the PST group). Therefore, the MNPI could be applied to predict prognosis in the PST group. Various tools for stratifying patients according to prognosis, in particular those including molecular markers, have been developed [30]. These tools could help us consider an individual patient's biological features. However, the advantage of the MNPI is that it does not require additional effort or cost.
This study is the first to evaluate the prognostic impact of the MNPI on DFS, DMFS, and OS. In addition, our recursive partitioning revealed that young age (≤35 years) is an important prognosticator of DFS in patients with a high MNPI. To the best of our knowledge, we are the first to demonstrate that the prognostic impact of young age can change according to risk group. This study has several limitations related to its retrospective design. Furthermore, our study includes a relatively small number of patients, and the population was heterogeneous in terms of stage, pathologic features, and treatment modality. In particular, the mixed cohort of patients with regard to PST means that the effects of prognosticators may be confounded. Thus, the results should be interpreted with caution. Additionally, the follow-up duration was not sufficiently long, considering the disease course of breast cancer. Therefore, caution should be exercised when drawing definite conclusions from our results. However, to minimize selection bias, we tried to include all consecutive patients with TNBC. This increased the heterogeneity of the patients but better reflects the realistic clinical setting.
In conclusion, the MNPI, which contains information on tumor size, LN status, and tumor grade according to the MSBR, can be used to stratify patients with stage I to III TNBC according to prognosis. It was the most important prognosticator for DFS, DMFS, and OS in patients with TNBC. The prognostic meaning of young age for DFS (especially with respect to DMFS) varied according to MNPI in our recursive partitioning analysis.
Table 1. Patient characteristics. BCS = breast-conserving surgery; SLNB = sentinel lymph node biopsy; ALND = axillary lymph node dissection. *Median (range); †Clinical staging was applied to patients treated with neoadjuvant chemotherapy, and pathologic staging to patients treated by upfront radical surgery; ‡Mean ± SD.
Table 2. Tumor characteristics.

The presence of LVI affected both DFS and OS. Of the four indices for predicting prognosis, a high NPI or MNPI was well correlated with poor DFS and OS, while neither the BGI nor the MBGI was a significant prognostic factor. Adjuvant chemotherapy or radiotherapy was not associated with any endpoint. Neither baseline Ki-67 nor COX-2 contributed to survival outcomes. Table 3 summarizes the key results of univariate analysis for each endpoint.
"year": 2017,
"sha1": "80925faccf22c31e8e96c6b831a918cc63894a5e",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.4048/jbc.2017.20.1.65",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "80925faccf22c31e8e96c6b831a918cc63894a5e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
SOX8 regulates cancer stem-like properties and cisplatin-induced EMT in tongue squamous cell carcinoma by acting on the Wnt/β-catenin pathway
A sub‐population of chemoresistant cells exhibits biological properties similar to cancer stem cells (CSCs), and these cells are believed to be a main cause for tumor relapse and metastasis. In our study, we explored the role of SOX8 and its molecular mechanism in the regulation of the stemness properties and the epithelial mesenchymal transition (EMT) of cisplatin‐resistant tongue squamous cell carcinoma (TSCC) cells. We found that SOX8 was upregulated in cisplatin‐resistant TSCC cells, which displayed CSC‐like properties and exhibited EMT. SOX8 was also overexpressed in chemoresistant patients with TSCC and was associated with higher lymph node metastasis, advanced tumor stage and shorter overall survival. Stable knockdown of SOX8 in cisplatin‐resistant TSCC cells inhibited chemoresistance, tumorsphere formation, and EMT. The Wnt/β‐catenin pathway mediated the cancer stem‐like properties in cisplatin‐resistant TSCC cells. Further studies showed that the transfection of active β‐catenin in SOX8 stable‐knockdown cells partly rescued the SOX8 silencing‐induced repression of stem‐like features and chemoresistance. Through chromatin immunoprecipitation and luciferase assays, we observed that SOX8 bound to the promoter region of Frizzled‐7 (FZD7) and induced the FZD7‐mediated activation of the Wnt/β‐catenin pathway. In summary, SOX8 confers chemoresistance and stemness properties and mediates EMT processes in chemoresistant TSCC via the FZD7‐mediated Wnt/β‐catenin pathway.
eventually develop tumor relapse and become resistant to chemotherapy. 2,3 Although the mechanism of acquired chemoresistance in TSCC patients remains unclear, increasing evidence suggests that drug-resistant cancer cells acquire features of cancer stem cells (CSCs) and undergo epithelial mesenchymal transition (EMT). [4][5][6] Complete Twist1 depletion reverses Adriamycin-induced EMT and invasiveness in breast cancer. 7 In addition, cisplatin-selected bladder cancer cells displayed a strong self-renewal capacity and EMT characteristics. 8 Therefore, chemotherapy-induced cancer stem-like properties and EMT in tumor cells are closely related to chemotherapy resistance, and examining the mechanisms that regulate chemotherapy-induced cancer stem-like features and EMT is necessary for the development of novel therapies.
The Sry-like high-mobility group box (SOX) genes regulate different aspects of development, and the complex participation of the SOX family in various aspects of oncology has attracted increasing attention. 9 Accumulating evidence indicates that SOX genes play an important role in drug resistance, CSCs, and EMT. For example, SOX4 is a tumor promoter that contributes to drug resistance and progression in cervical cancer and regulates the EMT programme in breast cancer. 10,11 More interestingly, SOX4 is also highly expressed during metastasis and is specifically associated with lymph node metastasis in TSCC. 12 Similarly, the overexpression of SOX9 promotes self-renewal properties and in vivo tumorigenicity by facilitating symmetrical cell division in liver cancer. 13 Moreover, the upregulation of SOX9 endows cancer cells with stemness features through activation of the Wnt/β-catenin signalling pathway, and constitutive activation of the Wnt/β-catenin pathway in TSCC is essential for the maintenance of CSC self-renewal and the promotion of chemoresistance. 14,15 However, whether SOX genes are responsible for conferring the stem cell-like properties or EMT of chemoresistant TSCC remains unknown.
Here, we searched for SOX genes that were differentially expressed between two cisplatin-resistant cell lines and their parental TSCC cells, and we identified that SOX8 was significantly upregulated. Next, we investigated the role of SOX8 and its relationship with the Wnt/β-catenin pathway in regulating chemotherapy-induced stem-like features and EMT in TSCC cells, and evaluated the effect of SOX8 expression on the tumor growth and apoptosis of chemoresistant TSCC xenografts. Finally, we analyzed the correlation of SOX8 expression with clinicopathological status and survival outcomes in TSCC patients.
Material and Methods
Patients and tissue specimens

TSCC specimens (n = 103) were collected from three independent centres, including Sun Yat-sen Memorial Hospital (n = 52), West China Hospital (n = 18), and the Affiliated Hospital of North Sichuan Medical College (n = 33), between 2006 and 2012. Patient responses were classified as cisplatin-sensitive or cisplatin-nonsensitive according to previous studies. 16 The tumor samples were examined by two independent pathologists, and tumor grade was defined according to WHO criteria (2004). 16 Our study was approved by the ethics boards of the three hospitals, and all patients provided informed consent for participation.
Cell culture and drug treatment
The CAL27 and SCC9 cell lines were obtained from the American Type Culture Collection. The stable cisplatin-resistant cell lines, CAL27-res and SCC9-res, were established by clonal selection of CAL27 or SCC9 treated with cisplatin (Sigma, Carlsbad, CA) from 10⁻⁷ M to 10⁻⁵ M as described. 17,18 The CAL27 and CAL27-res cells were cultured in DMEM (Gibco, Rockville, MD) supplemented with 10% foetal bovine serum (Invitrogen, Carlsbad, CA). The SCC9 and SCC9-res cells were cultured in DMEM-F12 (Gibco, Rockville, MD) supplemented with 10% foetal bovine serum and 400 ng/mL hydrocortisone (Sigma-Aldrich, USA). Cisplatin (5 µM in CAL27 cells and 10 µM in SCC9 cells) was routinely added to the culture medium every other day and removed prior to the experiments being performed.
Immunohistochemistry
Paraffin sections were dewaxed with xylene and rehydrated in descending concentrations of ethanol. Endogenous peroxidase was inhibited with 3% H₂O₂, and the slides were incubated with primary antibodies overnight at 4°C. Positive cells were counted in five 200-µm fields in each section under a microscope.
RNA interference
To establish SOX8- and β-catenin-knockdown TSCC cells, recombinant lentivirus was generated by co-transfecting 293FT cells with shRNA from the lentiviral vector pLV3/H1/

What's new?

Tongue cancer frequently spreads to the lymph nodes, and while chemotherapy with cisplatin has improved 5-year survival rates, all too often the cancer becomes resistant to chemotherapy and returns. Here, the authors show that tongue squamous cell carcinoma (TSCC) cells that have acquired cisplatin resistance express more SOX8 mRNA than their parent TSCC cells. Getting rid of SOX8, they showed, hampered the cells' chemoresistance as well as the epithelial to mesenchymal transition. Adding active β-catenin to the cells lacking SOX8 partially restored these properties, showing that SOX8 acts through the Wnt/β-catenin pathway.
Tumorsphere formation assay
A total of 1,000 cells were seeded into ultra-low-attachment 6-well plates (Corning, USA) and grown in DMEM/F12 medium (Gibco, Rockville, MD) supplemented with 20 ng/mL EGF, 20 ng/mL bFGF (PeproTech, USA), and B27 (Invitrogen, Carlsbad, CA) for 14 days. Tumor colonies containing >20 cells were counted. To test the secondary capacity for tumorsphere formation, primary tumorspheres were dissociated to a single-cell suspension, resuspended in DMEM/F12 containing the above supplements, and then cultured in ultra-low-attachment plates.
Real-time quantitative RT-PCR
For real-time quantitative RT-PCR, total RNA was isolated from cells using TRIzol reagent (Invitrogen, Carlsbad, CA) and then treated with RNase-free DNase (Roche, Indianapolis, IN) according to the manufacturer's instructions. Total RNA was converted to cDNA using an M-MLV Reverse Transcriptase Kit (Invitrogen, Carlsbad, CA). Real-time PCR analyses were then carried out in triplicate for each sample using a standard LC480 SYBR PCR kit (Roche, Indianapolis, IN) on a LightCycler 480 (Roche). The primers listed in Supporting Information Table 3 were used for PCR amplification.
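The paragraph specifies triplicate reactions but not the quantification scheme. Assuming the standard 2^-ΔΔCt relative-quantification method with a housekeeping reference gene (an assumption on our part, not stated in the paper), the computation would look like this:

```python
from statistics import mean

def fold_change_ddct(target_ct, ref_ct, target_ct_ctrl, ref_ct_ctrl):
    """Relative expression by 2^-ddCt (assumed method; the paper does not
    name its quantification scheme). Each argument is a list of triplicate
    Ct values for the target or reference gene."""
    d_ct_sample = mean(target_ct) - mean(ref_ct)          # normalize to reference
    d_ct_control = mean(target_ct_ctrl) - mean(ref_ct_ctrl)
    return 2 ** (-(d_ct_sample - d_ct_control))

# Illustrative (made-up) triplicates: SOX8 vs. a housekeeping gene in
# cisplatin-resistant vs. parental cells -> ~5-fold upregulation.
print(fold_change_ddct([22.1, 22.3, 22.0], [18.0, 18.1, 17.9],
                       [24.6, 24.5, 24.7], [18.1, 18.0, 18.2]))
```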
Western blot analysis
Total protein was extracted from tissue and cell samples using RIPA lysis buffer (Beyotime) supplemented with a protease inhibitor mixture (Sigma-Aldrich). An equal amount of each protein sample was loaded onto a 10% SDS-PAGE gel for electrophoresis, transferred onto a PVDF membrane (Millipore Corporation, Bedford, MA), blocked with 5% (w/v) skim milk at room temperature for 1 hr, and then incubated with primary specific antibodies overnight (Supporting Information Table 4). Peroxidase-conjugated anti-mouse IgG or anti-rabbit IgG (Proteintech, USA) was used as a secondary antibody. Finally, the antigen-antibody reaction was visualized using enhanced chemiluminescence reagent (ECL, Thermo, Rockford).
Flow cytometry assays
To assay apoptosis, TSCC cells were treated with cisplatin (5 µM) for 24 hrs and then examined using an Annexin V-FITC Apoptosis Detection Kit I (556547, BD Pharmingen) according to the manufacturer's protocol. To analyse surface markers, 1 × 10⁵ cells were dissociated into a single-cell suspension and incubated with anti-CD44 antibody conjugated to FITC and anti-CD24 antibody conjugated to PE at 4°C in the dark for 30 min (BD Biosciences). The cell suspension was washed with PBS to remove excess antibody, and data were collected and analysed on a FACSCalibur (BD Biosciences, Franklin Lakes, NJ).
Cell growth/survival assays
Viable cells were measured via MTS assay as recommended by the manufacturer (Promega, Tokyo, Japan). Briefly, cells were cultured in a 96-well plate overnight at 2,000 cells per well and treated with the indicated concentrations of cisplatin (2, 4, 6, 8, and 10 µM) for 24 hrs. Then, 20 µL of MTS solution was added to each well, followed by a 1 hr incubation at 37°C. The reaction was quantitatively measured in a microplate reader (BioTek, Winooski, VT) at a wavelength of 490 nm. To assay cell growth, 2,000 cells per well were measured using an MTS assay after 6 days of culture as described above. To assay clonogenicity, 1,000 cells were seeded per well in a 6-well plate and then fixed and stained with 0.5% crystal violet solution after 7 days of culture. Colonies with a diameter >50 µm were counted.
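Converting the raw 490-nm absorbances described above into percent viability is a simple normalization to the untreated control. In this sketch, the blank subtraction is an assumption (the text does not describe it), and all readings are invented:

```python
import numpy as np

def percent_viability(a490_treated, a490_control, a490_blank=0.0):
    """Percent viable cells from MTS absorbance at 490 nm, normalized to
    the mean of untreated control wells."""
    treated = np.asarray(a490_treated, dtype=float) - a490_blank
    control = np.mean(a490_control) - a490_blank
    return 100.0 * treated / control

doses = [2, 4, 6, 8, 10]  # uM cisplatin, as in the assay above
viab = percent_viability([0.88, 0.74, 0.55, 0.41, 0.30],
                         [0.95, 0.97, 0.96], a490_blank=0.05)
for d, v in zip(doses, viab):
    print(f"{d} uM: {v:.0f}% viable")
```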
Boyden chamber assays
For Boyden chamber assays, 1 × 10⁵ cells in serum-free medium were seeded into the upper inserts of 24-well Boyden chambers (Corning, New York, NY) with (for invasion) or without (for migration) Matrigel (R&D, USA). DMEM supplemented with 10% FBS was added to the lower chambers. Cell mobility was assessed after 36 hrs (for migration) or 48 hrs (for invasion) for CAL27 and CAL27-res cells and after 16 hrs (for migration) or 24 hrs (for invasion) for SCC9 and SCC9-res cells. The migrated and invaded cells were stained with 0.1% crystal violet and counted in five random fields.
The relative TOP/FOP activity (%) was calculated to show changes in Wnt/β-catenin activation.
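As a sketch of how that percentage is typically derived (the Renilla normalization step is standard practice for dual-luciferase reporters but is not spelled out in the text, so it is an assumption here):

```python
def relative_top_fop(top_firefly, top_renilla, fop_firefly, fop_renilla):
    """Relative TOP/FOP activity (%): each reporter's firefly signal is
    normalized to its Renilla co-transfection control, then the TOP
    (TCF/LEF-responsive) reporter is expressed relative to the FOP
    (mutant-site) control."""
    return 100.0 * (top_firefly / top_renilla) / (fop_firefly / fop_renilla)

# Illustrative readings: an activated Wnt pathway gives TOP >> FOP
print(relative_top_fop(5200, 800, 900, 820))  # ~592%
```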
Statistical analysis
Statistical analysis was performed using SPSS 20.0 software (SPSS Inc., Chicago, IL). All data are expressed as the group mean ± standard deviation (SD). The χ² test was used to analyse relationships between related proteins. Kaplan-Meier survival curves were plotted, and the log-rank test was used. All experiments were performed at least three times. A p-value < 0.05 was considered statistically significant.
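For the χ² test named above, a minimal Python equivalent (with an invented 2×2 table; the study's actual counts are in its tables) is:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: SOX8 expression vs. lymph node metastasis
table = [[34, 18],   # SOX8 high: metastasis present / absent
         [15, 36]]   # SOX8 low:  metastasis present / absent
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```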
Cisplatin-resistant cells CAL27-res and SCC9-res display characteristics of cancer stem-like cells and have undergone EMT
We continuously treated CAL27 and SCC9 cells with 10⁻⁷ M to 10⁻⁵ M cisplatin to establish cisplatin-resistant cells as previously reported. 17,18 CAL27-res and SCC9-res cells were resistant to clinically relevant doses of cisplatin (1-10 µM) and were more anti-apoptotic than their parental cells after cisplatin treatment (Supporting Information Figs. 1a and 1b). CAL27-res and SCC9-res cells exhibited decreased expression of the epithelial marker E-cadherin, while the mesenchymal markers vimentin and N-cadherin exhibited increased mRNA and protein levels (Supporting Information Figs. 2a and 2b; p < 0.01). Additionally, CAL27-res and SCC9-res cells acquired stronger invasiveness and migratory capacity, as examined by chamber assay, indicating that the cisplatin-resistant cells had undergone EMT (Supporting Information Figs. 2c and 2d; p < 0.01). Next, we cultured cisplatin-resistant TSCC cells and their parental cells in sphere culture medium (SCM) to examine their self-renewal capacity. The cisplatin-resistant cells formed more spherical colonies than their parental cells from primary to tertiary spheres, demonstrating that the self-renewing potential of cisplatin-resistant cells was maintained in vitro (Supporting Information Fig. 3a). Moreover, the mRNA and protein levels of stemness-associated genes, including BMI1, SOX2, OCT4, and ABCG2, were upregulated in CAL27-res and SCC9-res cells (Supporting Information Figs. 3b and 3c). It has been reported that CD44⁺/CD24⁻ cells possess CSC-like features in oral squamous cell carcinoma. 19,20 Our flow cytometric analysis showed that the proportion of CD44⁺CD24⁻ cells in CAL27-res and SCC9-res cells (24.5% and 26%) was higher than that in CAL27 and SCC9 cells (10.9% and 13.3%) (Supporting Information Fig. 3d). In general, TSCC cells acquire cancer stem-like properties and features of EMT after cisplatin application.
The expression of SOX8 is upregulated in TSCC with cisplatin resistance and correlated with poor prognosis

To explore the roles of the SOX family in cisplatin-resistant TSCC cells, we measured the mRNA levels of SOX family genes in cisplatin-resistant cells and their parental cells. SOX2, SOX8, and SOX9 were significantly upregulated in CAL27-res and SCC9-res cells compared to their parental cells (fold change > 2, p < 0.01), but no significant differences were found in the mRNA expression levels of the other SOX family genes (Fig. 1a). The mRNA and protein levels of SOX8, which showed the most significant differences between cisplatin-resistant cells and the parental cells, were increased in both CAL27-res and SCC9-res cells (Fig. 1b). We then analyzed SOX8 expression in chemosensitive and chemo-insensitive TSCC specimens to determine its clinical significance. SOX8 was mainly localized in TSCC nuclei and was significantly upregulated in TSCC samples, while it was virtually absent in normal oral epithelium samples. Interestingly, we found a higher number of SOX8-positive cells in chemo-insensitive than in chemosensitive specimens, and this number was significantly higher than that found in normal oral tissues (Fig. 1c, p < 0.01). In addition, statistical analysis showed that high SOX8 expression was positively associated with a high tumor metastasis rate and reduced sensitivity to cisplatin, but no significant associations were found between SOX8 expression and age, sex, or clinical stage (Table 1). Kaplan-Meier analysis indicated that high SOX8 expression in TSCC patients was significantly associated with a reduced overall survival rate (Fig. 1d). Collectively, these data indicate that SOX8 is not only overexpressed in chemo-insensitive TSCC patients but also correlated with a poor prognosis, suggesting that SOX8 may have an important function in TSCC chemoresistance.
SOX8 knockdown reduces cisplatin-induced stemness properties and EMT in cisplatin-resistant TSCC cells
To investigate the significance of SOX8 in cisplatin-resistant TSCC cells, we first designed three shRNAs targeting SOX8 using a lentiviral-based approach. The shSOX8#3 construct caused an obvious reduction of SOX8 in CAL27-res and SCC9-res cells (Figs. 2a and 2b) and was therefore used to generate stable cell lines. We then examined whether SOX8 knockdown in cisplatin-resistant tongue cancer cells inhibited CSC-like features. SOX8 knockdown in CAL27-res and SCC9-res cells significantly reduced the expression of the stem cell transcription factors SOX2, OCT4, and BMI1 as well as ABCG2 (Figs. 2c and 2d; p < 0.01) and promoted cisplatin-induced cell death (Fig. 2e, p < 0.001). Furthermore, SOX8 knockdown significantly decreased tumorsphere-formation efficiency (Fig. 2f; p < 0.001) and the proportions of CD44⁺CD24⁻ cells in CAL27-res cells (from 24.2% to 12.9%) and SCC9-res cells (from 26.1% to 14%), suggesting that SOX8 downregulation represses self-renewal capacity in vitro (Fig. 2g; p < 0.05).
We next investigated whether SOX8 knockdown could reverse the mesenchymal features and cell growth of chemoresistant TSCC cells. MTS and colony-formation assays both indicated that CAL27-res and SCC9-res cells with stable shRNA-mediated SOX8 knockdown had a significantly lower proliferative capacity than the control cells (Figs. 3a and 3b; p < 0.01). Meanwhile, SOX8 knockdown in cisplatin-resistant cells inhibited migration and invasion and reversed the mesenchymal marker profile, as indicated by the downregulation of vimentin and N-cadherin and the upregulation of E-cadherin (Figs. 3c-3f). Together, these data indicate that SOX8 knockdown inhibited cisplatin resistance in CAL27-res and SCC9-res cells, possibly by suppressing the adoption of a CSC-like and mesenchymal phenotype in these cells.
The Wnt/β-catenin pathway regulates cisplatin-resistant tongue CSCs

As the Wnt/β-catenin pathway regulates TSCC cell proliferation and differentiation, we explored whether the increase in the TSCC CSC population under cisplatin pressure is regulated by Wnt/β-catenin signaling. We first detected the expression of two key components of Wnt/β-catenin signaling, p-GSK3β and β-catenin. We observed an upregulation of p-GSK3β and β-catenin in cisplatin-resistant TSCC cells, whereas the total GSK3β level remained unchanged (Fig. 4a). A TOP/FOP Flash reporter assay also showed that the Wnt pathway was transactivated in CAL27-res and SCC9-res cells (Fig. 4b).
We next designed two shRNAs targeting β-catenin to establish stable cell lines. As shown in Figures 4c and 4d, knockdown of β-catenin resulted in decreased expression of β-catenin and c-MYC, but increased expression of Dkk1, in CAL27-res and SCC9-res cells (p < 0.01). β-catenin knockdown led to reduced TOP/FOP luciferase activity in cisplatin-resistant cells (Fig. 4e). Moreover, we observed that knockdown of β-catenin in cisplatin-resistant cells enhanced their sensitivity to cisplatin (Fig. 4f; p < 0.01). Knockdown of β-catenin in cisplatin-resistant cells also inhibited tumorsphere formation efficiency in vitro and decreased the tumor-formation rate in nude mice compared to the control group (Fig. 4g, p < 0.01; Supporting Information Table 5). These results suggest that Wnt/β-catenin pathway activation mediates cisplatin resistance in TSCC CSCs.
Regulatory interaction between SOX8 and the Wnt/β-catenin signaling pathway
SOX8 is a regulator of the Wnt/β-catenin signaling pathway, 21 and consistent with this, we observed reduced β-catenin staining in both CAL27-res and SCC9-res shSOX8 cells compared to their corresponding parental cells.
A decrease in c-MYC levels and an increase in Dkk1 levels were observed in chemoresistant cells infected with shSOX8 compared to uninfected cells (Supporting Information Figs. 4a and 4b; p < 0.01). In addition, silencing of SOX8 decreased Wnt/β-catenin pathway activation according to the results of a TOP/FOP Flash reporter assay (Supporting Information Fig. 4c). To further clarify these findings, we performed rescue experiments by stably expressing constitutively active β-catenin plasmids in SOX8-knockdown cells. As shown in Supporting Information Figures 4d and 4e, β-catenin expression increased in the SOX8-knockdown cells after transfection with active β-catenin plasmids. Additionally, TOP/FOP Flash reporter activity increased in SOX8-silenced clones (CAL27-res and SCC9-res) following the forced expression of β-catenin (Supporting Information Fig. 4f, p < 0.01). However, the forced expression of β-catenin only partially rescued the effect of SOX8 silencing on tumorsphere formation in vitro, indicating a partial rescue of self-renewal ability despite SOX8 knockdown (Supporting Information Fig. 4g). Based on these results, we conclude that the Wnt/β-catenin pathway mediates the downstream effects of SOX8 on chemoresistance in TSCC.
SOX8 promotes the Wnt/β-catenin pathway by binding to the promoter of Frizzled-7
Based on our findings, we further explored the possible mechanism of the SOX8-mediated Wnt/β-catenin pathway. As previous studies have reported that SOX proteins can transactivate the FZD family (key receptors in Wnt signaling), 14,22,23 we examined the relationship between SOX8 and the FZD family. Among all FZD family members, FZD7 mRNA levels in SOX8 stable-knockdown cells were reduced by 82% (p < 0.001) (Fig. 5a). Furthermore, the silencing of SOX8 in CAL27-res and SCC9-res cells efficiently decreased FZD7 protein levels (Fig. 5b). Using the JASPAR programme for transcription-factor binding analysis, we found one possible binding site of SOX8 in the 3.5-kb region upstream of FZD7 (jaspar.genereg.net; Supporting Information Table 6). We performed ChIP assays in CAL27-res cells to detect the interaction of SOX8 with the FZD7 promoter region. Using two independent SOX8-specific mouse antibodies (sc-374445 and sc-374445, Santa Cruz, CA), we found that SOX8 binds to the FZD7 gene promoter region (Fig. 5c). We further used a luciferase reporter assay to examine direct binding to the FZD7 promoter region. A significant decrease in signal was detected in all SOX8-silenced cisplatin-resistant cells, but FZD7 overexpression rescued the reduction in TCF/LEF luciferase activity induced by SOX8 knockdown in CAL27-res and SCC9-res cells (Fig. 5d). Similar results were obtained by Western blot assay for β-catenin expression (Fig. 5e). Collectively, these findings indicate that SOX8 promotes the activation of the Wnt/β-catenin pathway through the transcriptional regulation of FZD7.
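For orientation, the kind of in-silico scan that JASPAR supports can be reproduced with Biopython: score a promoter sequence against a position weight matrix and report hits above a log-odds threshold. The matrix file name, the toy sequence, and the threshold below are all assumptions for illustration, not values from the paper.

```python
from Bio import motifs
from Bio.Seq import Seq

# Load a SOX8 matrix saved in JASPAR format (hypothetical local file)
with open("SOX8.jaspar") as handle:
    motif = motifs.read(handle, "jaspar")

pwm = motif.counts.normalize(pseudocounts=0.5)  # counts -> probabilities
pssm = pwm.log_odds()                           # log-odds scoring matrix

# Toy stand-in for a fragment of the 3.5-kb FZD7 upstream region
promoter = Seq("GGAACAATGGCCACAAAGTTTCACTTGTCCAACAATTG")
for position, score in pssm.search(promoter, threshold=5.0):
    strand = "+" if position >= 0 else "-"      # negative = reverse strand
    print(f"putative site at {position} ({strand}), score {score:.2f}")
```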
SOX8 knockdown decreases the capacity of xenograft formation and tumor metastasis initiation in CAL27-res cells
We subcutaneously injected five groups of stably transfected CAL27-res cells into nude mice to investigate whether the SOX8-β-catenin axis affected the apoptosis and cisplatin sensitivity of TSCC cells in vivo. As shown in Figures 6a-c, SOX8 knockdown attenuated xenograft growth and increased sensitivity to cisplatin. However, stable transfection of activated β-catenin in shSOX8 cells enhanced tumor growth and resistance to cisplatin. Furthermore, a reduced number of PCNA⁺ cells and an increased number of apoptotic cells were found in SOX8-knockdown xenograft specimens compared to the negative control group, but apoptosis was repressed and the number of PCNA⁺ cells markedly increased after cisplatin treatment when activated β-catenin was stably expressed in shSOX8 xenografts (Fig. 6d).
A defining feature of CSCs is efficient tumorigenicity. 24-27 When 5 × 10³, 1 × 10⁴, and 1 × 10⁵ CAL27-res cells were inoculated into immunodeficient mice, three, four, and five of the five mice in each group generated tumors, respectively (Supporting Information Table 7). By contrast, mice inoculated with 5 × 10³ or 1 × 10⁴ CAL27 cells developed no tumors, while tumors developed in only one out of five mice inoculated with 1 × 10⁵ CAL27 cells. Therefore, cisplatin-resistant CAL27-res cells were at least 20-fold more tumorigenic than CAL27 cells. Furthermore, xenograft-formation capacity was also tested in SOX8-knockdown CAL27-res cells with or without active β-catenin plasmids. One out of five mice injected with 1 × 10⁴ SOX8-knockdown resistant cells and two out of five mice similarly injected with 1 × 10⁵ cells generated tumors, while no tumors developed in mice inoculated with 5 × 10³ cells. Nevertheless, activation of the Wnt pathway restored the strong tumorigenicity of SOX8 stable-knockdown CAL27-res cells (Supporting Information Table 8). The important stem marker OCT4 was also examined in xenograft samples: OCT4 was downregulated in xenografts of shSOX8 cells and upregulated in β-catenin-overexpressing cells (Fig. 6d). These results suggest that CAL27-res cells have a more potent tumorigenic capability and that the SOX8-β-catenin axis contributes to this high self-renewing capacity in vivo.
It has been hypothesized that only cancer cells with CSC properties can initiate metastases. 24 Six weeks after injection with 1 × 10⁵ CAL27-res cells, lung metastases were observed microscopically in four out of five mice, but lung metastases were not seen in lung specimens of mice inoculated with 1 × 10⁵ CAL27 cells (Supporting Information Table 7). Moreover, no mice injected with 1 × 10⁵ shSOX8 CAL27-res cells developed microscopic lung metastases, whereas three out of five mice injected with β-catenin-overexpressing shSOX8 cells did develop lung metastases (Fig. 6e; Supporting Information Table 8). IHC staining for vimentin and E-cadherin also showed that the SOX8-β-catenin axis contributed to EMT in vivo (Fig. 6d). These data suggest that SOX8 mediates cisplatin sensitivity and regulates CSC-like and EMT features through the Wnt/β-catenin pathway in vivo.
Discussion
In this study, we demonstrated that SOX8 upregulation was crucial for chemotherapy-induced CSC enrichment, a mesenchymal phenotype, and the chemoresistance of TSCC cells, and that knockdown of SOX8 expression inhibited cancer stem-like properties, reversed mesenchymal features, and repressed the tumor metastasis of chemoresistant TSCC via the Wnt/β-catenin pathway. In addition, SOX8 bound to the promoter region of FZD7 to enhance the activity of the Wnt/β-catenin pathway. Furthermore, SOX8 levels were positively associated with lymph node metastasis and chemotherapeutic resistance in TSCC patients, and high SOX8 expression indicated a poor prognosis for TSCC patients.
Dysregulation of SOX genes has been well studied in numerous human neoplasms, and various SOX genes acting as oncogenes or tumor suppressors are involved in tumor formation and progression. 9,23,28 Here, we identified SOX8 as a functional oncogene involved in the maintenance of cancer stem-like capacities, the mesenchymal phenotype, and chemoresistance in TSCC cells. SOX8 belongs to the SOXE subfamily and is an important transcription factor mainly involved in regulating mammalian testis and nervous system development. 29,30 Previous studies in hepatocellular carcinoma and gliomas have demonstrated that SOX8 promotes cell growth through the activation of Wnt/β-catenin signaling. 21,31 Xie et al. also reported that SOX8, a target of miR-124b, is overexpressed in lung cancer and closely associated with a poor prognosis. 32 In addition, we found that SOX8 expression correlated with the lymph node metastasis rate and poor survival time in 103 TSCC patients and was higher in chemoresistant than in chemosensitive TSCC patients, indicating its potential value in chemoresistance. Detecting SOX8 expression in TSCC patients after surgery could thus help determine a patient's sensitivity to chemotherapy, potentially reducing the probability of recurrence and metastasis after chemotherapy.
Our finding that SOX8 enhances the expression of FZD7, a transmembrane receptor for the Wnt pathway, suggests a correlation between SOX8 and Wnt/β-catenin signalling. Wnt/β-catenin signalling in HNSCC has been shown to control the self-renewal and differentiation of cancer cells, 33 and aberrant activation of the Wnt pathway leads to EMT in epithelial tongue cancer cells 34 and the acquisition of cancer stem-like properties. 35 As a key receptor of the Wnt pathway, FZD7 is highly expressed in oesophageal squamous cell carcinoma, breast cancer, and oral squamous cell carcinoma, and it is associated with the activation of Wnt signaling in these cancers. 36,37 For instance, FZD7 activates Wnt/β-catenin signaling in oesophageal squamous cell carcinoma, enhances cell growth and metastasis, and inhibits multidrug resistance. 35 In addition, FZD7 enhances the activity of Wnt/β-catenin signaling in oral cancer cells. 37 Herein, we showed that FZD7 overexpression in tongue cancer cells, driven by increased SOX8 expression, led to abnormal activation of the Wnt pathway and thereby induced CSC and EMT features in cisplatin-resistant cells. Moreover, we identified FZD7 as a direct target of SOX8, and SOX8 enhanced the activity of FZD7-mediated Wnt/β-catenin signaling in chemoresistant TSCC cells, along with the EMT phenotype (reduced E-cadherin and increased vimentin and N-cadherin) and cancer stem-like features (increased SOX2, BMI1, OCT4, and ABCG2).
Numerous studies have shown that the EMT process and/or CSCs confer tumor drug resistance. For instance, depletion of Twist1 in chemoresistant breast cancer cells can partially reverse sensitivity to chemotherapeutic agents. 7 Similarly, Slug or Snail overexpression also increases resistance to chemotherapy-induced cell death. 38 In our study, we identified SOX8-FZD7-Wnt/β-catenin as another important pathway regulating EMT and chemoresistance in TSCC. Therefore, EMT in tongue cancer cells may result in chemoresistance either through Twist1, Slug, or Snail, which directly control the EMT process, or through the SOX8-FZD7-Wnt/β-catenin pathway. Tumor cells with a more aggressive phenotype display strong self-renewal capacity and tumorigenicity under chemotherapy pressure, characteristics similar to those harboured by CSCs that enable them to escape drug-induced cell death. 39,40 For example, Oct4 overexpression contributes to tamoxifen resistance and ALDH1 upregulation in hormone receptor-positive breast cancer. 41 Silencing SOX2 expression also attenuates resistance to cisplatin in glioblastoma multiforme. 42 Our study demonstrated that chemoresistant TSCC cells acquired abilities similar to CSCs, accompanied by an increase in the CSC markers ABCG2, SOX2, OCT4, and BMI1 as well as strong self-renewal capacity and tumorigenicity in vivo. In addition, recent studies have shown a clear relationship between EMT and the acquisition of stem cell-like characteristics, 43 and cancer cells that have undergone EMT may acquire stem cell-like properties. 43,44 The SOX proteins have emerged as important modulators of Wnt/β-catenin signaling in various diseases, and numerous studies indicate diverse regulatory mechanisms, including direct protein-protein interactions, binding to the promoters of Wnt signaling genes, and the recruitment of co-factors that regulate the stability of target proteins. 15 For instance, SOX6, SOX9, and SOX17 bind directly to β-catenin in the region where TCF proteins also bind β-catenin. 15 SOX17 regulates the Wnt pathway via direct interaction with TCF3, TCF4, and LEF1 as well as β-catenin. 45 Alternatively, the histone deacetylase HDAC1 can be recruited by SOX6 to β-catenin complexes. 46 Additionally, ChIP assays have shown that SOX9, a member of the SOXE group, directly binds the promoters of TCF4 and LRP6 and enhances the activation of Wnt signaling in breast cancer. 23 We have demonstrated that silencing of SOX8 inhibited the expression of β-catenin and its downstream genes and reduced the transcriptional activity of Wnt signaling in resistant TSCC cell lines. Moreover, we identified a SOX8 DNA-binding site in the promoter region of FZD7, a vital receptor of Wnt signaling. Our research is the first to explicitly show that SOX8 promotes CSC properties, chemoresistance, and EMT in TSCC by acting on the FZD7-mediated Wnt/β-catenin pathway.
In summary, we identified SOX8 as the most differentially expressed protein between two cisplatin-resistant cell lines and their parental TSCC cells, and ectopic expression of SOX8 promoted chemoresistance, CSC properties and EMT features in chemoresistant TSCC cells. The Wnt/β-catenin pathway mediated the cancer stem-like properties of cisplatin-resistant TSCC cells. SOX8 bound to the promoter region of FZD7 and activated the FZD7-mediated Wnt/β-catenin pathway to regulate chemoresistance, stem-like properties and EMT. Furthermore, SOX8 was upregulated in chemoresistant TSCC patients and significantly associated with lymph node metastasis and poor prognosis. | 2018-04-03T01:39:03.874Z | 2018-03-15T00:00:00.000 | {
"year": 2018,
"sha1": "d6040b95bde614f0d27f68e6336452aecf883d21",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ijc.31134",
"oa_status": "HYBRID",
"pdf_src": "Wiley",
"pdf_hash": "d6040b95bde614f0d27f68e6336452aecf883d21",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
253966486 | pes2o/s2orc | v3-fos-license | Advances in E-Health and Mobile Health Monitoring
E-health as a new industrial phenomenon and a field of research integrates medical informatics, public health and healthcare business, aiming to facilitate the provision of more accessible healthcare services, such as remote health monitoring, reducing healthcare costs and enhancing patient experience [...].
Introduction
E-health as a new industrial phenomenon and a field of research integrates medical informatics, public health and healthcare business, aiming to facilitate the provision of more accessible healthcare services, such as remote health monitoring, while reducing healthcare costs and enhancing patient experience. A wide array of new technologies has been introduced for developing mobile solutions in healthcare. Mobile apps have been the most popular trend due to the wide availability and affordable price of smartphones. The majority of smartphone-based studies employ wearable sensors and AI (artificial intelligence) techniques to collect and analyze multiple vital signs on phones and then present useful reports or recommendations to users. While e-health and mobile health monitoring are rapidly growing fields of research and practice, challenges remain that need to be addressed, and there are areas that could benefit from further improvement. Despite the large number of studies on mobile health-monitoring apps, their adoption in clinical practice and studies has been limited. The lessons learned from existing studies could significantly contribute to a better understanding of how new approaches and technologies can be implemented and used more effectively and efficiently.
This Special Issue aims to address this gap in the research by discussing and sharing the results and implications of new methods and platforms introduced in e-health and mobile health monitoring, and providing a comprehensive review and analysis of the key areas within this domain.
Eight high-quality articles were selected and published in this Special Issue. Three articles are review articles, and one article conducts a comparative analysis of data-mining techniques and applies them to a real-world data set, reporting on the opportunities such approaches offer to improve healthcare processes. The four remaining articles present original research that introduces new methodologies, frameworks and algorithms for enhancing mobile health monitoring.
Overview of Contribution
Smartphones have been the key enabler of e-health solutions. Modern smartphones have built-in sensors, such as cameras and accelerometers, that allow data to be collected and analyzed on the phone using machine- and deep-learning techniques. The first article [1], by Joachim and colleagues, uses the smartphone's camera to capture an image of food and automatically identifies the food item using AI-driven image analytics. The results are used for nutrition management and behavior intervention through nudge theory. Kaur et al., the authors of the second article [2], introduce a smartphone-based ecological momentary assessment (EMA) method for collecting physical-activity data automatically and continuously on the phone for people with low-back pain. The collected data were then analyzed to recognize physical activities, such as sitting and walking, and to understand their relationships with pain intensity. The results contribute to enhancing the data-collection process in clinical trials of low-back pain, which generally use self-reporting questionnaires to collect data at sparse intervals. The third article [3], by Kim et al., makes technical and experimental contributions to improving the accuracy of recognizing physical activities on mobile devices. The authors propose a novel algorithm that enhances transformer-based models by using the conformer model usually applied in speech recognition.
Smartphones and their built-in sensors provide many opportunities for health monitoring. In their article [4], Kulkarni et al. conducted a systematic review to understand the capabilities and limitations of smartphone sensing. The review presents interesting findings, including opportunities for using standardized sensing approaches and machine-learning advancements, and the predominance of mental health studies.
Mental health conditions have recently been recognized as a serious issue worldwide. A large number of studies strive to build mobile interventions to assist individuals with the monitoring and self-management of these conditions. Conversational agents and chatbots have been a promising technology for building mobile mental health apps. The success of these apps relies largely on their AI capabilities and personalization. The fifth article [5], by Rathnayaka et al., explores the use of behavioral activation (BA) therapy in building more effective chatbots for supporting mental health. The study's methodology and its participatory evaluation in a pilot-study setting offer useful insights to the research community in this field.
Data mining has become widely used for healthcare data analytics in hospital settings. The application of such techniques requires a combination of the technical expertise of computer scientists and the domain knowledge needed to prepare data sets for analysis and to interpret the results. In the sixth article [6], Gurazada and colleagues explore a common problem: prolonged length of stay in an emergency department. The article reviews data-mining techniques that have been applied to predict the factors affecting patients' length of stay. As a result, the authors chose an approach suitable for a data set provided by the hospital. They present the lessons learned and identify some future research opportunities based on this application.
E-health has experienced a remarkable evolution with regard to telehealth adoption since the start of the COVID-19 pandemic. The seventh article [7] presents a systematic review conducted by Murphy et al. that investigates the social, psychological, health and economic impacts of COVID-19 on cancer patients. The review identifies and discusses telehealth successes and failures, and the results provide a valuable guide to healthcare providers to better prepare their future operations.
Following the COVID-19 pandemic, the importance of disease screening using smartphones has been stressed further. The final article [8], by Moses et al., involves a systematic review of the literature to examine the use of mobile apps for disease screening and technology acceptance among the users and healthcare practitioners. The results could inform future research on assessing mobile apps as a reliable screening tool.
Conclusions
This Special Issue was created to collate original research papers and review articles that explore new techniques, solutions and applications in e-health and mobile health monitoring. The collection of eight articles in this Special Issue demonstrates the variety of studies in this domain and a rich repertoire of approaches and technical solutions identified by the authors to address the complex needs of healthcare stakeholders, particularly in the Australian context. The entire collection makes an important contribution to the field of digital health in general, and to technological advancements for health monitoring in particular. That said, the opportunities do not come without challenges, and we would like to thank the authors for explicitly describing the special conditions required for their propositions to succeed, as well as for highlighting the potential real-life challenges of implementing such solutions, especially in the systematic reviews they conducted of telehealth, mobile disease screening and smartphone sensing.
The articles and their results provide valuable information about potential challenges and opportunities for the research community in e-health and mobile monitoring, and identify future research directions for those interested in conducting similar research. | 2022-11-27T05:14:16.666Z | 2022-11-01T00:00:00.000 | {
"year": 2022,
"sha1": "18eaa0dd57bcdce021b1ca31f3ded8cec6816358",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/22/22/8621/pdf?version=1668497189",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "18eaa0dd57bcdce021b1ca31f3ded8cec6816358",
"s2fieldsofstudy": [
"Medicine",
"Computer Science",
"Business"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
13118905 | pes2o/s2orc | v3-fos-license | A Genetically-Encoded YFP Sensor with Enhanced Chloride Sensitivity, Photostability and Reduced pH Interference Demonstrates Augmented Transmembrane Chloride Movement by Gerbil Prestin (SLC26a5)
Background Chloride is the major anion in cells, with many diseases arising from disordered Cl− regulation. For the non-invasive investigation of Cl− flux, YFP-H148Q and its derivatives Clomeleon and Cl-Sensor were previously introduced as genetically encoded chloride indicators. Neither the Cl− sensitivity nor the pH susceptibility of these modifications to YFP is optimal for precise measurements of Cl− under physiological conditions. Furthermore, the relatively poor photostability of YFP derivatives hinders their application for dynamic and quantitative Cl− measurements. Dynamic and accurate measurement of physiological concentrations of chloride would significantly improve our ability to study the effects of chloride on cellular events. Methodology/Principal Findings In this study, we developed a series of YFP derivatives to remove pH interference, increase photostability and enhance chloride sensitivity. The final product, EYFP-F46L/Q69K/H148Q/I152L/V163S/S175G/S205V/A206K (monomeric Cl-YFP), has a chloride Kd of 14 mM and a pKa of 5.9. Its bleach time constant of 175 seconds is over 15-fold greater than that of wild-type EYFP. We have used the sensor fused to the transmembrane protein prestin (gerbil prestin, SLC26a5), and shown for the first time physiological (mM) chloride flux in HEK cells expressing this protein. This modified fluorescent protein will facilitate investigations of the dynamics of chloride ions and their mediation of cell function. Conclusions Modifications to YFP (EYFP-F46L/Q69K/H148Q/I152L/V163S/S175G/S205V/A206K, monomeric Cl-YFP) result in a photostable fluorescent protein that allows measurement of physiological changes in chloride concentration while remaining minimally affected by changes in pH.
Introduction
Chloride is the major anion in cells, and plays various physiological roles. For example, chloride is a key determinant of intestinal fluid secretion and cell volume, primarily through its effect on osmotic gradients [1][2][3]. Chloride is also important in setting the neuronal resting membrane potential through a number of chloride channels and inhibitory neurotransmitter receptors with chloride conductance [4]. As a natural consequence of affecting these diverse phenomena, many diseases result from disordered Cl− regulation [5]. These diverse effects of chloride underscore the need to measure physiological intracellular chloride concentration accurately and dynamically.
While many proteins work to regulate intracellular Cl−, others are regulated by intra- and extracellular Cl−. For example, the unique membrane protein prestin in outer hair cells (OHCs), SLC26a5, functions as an ultrafast molecular motor, converting electrical to mechanical energy. This protein is thought to bring about cochlear amplification in mammals, which is responsible for the exquisite sensitivity of mammalian hearing. Intracellular chloride ions in the 0-10 mM range regulate the behavior of prestin, shifting its voltage responsiveness by 22 mV per mM of chloride (with gluconate as the counter anion). These in vitro findings have been supported by in vivo experiments demonstrating effects of chloride on cochlear amplification [6]. Hence, dynamic monitoring of intracellular Cl− concentrations near prestin's intracellular chloride binding site will aid in understanding chloride's role in cochlear amplification. We previously used the fluorescent dye MQAE to measure chloride flux in OHCs [7], but the technical problems with this approach are overwhelming. Consequently, we have worked to develop a genetically encoded chloride sensor that can be tagged onto the intracellular C-terminus of prestin, sensing chloride fluctuations during prestin activity.
EYFP, GFP-S65G/V68L/S72A/T203Y, has proven to respond rapidly and reversibly to concentration changes of halides, which enables YFPs to act as genetically encoded Cl− sensors in living cells. YFP-H148Q and its derivatives [8], including the CFP-YFP-based Clomeleon [9], were introduced as genetically encoded chloride indicators, in which H148Q enhances halide affinity through better binding-cavity access via a solvent channel, thereby favoring chromophore protonation following halide binding to reduce fluorescence [20]. The Kd of YFP at pH 7.5 is 777 mM, and 154 mM for YFP-H148Q, far removed from intracellular [Cl−] under physiological conditions. In the YFP-H148Q library, I152L and V163S exhibit higher chloride affinity, with Kd values of 88 mM and 62 mM, respectively [10,21,22]. The triple mutant YFP-H148Q/I152L/V163S was adopted in the CFP-YFP-based emission-ratiometric Cl− indicator (Cl-Sensor) [11]. In Cl-Sensor, the real sensor for chloride is the YFP mutant itself, with Kd ≈ 30 mM, rather than requiring a peptide linker between CFP and YFP, or an extrinsic sensor such as calmodulin in Cameleon [23]. In YFPs, the halide-binding cavity near the chromophore has the ability to modulate the protonation state of the chromophore, and this is the basis of YFP's chloride sensitivity. In the CFP-YFP-based indicator, CFP is insensitive to halides, providing an in situ ratiometric calibration for YFP.
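For orientation, the quenching behavior of such sensors can be summarized with a minimal single-site binding model (an illustrative assumption on our part; the authors themselves fit a four-parameter Hill function, see Methods):

$$\frac{F}{F_0} \approx \frac{1}{1 + [\mathrm{Cl}^-]/K_d},$$

so fluorescence is roughly halved when [Cl−] equals Kd. This makes clear why a Kd near resting intracellular [Cl−] (roughly 5-60 mM) maximizes sensitivity, whereas the 777 mM Kd of native YFP leaves it nearly unresponsive over the physiological range.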
Compared to calcium FP-sensors, YFP-based chloride sensors have less sensitivity (the high-mM scale instead of μM or nM), less photostability, and are usually accompanied by significant, confounding pH effects near physiological pH. To date, YFP-based Cl− indicators have been limited in their application for these three reasons. In this study, we developed a series of YFP mutants extending previous work. We introduced a positively charged residue, Q69K, into the halide-binding cavity to decrease the pKa, and two folding mutations, F46L/S175G, to enhance folding. Furthermore, we uncovered a key mutation in the proton delivery pathway, S205V, which 1) increased the time constant of photobleaching 15-fold over wild-type EYFP, 2) lowered the pKa away from physiological pH, and 3) enhanced the chloride sensitivity to 14 mM. These improvements provide a superior chloride sensor. Finally, using this new enhanced chloride sensor, we demonstrate dynamic flux of chloride in the mM range into prestin-expressing HEK cells.
ClsM, a YFP Modification with Intermediate Chloride Sensitivity and pH Dependence
In designing a chloride-sensitive YFP, we evaluated and adopted a number of amino acid changes that had previously been shown to enhance fluorescence and stability. The maturation of YFP includes two phases, peptide-chain folding and chromophore formation. F46L significantly accelerates chromophore oxidation, while the well-known folding mutations F64L, M153T, V163A and S175G facilitate the folding of the peptide chain [20,24]. These mutations that affect folding and the rate-limiting chromophore oxidation step also affect chloride sensitivity. F64L counteracts the conformational change in orientation caused by V68L in the YFP variant Venus, and also reduces halide sensitivity by preventing halide-ion access to its binding site [24]. V68L is included in native EYFP, and we therefore opted not to include F64L in our new Cl− sensor. Since V163A may also be involved in eliminating chloride sensitivity in Venus, presumably by shortening the side chain, we introduced V163S, with a longer side chain that enhances halide sensitivity [10] while simultaneously maintaining photostability. Similarly, S175G, which breaks an existing hydrogen-bond network, facilitates folding and enhances the fluorescence intensity of Venus [24,25] and ECFP [25], and was incorporated in our constructs. M153T, by virtue of its smaller side chain and increased flexibility [26], has also been shown to facilitate folding and increase fluorescence intensity. However, this mutation seemed to affect the expression of YFP in several of our constructs and was therefore not incorporated in our final constructs.
There are nine residues in the halide-binding cavity near the chromophore of YFP that interact directly with, or lie within 5 Å of, the bound halide anion, namely Q69, R96, V150, I152, V163, F165, Q183, L201 and Y203 [20]. T203Y is the key mutational difference between YFP and GFP, and I152L/V163S already exist in the sequence of EYFP-H148Q/I152L/V163S from which we began our development effort. Q69 is fairly close to the chromophore anion inside the β-barrel of YFP. It was previously reported that Q69K could promote the anionic form of the chromophore, hindering its protonation and therefore reducing the apparent pKa to 6.1, with little effect on its other sensitivities [27]. But we found that the mutant Q69K, when added to EYFP-H148Q/I152L/V163S, folds poorly at 37 °C, as was previously reported for EYFP-V68L/Q69K [27], possibly because the extra length of the lysine side chain disturbs the hydrogen-bond network in the halide-binding cavity. Fortunately, this folding problem of Q69K can be compensated by the folding mutation F46L. F46L greatly accelerates the oxidation of the chromophore at 37 °C, the rate-limiting step of chromophore maturation, without changing the pKa of EYFP [24]. The new construct, EYFP-F46L/Q69K/H148Q/I152L/V163S, which we term ClsM, expressed very well in HEK-293 cells, showing bright fluorescence. The pKa of ClsM was reduced to 6.5-7.1, dependent on the chloride concentration (0.2 mM and 140 mM, respectively). The chloride sensitivity of ClsM remained at ≈30 mM (data not shown). Interestingly, the alternative replacement of Q69 with arginine could barely fold to exhibit fluorescence in transiently transfected HEK-293 cells, probably because of the large side chain of the arginine residue. In this study, all further mutants were made based on ClsM.
Mutations to Reduce pH Sensitivity Resulted in Enhanced Photostability and Further Increased Chloride Sensitivity
Positive charges in the halide-binding cavity close to the YFP chromophore not only increase the affinity of halide binding, but also promote the anionic state of the chromophore, which in turn can reduce the pKa of the chromophore by dispersing negative charge over its conjugated structure. So our first strategy for improvement was introducing more positively charged residues into the halide-binding cavity.
We evaluated the addition of another positive charge besides Q69K into the halide-binding cavity of ClsM, attempting to reduce confounding pH effects by single-site mutation of the individual residues V150, F165, Q183 and L201 to lysine or arginine. However, none of the double-charge mutants (V150K/R, F165K/R, Q183K/R, L201K/R, each in combination with Q69K) could fold to exhibit fluorescence, even in the presence of two additional folding mutations, M153T and S175G, as well as F46L. The double-charge mutants Q69K/F165K and Q69K/L201K expressed somewhat better, but still showed lower than normal YFP fluorescence. We concluded that double-charge residues in the halide-binding cavity can significantly affect the folding of YFP. We speculate that this could result from the difficulty of burying an extra positive charge (in addition to R96 and Q69K) in the cavity proximal to the chromophore, due either to excess electrostatic forces or to cavity size limitations.
The other strategy to lower the pKa of the YFP-based chloride sensor is to set a barrier within the proton delivery pathway to the phenolate anion of the chromophore, which is protonated via a hydrogen-bond network composed of the main chain of residue N146, the side chain of S205 and a bridging water molecule linked with a surface water molecule. Protons are delivered from the external solvent into the YFP β-barrel through this hydrogen-bond network. If the proton delivery pathway is blocked or impeded, a lower pH (higher [H+]) is needed to overcome the obstructed proton access, meaning that the pKa is reduced. We chose S205 as the essential target site to impede proton access. S205 was substituted with neutral residues lacking hydroxyl groups, such as alanine, leucine, isoleucine and valine. Interestingly, we found that in addition to altering the pKa, S205 substitutions also enhanced photostability. Valine substitution of S205 gave the best photostability: the percentage of residual fluorescence after photobleaching is highest with the mutant S205V, and the time constant of fluorescence photobleaching is 175 s compared to the ClsM bleaching time constant of ≈10 s (Fig. 1). Note that the folding mutations S175G and M153T do not enhance photostability.
Work from the Tsien laboratory has shown that the tendency of YFP to dimerize can be greatly reduced or eliminated by mutating the hydrophobic amino acids in the dimerization interface to positively charged residues [28]. The order of effectiveness is A206K > L221K > F223R. The dissociation constant Kd of mYFP-A206K derived from the association constant (Ka) is much higher than that of mYFP-L221K and mYFP-F223R [29]. Because prestin can oligomerize, we included the A206K mutation in the construct to make a monomeric version that avoids aggregation when making fusion proteins with a targeting protein. Therefore, EYFP-F46L/Q69K/H148Q/I152L/V163S/S175G/S205V and its monomeric version with A206K, now termed Cl-YFP and mCl-YFP (or mClY for short), respectively, were characterized in detail to determine their pKa and chloride sensitivity.
Cl-YFP and mCl-YFP are significantly less pH sensitive than ClsM, with a pKa of ~5.3 for Cl-YFP (data not shown) and ~5.9 for mCl-YFP (Fig. 2A), both far removed from physiological pH. The pKa of mCl-YFP is slightly higher than that of Cl-YFP, indicating better access for protons to reach the chromophore, likely due to a smaller steric barrier at the dimer interface. Although Cl-YFP and mCl-YFP have similar time constants for photobleaching, about 175 s, the percentage of residual fluorescence after bleaching differed: 50% and 65%, respectively (Fig. 1). Finally, mCl-YFP has a higher chloride sensitivity of 14.4 mM (Fig. 2B), compared to the previously reported best value of ≈30 mM among all YFP variants [11]. It should be noted that the larger reduction in fluorescence (F/F0) at low pH (Fig. 2A) versus high chloride (Fig. 2B) likely arises because protonation of the chromophore directly quenches the fluorescence of YFP, while chloride binding changes the proton affinity of the chromophore and consequently reduces fluorescence intensity. The residual fluorescence that we measure likely depends on the oxidation equilibrium of the fluorophore under the given spectrum and level of illumination (e.g., optical power). If we changed these photobleaching conditions or the solution pH, these asymptotic levels might differ. The point we make is that under the same photobleaching conditions the relative response differs among mutations, with our mCl-YFP showing the best bleaching time constant, best pKa and best Kd for chloride, as well as a strong capability to eliminate YFP dimerization. The important point about bleaching is that it is minimized during the course of an experiment.
The Outer Hair Cell Protein Prestin Shows Dynamic Chloride Movement that is Demonstrable with the Enhanced Chloride Sensor
To test our improved chloride sensor, mCl-YFP was fused to the C-terminus of the OHC motor protein, prestin, and the construct was transfected into HEK-293T cells. After 24 hrs of incubation, chloride flux into the transfected cells was monitored during changes in extracellular chloride from 0.2 mM to 140 mM Cl− (Fig. 3). The standard bath solution contained 0.2 mM chloride. Upon perfusion with higher-chloride solutions, the fluorescence dropped immediately, demonstrating chloride influx. The most sensitive response was near the Kd of the sensor. Standard calibration of the sensor with 100 μM TBT and 50 μM nigericin provides a translation of fluorescence response to chloride concentrations. TBT and nigericin are ionophores that allow Cl− and H+ ions to pass freely through the cell membrane.
In experiments using prestin-mClY, we noted a decrease in fluorescence upon exposure to increasing concentrations of extracellular chloride, compared to mCl-YFP fused to the control membrane protein CD80 (a B cell protein whose extracellular portion acts as a co-stimulatory molecule for T cells, and which has no known chloride-transport function). The relative reduction in fluorescence was proportional to the concentration of extracellular chloride applied to the cell. Although the differences in individual mean fluorescence did not reach statistical significance, the rates of change per extracellular Cl− were significantly different (slope difference: 0.02 unit/100 mM Cl−; p < 0.05), indicating a more rapid transmembrane flux of Cl− with prestin when extracellular Cl− increases. The dilution of sub-membranous Cl− into the cytosolic Cl− pool likely limits the accumulation of the anion near the plasmalemma. Importantly, prestin-mClY demonstrated unchanged prestin function, evidenced in parameters of charge movement in the membrane (non-linear capacitance, data not shown). Moreover, there was a tight correlation between estimates of intracellular chloride concentration determined from changes in the voltage at peak capacitance (Vh) [7,30] and the decrease in fluorescence intensity of prestin-mClY. Both methods of estimating Cl− concentration are concordant with a rise in intracellular peri-membranous Cl− concentration of ~10 mM when cells were perfused with 140 mM Cl−. Note that intracellular chloride concentrations increased with both prestin-mClY and CD80-mClY, although the reduction in fluorescence with prestin was more marked. We interpret these data to suggest that prestin enhances a basal Cl− influx into the cell. The rise in intracellular juxta-membrane chloride concentration reflects a complex contribution from prestin activity (conductance or transport), native channels and transporters in the HEK cell, and diffusional dilution into the cytoplasm away from the plasma membrane. Resolving these issues will require additional work.
Discussion
Here we report the development of a powerful fluorescent chloride sensor that displays 1) superior photostability, 2) reduced susceptibility to [H+] fluctuations near physiological pH, and 3) enhanced chloride sensitivity that permits assessment of low-level physiological changes in [Cl−]. For example, Kd estimates of prestin's sensitivity to chloride range from 1-6 mM [7,31,32], and the fused prestin/mCl-YFP sensor will be ideal for monitoring chloride levels in the OHC sub-plasmalemmal compartment, where chloride concentrations have been speculated to fluctuate and affect prestin function [33][34][35]. Additionally, we demonstrate here dynamic changes in intracellular chloride concentration in response to changes in extracellular chloride that we attribute to prestin. It is unclear whether this increase in intracellular chloride concentration results from transporter activity or a more channel-like conductance [36][37][38]. We are confident that the benefits of the probe will extend to other physiological preparations as well.
Protonation & Cl− Binding
We used known characteristics of YFP and previous iterations of other chloride-sensor homologues to engineer our new sensor. In previously published YFP variants, alterations in Cl− sensitivity are usually accompanied by alterations in confounding pH effects, because halide binding promotes protonation of the chromophore, and vice versa [8,11,20]. The relationship between pKa and halide sensitivity of YFP variants exhibits features that imply positive cooperativity of protonation and halide binding [8,39]. In some variants of GFP, pH sensitivity is exploited to measure compartmental pH [12,40], but for FRET-based studies, the sensitivity of YFP to environmental pH is highly undesirable because it can interfere with the interpretation of energy-transfer efficiency or distance estimation between donor and acceptor. Indeed, YFP variants with low pH and halide sensitivity have been developed, namely Venus [24,25] and Citrine [27]. Despite the apparent interdependence of pH and halide sensitivity, the two can be separated.
In YFP, the pKa is mainly determined by two factors: the negative charge density on the phenolic oxygen of the chromophore and the local proton availability in the surrounding environment. The negative charge of the chromophore anion is mainly distributed between the phenolic oxygen and the carbonyl oxygen of the imidazolinone, as two major resonance structures. If the phenolate negative charge can delocalize over the conjugated skeleton to the carbonyl oxygen of the imidazolinone, the phenolic oxygen has less negative charge to attract protons for protonation, meaning that the pKa decreases. Our first strategy to decrease the pKa was to introduce as many positively charged residues as possible into the halide-binding cavity adjacent to the carbonyl oxygen of the imidazolinone. Positive charges can help maintain the chromophore's anionic state, and also likely provide a large fraction of the anion-binding energy that increases halide sensitivity. In our study, Q69K decreased the pKa to 6.5-7.1 from 7.1-8.0, depending on the chloride concentration, and F46L resolved the folding problem caused by Q69K.
The local proton availability around the phenolic group of the chromophore is proportional to the pH of the external solution, depending on proton accessibility. If protons have unencumbered access to the phenolate anion of the chromophore, the pKa will be higher. When proton access along the proton delivery pathway is encumbered, the pKa value decreases, indicating that a higher H+ concentration is needed to overcome the more difficult delivery. Therefore, the other strategy to decrease the pKa was to identify mutations of key residues that block or raise the barrier within the proton delivery pathway to the phenol group of the chromophore. In this pathway from the external solvent to the phenolic group of the chromophore, a water molecule adjacent to the chromophore phenolate forms H-bonds with the side chain of S205 and the main chain of N146, linking the chromophore phenolate with a surface water molecule within H-bond range (Fig. 4). Upon binding a halide ion, the distance between the bridging water molecule and the nearest surface water molecule increases to 4.7 Å, likely due to conformational rearrangement. S205 is the only residue that can be mutated to affect the bridging water molecule linked with a surface water molecule. Interestingly, S205 plays an important role in photostability, as it links E222 and the chromophore. Consistent with this, we also found that S205 mutations affect the photostability of the fluorophore, in addition to their effects on pKa.
Photostability
Using selective screening assays and directed evolution strategies, highly photostable variants of mOrange and TagRFP were developed by Tsien's group [41]. Nevertheless, the YFP photobleaching mechanism and details of the photo-reactive process remain poorly understood. We tried to endow our chloride sensor with higher photostability using a structure-guided strategy.
As noted above, we view S205 as important for both proton delivery and photostability because of the hydrogen-bond network between the chromophore phenolate, S205 and E222 (Fig. 4B). Both GFP and YFP show decarboxylation of E222, evidenced as a loss of 44 Daltons (CO2), upon intense illumination, and in the case of YFP it is associated with photobleaching [42][43][44]. Continuous illumination irreversibly photobleaches YFP into a weakly fluorescent species that absorbs at 390 nm and fluoresces at 460 nm, similar to the spectroscopic properties of the free chromophore [42]; this behavior indicates that the photon-induced chemical destruction happens within the chromophore via excited states, while the protein partially unfolds and aggregates [43,44]. Though the detailed mechanism of how E222 decarboxylation in YFP induces chromophore destruction remains to be determined, it could involve the hydrogen-bond network between the chromophore and E222. In YFP-H148Q, E222 can be H-bonded either to S205 or to the nitrogen on the imidazole ring of the chromophore, but not to both at the same time (Fig. 4).
We found that the mutation S205V increased YFP photostability (bleach tau ≈ 175 s) more than 15-fold over wild-type YFP or ClsM. In fact, not only is photostability enhanced, but the H-bond-network rearrangement afforded by the mutation of S205 reduces the pKa and improves chloride sensitivity, indicating that a new equilibrium was reached between chromophore protonation and halide binding. S205A, S205L and S205I similarly cannot support proton transfer in the absence of a hydrogen-bond donor or acceptor at position 205. S205A, which is less bulky than S205V, gives a bleach time constant of ≈30 s. S205L and S205I, which have larger side chains than S205V, show bleach time constants of ≈110 s and ≈220 s, respectively. This may indicate that the side-chain size or rotational freedom of residue 205 is important for photostability.
Interestingly, it was reported that wtGFP-S205V and wtGFP-S205A slow the travel time through the excited-state proton transfer (ESPT) pathway from several tens of picoseconds to a few nanoseconds by rearranging E222 and Thr203 to form an alternative ESPT pathway that bypasses S205 [45,46]. It is unlikely that this would occur in YFP, because YFP lacks the corresponding neutral form of the chromophore that wtGFP possesses, and the orientation of Y203 in YFP does not permit its phenolic group to participate in an alternative H-bonding network for proton transfer. Furthermore, any potential effects of these mutations on GFP photobleaching were not reported.
In summary, we have developed a YFP-based chloride sensor with enhanced chloride sensitivity and photostability and reduced confounding pH effects. We have used it to measure sub-membranous chloride flux in HEK cells when fused to the transmembrane protein prestin, and show that it is capable of monitoring changes in intracellular chloride at levels expected to have physiological impact. We also note that the stability of our YFP mutants could be useful in studies where photobleaching plays a key role, for example in single-molecule [47] and super-resolution microscopy [48] methodologies.
Gene Construct and Mutations
Mutagenesis was performed using the Quick Change method adapted from the Stratagene QC protocol. Mutations were verified by sequencing the entire gene. The vector is EYFP-N1. The sequences of EYFP-H148Q/I152L/V163S and ClsM were synthesized by GeneWiz (USA).
Cell Culture & Excitation Ratiometric Imaging System
… as the fluorescence illumination source. A shutter and filter wheel (Lambda10-3 optical filter changer with smart shutter, Sutter Instr., USA) were connected between the microscope and the illumination source, with Semrock ET430/24x-32 as the excitation filter for CFP, Semrock ET500/20x-32 as the excitation filter for YFP and Chroma HQ520LP as the emission filter. A 14-bit back-illuminated EMCCD camera system (128×128 pixels, 24 μm array, Andor iXon EM+ DU-860E, USA) was used to record the fluorescence images under CFP or YFP excitation. All peripheral hardware control, image acquisition and image processing were achieved and/or synchronized on a PC via a 16-bit/1-MHz USB data acquisition system (Personal Daq/3000 Series, IOtech, USA) using customized software (jClamp & FastLook, SciSoft, USA; www.SciSoftCo.com). The average fluorescence intensity of regions of interest (ROI) was measured, and the background fluorescence was subtracted using ImageJ. The excitation ratios (F500/F430) of fluorescence intensity were then determined. The emission spectrum of mCl-YFP has the same shape as wt-YFP, with a peak around 527 nm; this was determined at an excitation of 485 nm using a microplate reader (TECAN Infinite M1000 Pro). The determinations of photobleaching, pKa and chloride sensitivity were made using HEK-293 cells expressing the YFP mutants directly in the cytosol, not membrane-bound fusion proteins, which avoids confounding results caused by limited expression or dim fluorescence. Data were analyzed with Matlab, Origin 8.0 and SigmaPlot 10.0.
Figure 4. The H-bond network comprising the proton transfer pathway within the chromophore (yellow) of YFP-H148Q, drawn with PyMOL; water molecules are displayed as red balls. (A) YFP-H148Q without bound halide (PDB: 1F0B). The phenolic oxygen of the chromophore is H-bonded to a water molecule that is also H-bonded to the side chain of S205 and the main chain of N146; additionally, there is an H-bond to a surface water molecule exposed to the exterior solvent. E222 is H-bonded to the nitrogen on the imidazole ring, indicating that E222 is protonated, while the phenolic group of Y203 forms an H-bond with Q69 and a nearby water molecule. Notably, there is no H-bond between S205 and E222, whose nearest distance is 3.9 Å. (B) Iodide-bound YFP-H148Q (PDB: 1F09). The water molecule H-bonded to N146, S205 and the phenolic oxygen of the chromophore is separated from the surface water molecule, the distance increasing to 4.7 Å. The deprotonated E222 forms an H-bond to S205, which is in the H-bond chain with the protonated chromophore, while the phenolic group of Y203 lies near H-bonding distance of the iodide, which may explain why YFPs are halide-sensitive, since Y203 is the residue that distinguishes YFP from GFP. doi:10.1371/journal.pone.0099095.g004
Photobleaching
Photobleaching data and fluorescence images were acquired with our ratiometric imaging system controlled by jClamp & FastLook. Photobleaching efficiency at a wavelength of 500 nm is lower than at 430 nm, although the absorption at 430 nm is much lower than at 500 nm. Because of this enhanced bleaching capability, and to optimize our identification of photostable products, we bleached at the CFP excitation wavelength of 430 nm (approximately 17 mW). At the beginning of every episode, the excitation filter was changed to the YFP filter (500 nm) by the filter wheel and an image was captured by the camera; the excitation filter was then changed back to the CFP filter (430 nm) for bleaching until the next acquisition. Filter change and acquisition took about 100 ms. The protocol included 400 episodes with a 200 ms interval between episodes. CFP excitation remained on during intervals. Stable optical power at the utilized wavelengths was confirmed using an analog optical power meter (ThorLabs PM30-130, w/S130A Slim Sensor). Photobleaching of mCl-YFP did not shift its emission peak.
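A bleach time constant like the 175 s quoted above can be extracted by fitting a single-exponential decay (toward a residual plateau) to the episode-by-episode fluorescence trace. The sketch below assumes that model; the episode spacing, residual level and noise are illustrative placeholders, not the paper's data:

```python
# Sketch: fitting a bleach time constant to a fluorescence trace,
# assuming single-exponential decay toward a residual plateau.
import numpy as np
from scipy.optimize import curve_fit

def bleach_model(t, f_res, f_amp, tau):
    """Fluorescence decaying to a residual level f_res with time constant tau."""
    return f_res + f_amp * np.exp(-t / tau)

t = np.arange(400) * 0.3                 # 400 episodes, ~0.3 s apart (assumed spacing)
rng = np.random.default_rng(0)
f = bleach_model(t, 0.65, 0.35, 175.0)   # synthetic trace: 65% residual, tau = 175 s
f += rng.normal(0.0, 0.01, t.size)       # measurement noise

popt, _ = curve_fit(bleach_model, t, f, p0=(0.5, 0.5, 100.0))
print(f"residual fraction = {popt[0]:.2f}, tau = {popt[2]:.0f} s")
```

With a fitted tau in hand, one simple form of the photobleaching compensation mentioned later (our reading, not a stated procedure) would be to divide each frame's intensity by the fitted decay curve evaluated at that frame's time.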
pKa Measurement
The pKa of the YFP mutants was measured directly from the fluorescence change of transfected HEK-293T cells under local perfusion (Y-tube) with low-[Cl−] solutions (0.2 mM Cl−) at different pH values, containing 50 μM nigericin and 100 μM TBT to eliminate pH and Cl− gradients across the cell membrane, respectively. A Hill function (SigmaPlot, unconstrained, four parameters, $y = y_0 + \frac{a x^b}{c^b + x^b}$) was fitted to the data points to calculate the apparent pKa.
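A minimal sketch of that fit, assuming normalized fluorescence measured at a handful of pH values (the numbers below are illustrative, not measured data):

```python
# Sketch: estimating an apparent pKa by fitting the four-parameter Hill
# function (as in SigmaPlot) to normalized fluorescence vs. pH.
import numpy as np
from scipy.optimize import curve_fit

def hill(x, y0, a, c, b):
    """Four-parameter Hill function: y = y0 + a*x**b / (c**b + x**b)."""
    return y0 + a * x**b / (c**b + x**b)

ph     = np.array([4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5])
f_norm = np.array([0.05, 0.12, 0.30, 0.55, 0.80, 0.93, 0.98])  # assumed values

popt, _ = curve_fit(hill, ph, f_norm, p0=(0.0, 1.0, 6.0, 10.0))
print(f"apparent pKa ~ {popt[2]:.2f}")   # c is the midpoint, i.e., the apparent pKa
```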
Chloride Sensitivity Calibration
The chloride sensitivity of the YFP mutants was measured from the fluorescence change of transfected HEK-293T cells under local perfusion (Y-tube) with near-neutral solutions (pH 7.20) of different [Cl−], containing 50 μM nigericin and 100 μM TBT, according to the standard nigericin-tributyltin equilibration protocol. A Hill function (SigmaPlot, unconstrained, four parameters) was fitted to the data points to calculate the apparent Kd. Local perfusion was performed with a high-K+ (Na+-deficient) solution to minimize any pH effect on the chloride sensor from the native membrane Na+/H+ exchanger.
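The same Hill form, fitted against [Cl−] instead of pH, yields the apparent Kd, and the calibrated curve can then be inverted numerically to convert a measured intensity into an estimated [Cl−], as needed for the flux experiments below. The calibration points here are illustrative placeholders:

```python
# Sketch: chloride calibration and back-conversion. Fluorescence falls as
# chloride binds, so the Hill amplitude a is negative; the fitted curve is
# then inverted to read [Cl-] from a measured normalized intensity.
import numpy as np
from scipy.optimize import curve_fit, brentq

def hill(x, y0, a, c, b):
    return y0 + a * x**b / (c**b + x**b)

cl = np.array([0.2, 5, 10, 20, 40, 80, 140])                 # mM, perfused [Cl-]
f  = np.array([1.00, 0.78, 0.62, 0.45, 0.33, 0.26, 0.23])    # normalized F (assumed)

popt, _ = curve_fit(hill, cl, f, p0=(1.0, -0.8, 15.0, 1.0))
print(f"apparent Kd ~ {popt[2]:.1f} mM")

def cl_from_f(f_meas):
    """Invert the monotonic calibration curve numerically on 0.01-200 mM."""
    return brentq(lambda x: hill(x, *popt) - f_meas, 0.01, 200.0)

print(f"F = 0.55 corresponds to ~{cl_from_f(0.55):.1f} mM Cl-")
```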
Chloride Flux Promoted by Local Perfusion
The chloride flux into HEK-293 cells expressing prestin-mClY or CD80-mClY was measured as the fluorescence change during local perfusion with high-K+ solutions containing different chloride concentrations at pH 7.20. Photobleaching compensation was applied based on the photobleaching time constant measured prior to the Cl− perfusions. ImageJ was used to define a region of interest around membrane sections free from movement artifact, in which fluorescence intensity changed with perfusion at different Cl− concentrations. Perfusion data were analyzed using mixed-model repeated measurements with a group-specified compound-symmetry structure in SAS software. The repeated-measures design was used to compare the Cl− influx pattern between the prestin and CD80 constructs. Since each cell was perfused with a series of extracellular Cl− concentrations, fluorescence measurements within each cell were correlated. A repeated-measures analysis using the Proc Mixed procedure in SAS software (Cary, NC) was performed to model the change in fluorescence per extracellular Cl−. The dependency between repeated measures for the same cell was incorporated into the analysis using a group-specified compound-symmetry covariance structure, which assumes common variance and covariance within a group and accounts for the heterogeneous structure between groups. The interaction between group and Cl− concentration was included in the model to examine the difference in the rate of change in fluorescence between groups [49,50].
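For readers without SAS, a rough Python analogue of this repeated-measures model is sketched below using statsmodels. A random intercept per cell stands in for the group-specified compound-symmetry covariance, so this is only an approximation of the published analysis, and the data frame is hypothetical:

```python
# Rough analogue of the repeated-measures analysis, assuming a long-format
# table with one row per cell per perfused [Cl-]. The group*chloride
# interaction tests whether the slope (change in fluorescence per mM Cl-)
# differs between prestin- and CD80-expressing cells.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({                       # hypothetical long-format data
    "cell":     ["c1"]*3 + ["c2"]*3 + ["c3"]*3 + ["c4"]*3,
    "group":    ["prestin"]*6 + ["CD80"]*6,
    "chloride": [10, 70, 140] * 4,        # mM, extracellular
    "fluor":    [0.95, 0.70, 0.52, 0.93, 0.68, 0.50,
                 0.97, 0.85, 0.74, 0.96, 0.84, 0.73],
})

# Random intercept per cell models the within-cell correlation of
# repeated measurements (an approximation of compound symmetry).
model = smf.mixedlm("fluor ~ group * chloride", df, groups=df["cell"])
result = model.fit()
print(result.summary())
```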
Data Availability Statement
All data are supplied in the manuscript, including information on point mutations. Further requests can be made to the corresponding author. | 2017-04-03T04:58:56.944Z | 2014-06-05T00:00:00.000 | {
"year": 2014,
"sha1": "1cd10087b4a76c4b1d50ddea981b90e442eb75b9",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0099095&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1cd10087b4a76c4b1d50ddea981b90e442eb75b9",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
235286844 | pes2o/s2orc | v3-fos-license | X-ray spectral analysis of surface structures formed on copper alloys during film starvation
The paper establishes the nature of the redistribution of chemical elements and their concentration in secondary surface structures for the material systems "steel 45KhN2MFA-lubricant-BrOTsS4-4-2.5", "steel 45KhN2MFA-lubricant-BrOTsS4-4-4" and "steel 45KhN2MFA-lubricant-L63", tested under conditions simulating film starvation when lubricated with 15W40 Lukoil-Super SAE SG/CD engine oil. Since the formula we obtained includes the Gibbs energy, the self-organization of the structure under consideration follows naturally. The general laws of self-organization are observed when a number of conditions are met: irreversibility, openness (nonequilibrium), nonlinearity, instability (coherence) and dissipativity. All these elements were observed and described in this work. It is due to self-organization that the distribution of the chemical elements carbon and oxygen occurred on the surfaces of L63 and BrOTsS4-4-4. To this effect must be added a decrease in the surface tension of the lubricant.
Introduction
Disclosure of the mechanisms for reducing friction and wear in tribological couplings of mechanical-engineering objects, on the basis of the established patterns of formation of tribological and secondary structures in the near-surface layers, underlies not only increasing their operational reliability, but also serves as a toolkit for developing measures to ensure the manifestation of self-organization processes in tribosystems of materials. Such processes include, for example, selective transfer. This is especially important to take into account when, in complex tribosystems of materials working with a lubricant supply, hidden, previously unrevealed and undescribed interaction mechanisms may appear under certain conditions. The kinematics and dynamics of such mechanisms are primarily determined by the motion of, and the forces of interaction between, the components of the lubricating medium and the alloying elements of the alloys as the modes of force, speed and lubrication loading change. The components of the lubricating medium and the alloying elements of the alloy represent a complex environment; however, the calculations do not take into account the compressibility of the lubricating fluid or the dependence of its viscosity characteristics on pressure and temperature. A micropolar fluid is characterized by three physical constants μ, κ, γ, in contrast to a Newtonian fluid, which has only one constant, the viscosity μ. The parameter κ has the dimension of viscosity. Since it manifests itself as a result of taking microrotations into account, it is called the coefficient of viscosity during rotational motion (coefficient of vortex viscosity). It characterizes the resistance to rotational movement, just as the coefficient μ characterizes the resistance to translational movement. The γ coefficient has the dimension of viscosity multiplied by the square of a length, and with its help the length parameter ℓ = √(γ/4μ) is determined, which characterizes the size of the lubricating-liquid microparticles. Such tribosystems of materials are used, for example, in plain bearings, where the functions of the trunnion are performed by alloyed steels and cast irons. The role of the sleeve is performed either by inserts with an antifriction layer, or by a one-piece sleeve made of a copper alloy, polymer, powder composite material, etc., and their use in mechanical-engineering objects is very wide. Based on the above, there is a need to further disclose previously identified mechanisms of contact interaction between surface secondary structures, to develop new ones, and to develop recommendations for their implementation. It is also obvious that the mechanisms must be disclosed taking into account the mass transfer between the chemical elements of the tribosystem materials. Likewise, the availability of data on the molar masses of lubricants and the patterns of change in the ratios of their surface-tension and viscosity coefficients when using commercial additives, such as remetallizers and geomodifiers, will indicate the peculiarities of the behavior of the lubricating layers in tribo-conjugations. Moreover, this concerns both the lubricant circulating in the tribological system and the structures that form in an ordered manner, connected with each other and directly with the materials of the contacting surfaces.
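The dimension just stated for γ can be checked in one line from the quoted relation: since ℓ must be a length,

$$\ell = \sqrt{\frac{\gamma}{4\mu}} \quad\Rightarrow\quad [\gamma] = [\mu]\,[\ell]^2 = \mathrm{Pa\cdot s \cdot m^2},$$

so γ is a viscosity scaled by the square of a characteristic microstructural length, consistent with ℓ measuring the size of the lubricant's microparticles.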
The issues of studying the properties of new antifriction materials, including those for plain bearings, are currently relevant, as evidenced by a long series of scientific publications, including those of the authors of [1][2][3][4]. Thus, in [1], the secondary structures on the friction surfaces of aluminum alloys were investigated, the difference between the secondary structures and the alloy surfaces prior to friction was analyzed, and the properties of the AO-6.1 aluminum alloy proposed by the authors were compared with those of BrO8S12 bronze. Aluminum antifriction alloys, while yielding to bronzes in mechanical properties, significantly surpass them in tribological ones. Aluminum alloys wear the steel counterbody 6 times less than bronze; the scuffing load of aluminum alloys is 2.5 times greater than that of bronze; and the wear rate of aluminum alloys is 2 times less than that of bronze. The paper does not disclose the mechanism of friction reduction, but only shows the ability of an aluminum antifriction alloy to self-organize, forming beneficial secondary structures on the friction surface, which became one of the determining factors in choosing it as a material for monometallic bearings of diesel-locomotive turbochargers. Individual zones of the friction surfaces are not analyzed. In [2], a deep analysis of metal antifriction materials (babbitts B88, B83, B16, BKA, BK2, SnSb8Cu4) was carried out, and technological recommendations were developed for plasma-powder surfacing of a babbitt alloy of the SnSb8Cu4 grade onto a St3sp steel base. Tribotechnical tests of the developed new babbitt were carried out under conditions of dry sliding friction. At the same time, attention was not paid to the peculiarities of the formation of surface structures; to a greater extent, the wear resistance of the deposited metal was evaluated as a function of the bulk structure. In [3], the scientific foundations were developed for technologies forming new functionally graded layered compositions and coatings from composite materials based on aluminum, tin and their alloys with enhanced tribotechnical properties. The author found that the wear of composite materials occurs predominantly by an oxidative mechanism; matrix alloys have a high adhesive wear component. The following systems are considered as composite materials: Al-Si-Mg, Al-Si-Cu, Al-Mg, Al-Cu-Mg, Al-Sn-Cu, Sn-Sb-Cu, containing micron-sized particles of silicon carbide (SiC), titanium carbide (TiC), aluminum oxide (Al2O3), intermetallic compounds of the AlxTiy system, silvery graphite (C), as well as submicron particles of boron (B), boron carbide (B4C), carbon nanotubes and powders of modified shungite rock. As a result of contact interaction on the friction surfaces of samples of the developed composite materials, a transition layer or "third body" is formed, which, according to the results of X-ray phase analysis, is a mechanical mixture of the materials of the test sample, the counterbody and their oxides. In [4], it is proposed to obtain a combined electroerosive coating on BrOTsS5-5-5 bronze alloyed with silver, lead and copper, which allows running-in structures to form on its surface, reducing the friction force by 20% during the running-in period. Such a coating is not a continuous (homogeneous) layer, but takes the form of discrete zones with a maximum thickness of 30 μm; that is, a regular surface microrelief is formed.
In this case, the resulting bronze bushings have high reliability and durability during operation, because even if the coating is destroyed, the bearing continues to work [4]. However, no attention is given to disclosing the mechanism by which the friction-reduction effect manifests itself. The work [5] presents the results of tribotechnical tests of new experimental graphitized steels, secondary aluminum alloys AL25 and AlSi12Cu1(Fe), ASCh-2 cast iron, and the copper alloys BrOTsS4-4-4, BrOTsS4-4-2.5 and L63 paired with steel 45KhN2MFA, on small-sized samples according to the "movable disk-fixed block" friction scheme. This was part of research aimed at developing recommendations for selecting materials for the plain bearing of an internal-combustion-engine turbocharger. Moreover, in contrast to the works [1][2][3][4], the loading modes simulated violation of the lubrication conditions when working in 15W40 Lukoil-Super engine oil. Discontinuity of lubrication of a bearing occurs during operation of a turbocharger, and it is important to ensure stable lubrication of the friction surfaces to improve the reliability of the plain bearing and the turbocharger as a whole. A lack of stability of the lubricating action of the boundary films during film starvation was shown by BrOTsS4-4-2.5, ASCh-2 and some graphitized steels. Greater stability of the lubricating layers was provided by BrOTsS4-4-4, L63 and a graphitized steel (C - 1.78%, Si - 2.25%, Cu - 3.19%, Al - 0.23%, Mn - 0.64%, Ni - 0.15%, Cr - 0.17%, S - 0.016%, P - 0.029%). The formed lubricating films ensured boundary friction without external features of catastrophic destruction for 6.5 ± 0.5 min at a load of 165 N. At a load of 250 N for 3 min, the lubricating layers did not completely collapse. Also, only when testing L63 was the formation revealed of a secondary lubricant of dark color, in the aggregate state of a sticky paste, with an elemental content of C - 0.55%, O - 0.85%, Fe - 4.43%, Cu - 66.5%, Zn - 27.66%. This composition withstood a load of 350 N for 2.5 minutes, after which its continuity on the friction surface was broken. It seems obvious that the stability of the lubricating action under the given modes is determined by the composition of the interacting structures. However, there is no information in [5] on the chemical composition of the surface structures of the materials in question, which necessitates their X-ray spectral analysis. In [7] it is emphasized that, under modern operating conditions of lubricated tribosystems, one of the most promising regimes is fluid friction, which is realized in plain bearings and significantly reduces the power lost to overcoming friction. In this lubrication mode, the undoubted advantage belongs to the hydrodynamic process, in which the leading role is played by the lubricating medium, one of the main structural elements of plain bearings. It is also emphasized there that information on the chemical composition of the surface structures of liquid lubricants and of the alloy structural elements on the bearing surface shows that the main performance characteristics of the bearings under consideration depend on the viscosity parameter. However, as we showed in [8], surface tension plays a definite role in the distribution of chemical impurities in lubricating structures.
The aim of the work is to establish the nature of the redistribution of chemical elements and their concentration in secondary surface structures for the material systems "steel 45KhN2MFA-lubricant-BrOTsS4-4-2.5", "steel 45KhN2MFA-lubricant-BrOTsS4-4-4" and "steel 45KhN2MFA-lubricant-L63", tested under conditions simulating film starvation when lubricated with 15W40 Lukoil-Super SAE SG/CD engine oil. This entails a physical assessment of the possible behaviors of the lubricant as its parameters change: a change in dispersion due to abrasion of the surfaces of the friction-pair materials; a change in the adhesion of carbon and oxygen due to surface tension; a change in temperature and pressure in the lubricating medium due to changes in its viscosity; and a change in the chemical potential μ and electrical potential φ, leading to oxidative processes with the participation of oxygen.
Objects and methods of research
For research, we used the working surfaces of the block specimens (Figure 1), made of the rectangular bronzes BrOTsS4-4-2.5 and BrOTsS4-4-4 and of cylindrical L63 brass, and their transverse thin sections. The subject of research was the chemical composition of the secondary microstructures formed in the near-surface layers of the pads in contact with a rotating disc made of 45KhN2MFA steel. The tribological state of the surfaces of the copper alloys was formed at a sliding speed of V = 0.78 m/s and normal step loading of 165 N, 250 N and 350 N for 3 min each, on the residual components of the mineral motor oil 15W40 Lukoil-Super SAE SG/CD [5]. The following zones were visualized separately on the surfaces of the samples: on L63 brass, a golden zone with a reddish tint and a dark zone; on the bronzes, dark and yellow zones. These zones are a kind of marker for the types of lubrication manifested within the nominal friction area of the pads. Based on the characteristic colors and residual lubricating formations, the following assumption is made. In the golden zone, dry friction occurred with discontinuity of the formed lubricating films, i.e., film starvation with respect to the formed secondary lubricant. In the dark zone, there was boundary lubrication with the solid-phase material, i.e., film starvation relative to the action of the components of the original lubricant (motor oil). In the yellow zone, boundary friction (boundary lubrication) took place without disruption of the continuity of the lubricant, i.e., in the absence of film starvation. Thus, the yellow and dark zones can be considered as baseline zones for assessing the nature of the distribution of chemical elements in the formed secondary surface structures. X-ray spectral analysis of the surfaces and transverse sections of the pads was carried out on a REMMA JSM-6360 LA instrument in the linear probe-displacement mode at U = 15 kV, I = 50 nA. The penetration depth of the X-ray beam into the analyzed layers was, for the individual chemical elements: hC ≈ 2.2 μm; hFe,Sn ≈ 0.7 μm; hCu ≈ 0.6 μm; hPb ≈ 0.5 μm. The chemical composition and the concentration of elements were determined by zones (points). The distribution of scanning zones (points) was dictated by the need to obtain the most complete picture of the layer-by-layer and local character of the distribution of chemical elements. Secondly, the state of the surfaces formed during boundary lubrication is also characterized by the presence of transferred Fe and Cr, oxide compounds, and carbon; however, their ratios differ significantly. Thus, for L63 with boundary lubrication, the iron on the surface in the secondary lubricant is 1.7 times greater than with film starvation. For BrOTsS4-4-2.5, the amount of iron in the surface structure during film starvation is 6 times higher than with boundary lubrication. At the same time, iron does not concentrate at all on the surface of BrOTsS4-4-2.5 with boundary lubrication; only carbon is present, in a small amount (up to 0.93%), in the absence of oxides. Analysis of the data obtained for alloy L63 showed the following (Fig. 2):
The presence of oxygen may indicate an oxidative wear mechanism in the analyzed area. In areas 0.3×0.3 mm in size with a porous ruled relief and areas of 30×30 μm with a porous point relief, within the analyzed depths up to 0.5 μm, there is an increase in the copper content by a factor of 1.12 compared with its bulk content. At the same time, the zinc content becomes 2.6 times lower. Apparently, zinc in very small local volumes dissolved and passed into the lubricating medium, and its mass content was replaced by Fe (≈3.6%), Cr (≈7.2%), Al (≈0.9%), and Si (≈0.33%) as a result of mechanochemical interaction with the steel, and by C (≈2.2%) with S (≈0.6%) as a result of their adsorption and adhesion from the secondary lubricant (Table 1, zones 023-026). The described relief, with the average statistical redistribution of chemical elements, indicates the appearance of an increased fraction of adhesive interaction of local microfragments of surface structures with the active zones of the steel, i.e., seizure. The latter, owing to the proportion of oxygen being 1.8 times higher than in the smoothed areas, excludes the manifestation of the mechanism of plasticization and smoothing. In this case, the formed oxides are harder than those within the smoothed areas. A characteristic feature of the surface in zone B is a smoothed profile, which, at a magnification of ×1000, appears as uniformly distributed fine-pored (cellular) structures 10×10 μm in size. The exception is minor foci of damage to this microprofile in areas of 0.6×1 mm. There is no obvious decrease in the copper content. In this case, zinc is partially replaced by transferred iron (up to 4%) and chromium (up to 1.4%), and the dark background is due to adsorbed, relatively uniformly distributed carbon (up to 1%) from the secondary lubricant. The described picture of the redistribution of chemical elements in the structures formed during film starvation caused, according to the data of [5], an abrupt change in the friction coefficient from 0.2 to 0.15 and back to 0.2 within three minutes at loads of 250-350 N, with a subsequent increase in the friction coefficient to 0.4 at 450 N; with a decrease in the load to 165 N, the coefficient of friction decreased to 0.28. From the above, it seems evident that such secondary structures can partially resist pathological damage within the indicated range of force loading for 3 minutes. Taking into account the calculated thicknesses of the boundary lubricating layers considered in [6], which are 0.4-4 μm at small radii of single surface irregularities of up to 13 μm, and the analyzed thickness of the L63 surface layer of 0.5-2.2 μm when switching to the film starvation mode, the following can be assumed. Favorable conditions are created for the formation of adhesive bonds of oil components with active metal centers, with low values of shear strength. This, in turn, can be considered as one of the rationales for recommending the material L63 for a turbocharger sleeve bearing with an operating range of 90,000-100,000 min⁻¹. The analysis of the given data on the BrOTsS4-4-2.5 alloy showed the following (Figs. 5-7, Tables 3-5). The characteristic features of the surface in zone A (Fig. 5b, c) are strip-like areas formed in the direction of sliding, without obvious pathological differences. A feature of the relief is that the bottoms of the stripes are smoother than the tops.
A change in the content of copper and alloying elements in the bronze under boundary lubrication is practically not observed, with the exception of certain areas in which there is an insignificant transfer of chromium (up to 2.6%) and iron (up to 1.5%), with a very small amount of oxygen (up to 0.4%) (Table 3). Characteristic features of the surface in zone C (Fig. 6b, c) are areas up to 6-8 mm wide with a smoothed and "ripped, loose" strip-like relief. The stripes are formed in the direction of sliding. There is a slight decrease in the copper content, by 10-13% in comparison with its bulk content; a similar decrease in content is characteristic of zinc, tin, and lead. There is a fairly high average statistical content of transferred iron (8-14%) and chromium (1.6-2.2%) (Table 4). These alloying elements of steel can be present both in pure form and as oxides, since there is an insignificant presence of oxygen (0.6-0.94%). No carbon from the composition of the engine oil was found on the surface, which excludes the possibility of the formation of a solid lubricating secondary material, as is typical for the L63 alloy. It seems evident that there is a significant difference in the amount of transferred iron (Fe) during film starvation: its increase is 4-7 times compared to the mass transfer under boundary lubrication. This fact determines the lack of stability of the model tribo-conjugation with bronze BrOTsS4-4-2.5 noted in [5]. In this regard, it is not advisable to consider recommendations for using this material to initiate self-organizing processes under conditions of film starvation with low friction and wear indicators. The analysis of the given data on the BrOTsS4-4-4 alloy showed the following (Figures 8-10, Tables 6-8). The characteristic feature of the surface in zone A (Fig. 8) is a flat, even profile without any foci of pathological destruction. Moreover, the distribution of chemical elements over the surface is even, practically not differing from the average distribution in the volume of the material. Carbon adsorbed from the engine oil is present at up to 1%; in this case, there is no oxygen. The characteristic feature of the surface in zone B (Fig. 9) is also a flat, even profile with a slight formation of pores of various depths, rounded and elongated. At the same time, the distribution of chemical elements over the surface is even, but with a 1.8 times reduced zinc content (Table 7). Presumably, carbon displaced zinc by mass in the formation of the secondary structure, the zinc passing into the engine oil during the tests. The cause of pore formation most likely lies in the transferred iron. However, its amount is at least 2.6-4.6 times less than it was for BrOTsS4-4-2.5. Based on the above, it seems evident that the described redistribution of chemical elements, taking into account their mass transfer and adsorption, determines the stability of the functionality of the lubricating layers, including during film starvation. BrOTsS4-4-4 is capable of creating the preconditions for carbon adsorption and thereby providing the ability to operate for a short time during film starvation, as was found earlier in [5]. Thus, as in the case of L63, favorable conditions appear for the formation of adhesive bonds of oil components with active metal centers, with low values of shear strength.
This, in turn, can be considered as one of the rationales for recommending the BrOTsS4-4-4 material for a turbocharger sliding bearing with an operating range of 90,000-100,000 min⁻¹ [6]. The above X-ray data can be regarded as manifestations of nanosuspensions arising during the rotation of bearings. The theory of the effective viscosity of dilute, coarsely dispersed suspensions was developed by A. Einstein, who found that the effective viscosity coefficient η increases in proportion to the volume concentration φ of dispersed particles:

η = η₀(1 + 2.5φ), (1)
where η₀ is the coefficient of viscosity of the carrier fluid. In deriving this relation, the perturbations introduced into the velocity field of the carrier fluid by an isolated solid particle were taken into account, and the additional stresses associated with this were calculated. Subsequent experiments showed that formula (1) does not adequately describe nanofluids: the conclusion that the effective viscosity of a nanofluid depends not only on the volume concentration of nanoparticles, but also on their mass and size, is one of the key ones in [9]. It was found that the dependence of the viscosity coefficient on the volume concentration of nanoparticles is described by a quadratic function of the form (2), unless these concentrations are too high:
η = η₀(1 + k₁φ + k₂φ²), (2)

where the coefficients k₁ and k₂ of this correlation are functions of both the sizes of the nanoparticles R and their masses M. An important criterion that determines the growth of the effective viscosity of a nanofluid is the ratio of the mass densities of the nanoparticle material and the carrier molecules, ρ = ρₙ/ρₘ. The meaning of the coefficients k₁ and k₂ can be clarified using the approach outlined by us in [8]. For the viscosity coefficient, an expression (3) can be written (see formula (4) from work [8]) in which σ is the surface tension of the lubricant, ρ is the density of the lubricant, G₀ is the Gibbs energy of the lubricant, and C is some constant. The change in the Gibbs energy dG₀ due to the physicochemical processes that occur, for example, during the rotation of bearings, is equal to:

dG = -SdT + VdP + σdS + Σμᵢdnᵢ + φdq. (4)

This equation expresses the increment of the Gibbs energy of the system in terms of the algebraic sum of the increments of other types of energy. This sum indicates five possible processes of transformation of surface energy (respectively, from left to right): 1) into Gibbs energy, dG; 2) into heat, -SdT; 3) into mechanical energy, VdP; 4) into chemical energy, Σμᵢdnᵢ; 5) into electrical energy, φdq. This demonstrates broader possibilities for achieving a minimum of energy and corresponds to certain surface phenomena, for example (following the scheme from left to right): 1) a change in reactivity with a change in dispersion; 2) adhesion and wetting; 3) capillarity; 4) adsorption; 5) electrical phenomena. Thus, equations (3) and (4) cover all cases of the behavior of the lubricant when its parameters change: a change in dispersity due to abrasion of the surfaces of the materials of a friction pair; a change in the adhesion of carbon and oxygen due to surface tension; changes in temperature and pressure in the lubricating medium, leading to a change in its viscosity; and a change in the chemical potential μ and the electric potential φ, leading to oxidative processes with the participation of oxygen. Since formula (3) includes the Gibbs energy, the self-organization of the structure under consideration naturally also enters the description. General laws of self-organization are observed when a number of conditions are met: irreversibility, openness (nonequilibrium), nonlinearity, instability (coherence), and dissipativity [9]. All these elements were observed and described by us above. It is due to self-organization that the distribution of the chemical elements carbon and oxygen occurred on the surfaces of L63 and BrOTsS4-4-4. To this effect it is necessary to add a decrease in the surface tension of the lubricant.
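As a purely numerical illustration of relations (1) and (2) as reconstructed above, the sketch below compares the Einstein estimate with the quadratic nanofluid correlation. The carrier viscosity and the coefficients k₁ and k₂ are placeholder values, since [9] defines k₁ and k₂ as functions of R and M that are not reproduced in this text.

```python
# Minimal numeric sketch of effective-viscosity relations (1) and (2).
# k1 and k2 below are illustrative placeholders only: in [9] they are
# functions of particle radius R and mass M, not reproduced here.

def einstein_viscosity(eta0: float, phi: float) -> float:
    """Einstein relation (1) for dilute, coarsely dispersed suspensions."""
    return eta0 * (1.0 + 2.5 * phi)

def nanofluid_viscosity(eta0: float, phi: float, k1: float, k2: float) -> float:
    """Quadratic correlation (2) for nanofluids at moderate concentrations."""
    return eta0 * (1.0 + k1 * phi + k2 * phi ** 2)

eta0 = 0.10   # carrier-oil viscosity, Pa*s (illustrative)
phi = 0.02    # 2 vol.% of dispersed particles

print(einstein_viscosity(eta0, phi))                 # 0.105 Pa*s
print(nanofluid_viscosity(eta0, phi, 10.0, 400.0))   # 0.136 Pa*s, grows faster than (1)
```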
Conclusion
The performed X-ray spectral analysis of the surfaces of copper-containing materials made it possible, on the basis of the kinetics of chemical elements, to present the characteristic features of the surface structures of copper alloys that determine the manifestation of mechanisms for reducing friction in the previously tested model sliding tribo-couplings. The experimental data considered allow us to draw a conclusion about the self-organization of surface structures during film starvation. In this case, the manifestation of irreversibility lies in the fact that during the operation of the material, both in its surface and in its inner layers, certain qualitative changes occur, confirmed by X-ray spectral analysis, which, under certain conditions, lead to wear, chipping, or volumetric destruction, forming micro- and macrorelief. The manifestation of openness (non-equilibrium) lies in the fact that friction is not constant and depends on certain tribological phenomena that occur in the lubricating film | 2021-06-03T00:39:07.316Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "60419beac6a400a3f81a1fb77c6684c532602107",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1901/1/012091",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "60419beac6a400a3f81a1fb77c6684c532602107",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
31876841 | pes2o/s2orc | v3-fos-license | A contribution to Asian Afidentula Kapur (Coleoptera, Coccinellidae, Epilachnini)
Abstract Two new species of Afidentula, Afidentula dentata sp. n. and Afidentula jinpingensis sp. n., are described from China. Afissa siamensis Dieke is moved to Afidentula comb. n. All three species are described and illustrated, and a distribution map is given. A key to the Asian species of Afidentula is updated. Diagnostic similarities and differences between Afidentula and Afidenta are discussed and illustrated.
Introduction
The genera Afidentula Kapur, 1958 and Afidenta Dieke, 1947 belong to the tribe Epilachnini Mulsant, 1846, a group of phytophagous Coccinellidae. The taxonomy and nomenclatural history of the species of both genera have been confused for decades.
The genus Afidenta was established by Dieke (1947) for species having bifid claws with a sharp basal tooth and the sixth abdominal ventrite of the female not longitudinally divided. Afidenta mimetica Dieke (=A. misera (Weise)) was designated as the type species. Two other species, A. minima (Gorham, 1894) and A. bisquadripunctata (Gyllenhal in Schönherr, 1808), were placed in this genus at the same time, although Dieke noted that the mandibles and male genitalia of A. bisquadripunctata differed from those of the type species of Afidenta. Kapur (1958) established the genus Afidentula with Epilachna manderstjernae Mulsant as the type species and distinguished it from Afidenta by the antennae subequal to the width of the head, with a relatively thick and compact club, and by the subtriangular mandibles with three teeth and without any additional denticulations or serrations. Kapur (1958) also pointed out that both A. minima and A. bisquadripunctata should not belong to Afidenta, but transferred only A. minima to Afidentula. Subsequently, Afidentula himalayana Kapur, 1963 from India and A. thanhsonensis Hoang, 1977 from Vietnam were described, and several other mainland Asian species were added to that genus; for example, Epilachna stephensi was transferred to Afidentula by Booth and Pope (1989). Bielawski (1963) transferred the Papuan Epilachna aruensis Crotch to Afidentula, and Bielawski (1963, 1965) and Jadwiszczak (1986) added further new species from New Guinea.
Li in Li and Cook (1961) described Afidenta arisana from Taiwan, which was moved to Afissula Kapur by Zeng (1995). Pang and Mao (1979) transferred Afissa siamensis Dieke into Afidenta and moved A. bisquadripunctata into Afidentula. Chazeau (1975, 1976) studied African Epilachninae and described 29 new species, which included nine species of Afidenta. Fürsch (1986) revised the species of Afidenta, describing five new species and including 25 species, but not Chazeau's (1975, 1976) species. Jadwiszczak and Węgrzynowicz (2003) listed 39 species belonging to Afidenta (of which 37 are distributed in Africa and two in Asia) and 18 species of Afidentula (11 species distributed in mainland Asia and seven in New Guinea and Aru Island). Tomaszewska and Szawaryn (2013) and Szawaryn and Tomaszewska (2013) revised the Asian and Papuan species of Afidentula. They concluded that the mainland species of Afidentula form a uniform group which can be characterized by: a comparatively small body; brown colour with black markings on the elytra; compact and short mandibles provided with three apical teeth, of which only the middle one is sometimes weakly serrated; maxilla with basistipes and mediastipes separated entirely or almost so; terminal labial palpomere shorter than the subterminal one; tibial spurs absent; tarsal claw with a basal tooth; and sternite VIII in females undivided. The species from New Guinea and Aru Island are considerably different, having among other features a much larger body, entirely black or black with orange spots on the elytra; mandibles large and thin laterally, with apical and subapical teeth, often additionally serrated; elytral epipleura complete (incomplete in Afidentula); the distance between the antennal sockets about three or four times greater than the distance between an antennal socket and the inner margin of the eye (in Afidentula this distance is about twice as great); coxites with styli; and the tegmen with stout parameres. For the New Guinean species, Szawaryn and Tomaszewska (2013) proposed a new genus, Papuaepilachna, and for A. aruensis from Aru Island a new genus, Lalokia. Szawaryn et al. (2015) conducted phylogenetic research on Epilachnini based on molecular and morphological data. According to this study, neither Afidenta nor Afidentula was recovered as a monophyletic group, and each of them has been redefined. The studied species of Afidenta from Africa formed a monophyletic clade with the Asian mainland species of Afidentula, and the exclusion of the Papuan species from Afidentula was confirmed by the study. Of the two species of Afidenta from Asia, the type species (A. misera) was studied, and it formed a separate clade by itself, based on the following combination of characters: ventral surface of the mandible densely tuberculate; galea transversely oval; terminal palpomere of the labium distinctly narrower than the penultimate one; metaventral postcoxal lines joined, or almost so, on the metaventral process, forming a somewhat w-shaped line along the discrimen; male tergite VIII rounded apically; and styli absent. The definition of Afidentula has been extended after inclusion of the African species of Afidenta and some Malagasy Epilachna and Henosepilachna, and it has been characterized by the following combination of characters: gular sutures shorter than half the length of the gula; mandibular incisor edge without teeth; terminal maxillary palpomere weakly elongate, expanded apically; labial apical palpomere distinctly narrower than the penultimate palpomere; and styli absent.
Based on the results of the phylogenetic analyses of Szawaryn et al. (2015), the present paper describes two new species of Afidentula from China, A. dentata sp. n. and A. jinpingensis sp. n. The study of Afidenta siamensis permits the move of this species from Afidenta to Afidentula as Afidentula siamensis comb. n., thereby confirming that Afidenta now includes only one species.
Material and methods
The external morphology was observed with dissecting stereoscopes (SteREO Discovery V20, Zeiss, and Leica MZ APO). The following measurements were made with an ocular micrometer: total length, from apical margin of clypeus to apex of elytra (TL); total width, across both elytra at widest part (TW = EW); height, from the highest part of the beetle to the elytral outer margins (TH); head width in front view, at widest part (HW); pronotal length, from the middle of the anterior margin to the margin of the basal foramen (PL); pronotal width at widest part (PW); elytral length, along the suture, from the apex to the base including the scutellum (EL). Male and female genitalia were dissected, cleared in a 10% solution of NaOH by boiling for several minutes, and examined with Olympus BX51 and Leica compound microscopes.
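Descriptions and keys in this group commonly report such measurements as ratios (e.g., TL/TW). Purely as an illustration of how the conventions above combine, the sketch below computes those ratios for a hypothetical specimen; the values are not taken from the examined material.

```python
# Illustrative helper for the measurement conventions defined above.
# The specimen values are hypothetical; ratios such as TL/TW are the
# usual form in which these measurements enter descriptions.

def ratios(tl: float, tw: float, pl: float, pw: float, el: float) -> dict:
    return {
        "TL/TW": round(tl / tw, 2),  # body length to width
        "PW/PL": round(pw / pl, 2),  # pronotum width to length
        "EL/EW": round(el / tw, 2),  # elytral length to width (TW = EW)
    }

print(ratios(tl=4.1, tw=3.2, pl=0.9, pw=2.0, el=3.3))
```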
Morphological characters were photographed with digital cameras (AxioCam HRc and Coolsnap-Procf & CRI Micro*Color), connected to the dissecting microscope. The software AxioVision Rel. 4.8 and Image-Pro Plus 5.1 were used to capture images from both cameras, and photos were cleaned up and laid out in plates with Adobe Photoshop CS 8.0.
Coccinellidae morphological terms follow Ślipiński (2007) and Ślipiński and Tomaszewska (2010). Type specimens designated in the present paper are deposited at SCAU (Department of Entomology, South China Agricultural University, Guangzhou, China).
Afidentula is also similar to Afissa Dieke (=Afissula Kapur) in general appearance, but it can be separated by having the antennae distinctly shorter than the width of the head, with at least antennomeres 7 and 8 subquadrate (in Afissa the antennae are longer than the width of the head and have antennomeres 3-8 elongate), and the tibiae without apical spurs (tibial spurs present in Afissa).
A monographic revision of all Epilachnini genera based on the results of the phylogenetic analysis is in preparation (Tomaszewska and Szawaryn, in prep.); richly illustrated, detailed descriptions of all genera will be provided there.
Afidentula siamensis (Dieke, 1947), comb. n.
Figures 2, 5
Afissa siamensis Dieke, 1947: 127. Afidenta siamensis: Pang and Mao 1979: 119; Cao 1992: 221; Ren et al. 2009: 250. Diagnosis. This species is most similar to A. dentata and A. stephensi (known from India and Pakistan) but can be distinguished from both by having the pronotum with two large black oval spots, the apex of the penis with a small sharp process directed outwardly (Fig. 2a-c, l), and the apex of the penis guide curved outwardly (Fig. 2m-n). Body short oval, dorsum strongly convex, densely pubescent (Fig. 2a-c). Head yellowish brown. Pronotum yellowish brown except anterior corners yellowish white, with two large black, triangularly-oval spots. Scutellum yellowish brown. Elytra yellowish brown, with 14 rounded black spots, arranged as in Fig. 2a-c. Underside yellowish brown, except metaventrite and middle area of abdomen black. Epipleura and legs yellow.
Male genitalia. Penis short and stout, strongly curved at base, apex with a small and sharp process directed inwardly, capsule inconspicuous (Fig. 2l). Tegmen stout (Fig. 2m-n); penis guide in lateral view widest at base and narrowing to apex, strongly curved outwardly at apical 1/4, apex pointed (Fig. 2m); parameres slender, distinctly shorter than penis guide (Fig. 2m); penis guide in ventral view flattened and asymmetrical at apex, lateral margins almost parallel, apex blunt (Fig. 2n).
Specimens examined. Holotype. Nan, Siam, Jan. 27/28, Cockerell/ Type No. 57138 USNM/ Afissa siamensis Dieke, holotype. CHINA, Yunnan Prov.: 1 male, Jiluoshan, Xishuangbanna National Natural Reserve, Mengla County, 6.v.2009, Wang XM et al. leg.; 1 female, Lafu, Menglian County, 1130 m, 7.v.2008; Guizhou Prov.: 3 males, Dadugang, Badu Town, Ceheng County, 15.x.2006, Wang XM leg. Distribution. China: Guizhou, Yunnan; Thailand. Remark. Pang and Mao (1979) transferred Afissa siamensis Dieke into Afidenta without any explanation. However, a detailed examination of A. siamensis and Afidenta misera left no doubt that they do not belong to the same genus, and that the diagnostic characters of A. siamensis match Afidentula. Thus, this species is formally transferred to the genus Afidentula. Diagnosis. This species is most similar to A. siamensis in general appearance and colouration, e.g., in having two shared maculae on the elytra along the suture (anteriorly and medially), but can be distinguished from the latter by having the pronotum with a large black spot which covers almost the entire surface of the pronotum, leaving only the lateral and anterior margins brown (Fig. 3a-d), and the apex of the penis with two tooth-shaped appendices directed inwardly (Fig. 3f-g). In A. siamensis, the pronotum has two large black spots, and the apex of the penis has a small and sharp process directed outwardly (Fig. 2a-c, l). Body short oval, dorsum strongly convex, densely pubescent (Figs 3a-d). Head yellowish brown. Pronotum mostly black, with only the lateral and anterior margins yellowish brown (Fig. 3c). Scutellum yellowish brown. Elytra yellowish brown, with 14 rounded black spots arranged as in Figure 3d; spots may connect to each other, forming transverse bands (Fig. 3a, b). Underside yellowish brown, except meso-, metaventrite and middle area of abdomen dark brown. Epipleura yellowish brown, except areas close to the meso- and metaventrite dark brown. Legs yellow.
Male genitalia. Penis stout, strongly curved, apex simple and pointed, capsule with an expanded outer arm and a small inner one (Fig. 4e). Tegmen stout (Fig. 4f); penis guide in lateral view subparallel along 4/5 of its length and hook-like at apex; apex curved outwardly; parameres extremely slender, distinctly shorter than penis guide. Distribution. China (Yunnan). Etymology. The specific epithet is derived from Jinping County, China, the type locality of this ladybird. | 2016-05-04T20:20:58.661Z | 2015-06-08T00:00:00.000 | {
"year": 2015,
"sha1": "5323453349a85170bfb22f4aaa4f276dccffa433",
"oa_license": "CCBY",
"oa_url": "https://zookeys.pensoft.net/article/5844/download/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5323453349a85170bfb22f4aaa4f276dccffa433",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
256561594 | pes2o/s2orc | v3-fos-license | Symptom control and health‐related quality of life in allergic rhinitis with and without comorbid asthma: A multicentre European study
Abstract Background Allergic rhinitis (AR) is a major non-communicable disease that affects the health-related quality of life (HRQoL) of patients. However, data on HRQoL and symptom control in AR patients with comorbid asthma (AR + asthma) are lacking. Methods In this multicentre, cross-sectional study, patients with AR were screened and administered questionnaires on demographic characteristics and health conditions (symptoms/diagnosis of AR and asthma, disease severity level, and allergic conditions). HRQoL was assessed using a modified version of the RHINASTHMA questionnaire (30, 'not at all bothered', to 150, 'very much bothered') and symptom control was evaluated by a modified version of the Control of Allergic Rhinitis/Asthma Test (CARAT) (0, 'no control', to 30, 'very high control'). Results Out of 643 patients with AR, 500 (78%) had asthma as a comorbidity, and 54% had moderate-severe intermittent AR, followed by moderate-severe persistent AR (34%). Compared to the patients with AR alone, patients with AR + asthma had significantly higher RHINASTHMA scores (e.g., median RHINASTHMA-total score 48.5 vs. 84, respectively) and a significantly lower CARAT score (median CARAT-total score 23 vs. 16.5, respectively). Upon stratifying asthma based on severity, AR patients with severe persistent asthma had worse HRQoL and control than those with mild persistent asthma. The association was significantly stronger among non-obese participants than among obese ones for the RHINASTHMA-upper symptoms score, but not for CARAT. Conclusions Our observation of poorer HRQoL and symptom control in AR patients with comorbid asthma supports the importance of a comprehensive approach for the management of AR in case of a comorbid allergic condition.
| INTRODUCTION
Allergic rhinitis (AR) is a type-2 chronic inflammatory disease affecting the nasal mucosa and characterized by nasal symptoms such as sneezing, rhinorrhoea (nasal discharge), pruritus, and nasal congestion. [1][2][3] It is one of the most common non-communicable chronic diseases in the world, affecting over 400 million people of all ages, particularly the paediatric population. [1][2][3][4][5][6] While the prevalence of physician-diagnosed AR in the United States has been observed to be as high as 15%, and as high as 30% based on self-reported nasal symptoms, 7,8 the prevalence was up to 50% in many European countries. 9 According to the Allergic Rhinitis and its Impact on Asthma (ARIA) and the Global Alliance against Chronic Respiratory Diseases (GARD) statements, severe, refractory, or mixed forms of AR are increasing significantly across the globe and have contributed substantially to the socio-economic burden of the disease. [10][11][12] Allergic rhinitis often coexists with other conditions, such as atopic dermatitis, rhinosinusitis, rhino-conjunctivitis, and particularly asthma, a coherent feature often referred to as 'the atopic march' due to common systemic inflammatory processes. 2,4 Some 40%-50% of patients with AR also have asthma, whereas the prevalence of AR as a comorbidity in asthmatic patients is even higher, that is, 70%-90%. 13 Several reports have described that patients suffering from AR show a poorer quality of life (QoL), being affected by impaired sleep patterns, increased fatigue, depression, risk of driving accidents, and altered physical and social functioning. 8,[14][15][16] Often, a poor perception of AR symptoms is associated with poor control of AR. 17 However, studies assessing health-related quality of life (HRQoL) and symptom control in AR patients with concomitant asthma are lacking.
The Aerobiological Information Systems and allergic respiratory disease management (AIS Life +) study focused on this aspect, by using specifically designed and validated questionnaires on QoL and control for AR with comorbid asthma. In this multicentre study, using validated questionnaires, we aimed to assess the differences in symptom control and HRQoL between AR patients with or without comorbid asthma.
| Study design and participants
In the international multi-centre (Austria, France, and Italy) cross-sectional AIS Life + study, conducted between 2013 and 2014, we enrolled participants suffering from nasal allergy. A convenience sample of individuals with an active condition of pollen-induced AR was selected from pre-existing epidemiological study databases or through web advertisement (Pisa, Italy), clinics of general practitioners (Paris, France), or public health databases and pulmonary clinics (Vienna, Austria) and invited to participate in this epidemiological survey. All potential participants were administered a screening questionnaire through a telephone interview to check whether they were eligible for the study. We included participants who (1) were adults (≥18 years, of either sex) and (2) had an active condition of pollen-induced AR. Established criteria and ARIA (2008) 6 were used to classify asthma and AR according to severity. The Control of Allergic Rhinitis/Asthma Test (CARAT) is available in several languages, including those of the participating countries.
| Statistical analyses
Data were described as frequency (%), mean (standard deviation [SD]), or median (interquartile range [IQR]) for categorical, continuous, and ordinal variables, respectively. To test the association between the QoL and control scores (RHINASTHMA and CARAT, Total and subdomains) and AR + asthma (independent variable), we first used a bivariate analysis with the Wilcoxon rank-sum test. Then, we constructed univariable (unadjusted) and multivariable (adjusted) regression models of the HRQoL and control scores on the independent variable using a mixed-effects Poisson regression model. As potential confounders, we tested fixed factors (age, sex, BMI, smoking status, exposure to smoke, education, ARIA grade, sensitivity to allergens, and drugs taken in the last 12 months) and a random factor (the country). To include confounders in the regression models, we used an a priori evidence criterion, that is, covariates were considered confounders if they had been consistently reported as such in previous literature. However, confounders were retained in the model if they modified the estimates of the remaining variables by more than 10%. We checked the collinearity of the confounders using the variance inflation factor (VIF). The parsimony of the models was confirmed by the Akaike information criterion.
We also performed two secondary analyses. Firstly, we tested whether there was any effect modification by obesity of the association between AR + asthma and the HRQoL and control scores. Secondly, we performed meta-analyses to determine whether there was any heterogeneity in the HRQoL and control (Total) scores between the participating countries. All analyses were conducted using a complete-case approach in Stata v.16 (StataCorp, College Station, TX, USA), and a p-value <0.05 was considered statistically significant.
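For readers who want to reproduce the modeling outside Stata, the sketch below shows a rough Python analogue of the paper's adjusted model: a Poisson regression of a score on AR + asthma status and the fixed covariates, with country as a random intercept. The data file and column names are hypothetical, and statsmodels' variational-Bayes estimator only approximates the maximum-likelihood mixed-effects Poisson model fitted in Stata.

```python
# Rough Python analogue of the paper's mixed-effects Poisson model.
# Column names are hypothetical; 'country' enters as a random intercept.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

df = pd.read_csv("ais_life_plus.csv")  # hypothetical data file

model = PoissonBayesMixedGLM.from_formula(
    "rhin_total ~ ar_asthma + age + sex + bmi + smoker + aria_grade",
    {"country": "0 + C(country)"},     # variance component: country intercept
    df,
)
result = model.fit_vb()                # variational-Bayes approximation
print(result.summary())
```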
| RESULTS
The demographic and clinical characteristics of all the participants, stratified by country, are presented in Table 1. Of all participants, nearly 40% were males with a mean age of 44 (standard deviation, SD: 14) years, 15% of the participants were obese, 47% were smokers and nearly 33% reported exposure to smoke, 78% of the participants had asthma as comorbidity, 54% had moderate-severe intermittent AR and 34% had moderate-severe persistent AR. As for allergic sensitization, pollens were the most prevalent allergen (89%) among the participants, followed by house dust mites (57%).
Concerning the HRQoL parameters, the participants had a median (IQR) RHIN-Total score of 76 (53, 91) and a CARAT-Total score of 18 (14, 22). In the bivariate analysis, we found that participants with both AR and asthma had significantly higher RHINASTHMA (Total and subdomain) scores than the participants with AR alone (Figure 1). Moreover, CARAT (Total and subdomain) scores were significantly lower in AR with comorbid asthma than in AR alone (Figure 2).
In the multivariable analysis, we observed that, compared to AR alone, AR with comorbid asthma was significantly associated with higher RHINASTHMA Total and subdomain scores (Figure 3 and Supplementary Table 1). We did not find any multicollinearity between the covariates (VIF < 3).
We observed poorer control of symptoms in AR patients with asthma comorbidity than in patients with AR alone (β for CARAT-Total score; Figure 4, Table 4).
However, the overall estimates from the meta-analyses for the association between AR + asthma and the RHIN-Total and CARAT-Total scores were similar to the ones reported in the main analysis.
| DISCUSSION
In our study, we found significantly worse QoL (RHINASTHMA Total and subdomain scores) and symptom control (CARAT Total and subdomain scores) in patients with AR + asthma than in patients with AR alone. We also found that the association was significantly stronger among non-obese participants compared to obese ones when assessed through the RHIN-Upper symptoms score, but not with CARAT.
We also observed country-specific variations in the RHINASTHMA and CARAT Total scores. Although one previous study compared the individual/social burden of disease between asthmatics and asthmatics with concomitant AR, unlike ours, that study did not compare the difference in disease control and HRQoL between the two groups of patients. 25 It is well-known that several triggers such as seasonal meteorological changes, pollen season, air pollution, or even occupational exposures may lead to poor QoL of asthmatic patients with or without AR. 8,[26][27][28] It has also been observed that AR patients are often reported to have poor control over their symptoms if persistent comorbid asthma is present. [29][30][31][32] Although no direct comparative study on the control and HRQoL of AR and AR with asthma has been reported yet, our findings reciprocate the previous results well.
FIGURE 2. Differences in CARAT-Total and subdomain scores between patients with allergic rhinitis (AR) alone and AR + asthma. Data presented as median (solid line) and interquartile range (IQR) (dashed line) unless otherwise stated. P-values were calculated from the Wilcoxon rank-sum test.
FIGURE 3. Adjusted association between AR + asthma and RHINASTHMA-Total and subdomain scores. Data presented as regression coefficient (β) (symbol) and 95% confidence interval (CI) (horizontal bar) unless otherwise stated. Models were adjusted for age, sex, body mass index (BMI), smoking status, exposure to smoke, education, Allergic Rhinitis and its Impact on Asthma (ARIA) grade, sensitivity to allergens, and drugs taken in the last 12 months as fixed factors, and the country as a random factor.
FIGURE 4. Adjusted association between AR + asthma and CARAT-Total and subdomain scores. Data presented as regression coefficient (β) (symbol) and 95% confidence interval (CI) (horizontal bar) unless otherwise stated. Models were adjusted for age, sex, body mass index (BMI), smoking status, exposure to smoke, education, Allergic Rhinitis and its Impact on Asthma (ARIA) grade, sensitivity to allergens, and drugs taken in the last 12 months as fixed factors, and the country as a random factor.
FIGURE 5. Meta-analysis results of the association between AR + asthma and (A) RHIN-Total score and (B) CARAT-Total score, stratified by country. Models were adjusted for sex, age, smoking status, exposure to smoke, education, ARIA grade, sensitivity to allergens, and drugs taken in the last 12 months as fixed factors. I-squared: variation in estimated effect attributable to heterogeneity.
Asthma and AR share eight common genes (CLC, EMR4P, IL5RA, FRRS1, HRH4, SLC29A1, SIGLEC8, IL1RL1) that are presumed to explain the link underlying this multimorbidity. 33 They also share common risk factors, such as an atopic genetic background (for the allergic endotypes), environmental exposures (allergens, moulds, indoor and outdoor air pollution, some respiratory viruses, etc.), type of occupation, and active tobacco smoking.
We found that the association with worse RHINASTHMA-Upper symptoms scores was stronger in non-obese patients with AR + asthma than in obese ones. 36 Our result could also be influenced by other factors, such as physical activity or environmental conditions; however, we could not confirm them in our study.
We found significant country-wise heterogeneity in the RHINASTHMA and CARAT-Total scores between AR alone and AR + asthma. This could be explained by a higher number of AR + asthma patients in France than in the two other countries.
Another reason for such heterogeneity could be a significant variation in allergen sensitivity between the countries, as previously reported in studies describing significant differences in aeroallergens and allergies between European countries. [37][38][39] Air pollution is another important factor that can significantly affect HRQoL and symptom control in allergic patients, and a recent meta-analysis suggested that there is significant variability in air pollution between different European countries that contributes differently to the risk of AR. 40 However, studying air pollution was beyond the scope of our current study. Nevertheless, this variation in HRQoL and control could also be influenced by other factors, such as the co-occurrence of food allergies and other environmental conditions, which we could not assess in this current study.
Our findings add important clinical knowledge to the existing strategies for the management of AR with concomitant asthma.
Although AR and asthma are two different diseases with distinct clinical features, when AR coexists with asthma, either condition is often overlooked 32,41 due to the lack of a combined tool for monitoring the control and HRQoL of both diseases at the same time. Despite the well-established ARIA and GARD guidelines for a new management protocol addressing AR and asthma together, 10,12,[42][43][44][45] reports adopting these guidelines in the management of AR with persistent asthma are still lacking. Our findings will help guide practitioners to use the appropriate assessment tools while treating such patients.
Our study recruited patients from three European countries which have distinct geographical, climatic, and aerobiological conditions. Moreover, we recorded sensitivity data for a wide variety of indoor and outdoor allergens, which enabled us to observe the distribution of those allergens across the participating countries. Our meta-analytic approach to assessing country-wise variation in HRQoL and control provides a novel understanding of these divergent populations and their disease conditions. Our findings underline the impact of respiratory hypersensitivity conditions on the QoL of patients and call for prevention and public health strategies to diminish the burden of these conditions. Currently, there are effective treatments for AR and asthma, several risk factors are known (e.g., allergies, rhinitis, tobacco smoke), and tools to control the disease have been developed.
| CONCLUSION
In summary, using combined assessment tools for AR and asthma, we found that AR patients with comorbid asthma have a poorer quality of life and symptom control than those with AR alone. This finding highlights the importance of a comprehensive approach for the management of AR in case of a comorbid allergic condition for optimum care, and such strategies would be the gateway to reducing the global burden of these diseases. | 2023-02-04T16:02:55.390Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "688b6f8671ce60b740211c6e67dcdab3594fb229",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "311c7052938ee56a723c65bf5f446824e37deb0e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
256561594 | pes2o/s2orc | v3-fos-license | A quantitative method for measuring the transfection efficiency of CD19-directed chimeric antigen receptor in target cells
Background. Adoptive cell therapy (ACT) based on chimeric antigen receptors (CARs) expressed on the surface of T cells shows remarkable clinical outcomes, particularly for B-cell malignancies. However, toxicity and side effects of CD19-redirected CAR T cells have been observed concurrently in most cases due to cytokine release and tumor cell lysis. Therefore, strictly controlling the amount of valid T cells re-transfused to patients seems to be an important step in reducing the toxicity and side effects of CAR T cells. Transfection efficiency via lentiviral particles varies widely between cases.
Introduction
Chimeric antigen receptors (CARs) were engineered by Gross et al., 1 who first generated and expressed a chimeric T-cell receptor (TCR) composed of the TCR constant domains and the variable domains of an antibody; these chimeric receptors are non-MHC-restricted. A specific single-chain fragment variable (scFv) from an anti-tumor-associated antigen (TAA) antibody was fused to T-cell activation-related domains and expressed on the cell membrane. 2 As a result, the T cells display tumor-targeted cytotoxic activity and the ability to proliferate sustainably. Subsequently, a CD19-redirected CAR T cell targeted against B-cell malignancies was developed and clinically tested, and it demonstrated remarkable effectiveness in both children and adult patients. 3,4 Chimeric antigen receptor T-cell therapy provides a novel treatment choice for blood cancer patients, particularly for relapsed and refractory leukemia.
CD19 is an antigen specifically expressed in the B-lymphocyte lineage; its upregulated expression is maintained from early B-lineage cells through mature B-cell differentiation and is downregulated only during terminal differentiation. CD19 has therefore become a chimeric immunotarget for malignant B cells, including acute lymphoblastic leukemia (ALL) and non-Hodgkin's lymphomas (NHL). 5 In addition, another publication revealed an essential role for CD19 in promoting early B-cell activation events in response to membrane-bound ligand stimulation. 6-9 Furthermore, a recently modified CAR based on the cancer-associated Tn glycoform has revealed the effect of CAR T cells on solid cancer. 10 In clinical studies, T cells were collected from a patient by leukapheresis and then isolated and activated with antibodies, 11 which induced T-cell activation and proliferation, thereby making the cells more receptive to viral transduction. These transduced cells can be expanded to the required quantity. Quality control and assurance assays are necessary before the prepared T cells can be re-transfused to patients. 12 However, there are many challenges and considerations. For instance, if the dose of infused CAR T cells is insufficient, the expected result may not be achieved; on the other hand, an over-infused dose may cause toxicities mediated by CAR T cells, such as cytokine release syndrome (CRS). 13 An effective method for examining and quantifying the transduced cells could help to accurately control the amount of infused CAR T cells administered to the patient, and could therefore lead to a more effective treatment. Flow cytometry (FCM) is a good choice, but a new set of detection apparatus must be used for each individual patient to protect against cross-infection. The cost of the FCM detection method can be prohibitive.
Green fluorescent protein (GFP) has been a useful tool for intracellular studies since it was discovered, 14 and it is used as a marker for gene expression. 15 This non-invasive, visualizable protein can serve as a marker for quantifying the number of infected T cells of interest when administered clinically 16 or can be used to determine the ratio of infected cells to total cells. In addition, an antibiotic-resistance gene used in a recombinant vector is another method of screening for infected (valid) cells by removing the uninfected cells. However, both approaches employ exogenous genes/proteins, which are immunogenic molecules and pose a potential hazard to patients in clinical use. In this study, we fused GFP to CD19 as a fusion protein, expressed it in Escherichia coli, and purified it as a probe, so as to examine, based on antigen-antibody interactions, the infection efficiency of the CD19-directed chimeric antigen receptor on the surface of T cells. Positive cells are marked with the GFP-CD19 fusion protein, owing to the CD19-directed CAR being expressed on the surface of the infected cells. Valid CAR T cells can be quantified by counting the number of positive cells.
Plasmid construction
The target DNA was isolated through electrophoresis in a 1.0% agarose gel. In brief, the gel pieces with the DNA fragments were excised and purified with a Gel Extraction Kit (Lifefeng, Shanghai, China). These fragments were inserted into the pGEM T-Easy plasmid (Promega, Madison, USA) using T/A cloning. The ligation products were transformed into Top10 competent E. coli cells. Following a heat shock, the transformed cells were recovered in super optimal broth with catabolite repression (SOC medium: 2% tryptone, 0.5% yeast extract, 0.05% sodium chloride (NaCl), 2.5 mM potassium chloride (KCl), 10 mM magnesium chloride (MgCl2), and 20 mM glucose) at 37°C for 45 min. Recombinant colonies were cultured on agar plates (1% tryptone, 0.5% yeast extract, 1% NaCl, and 1.5% agar). Notably, the ligation product should be incubated at 70°C for 5 min to inactivate the ligase before the transformation. The recombinant plasmids were extracted from the bacteria and identified using endonuclease analysis: the CD19-T vector recombinant plasmid was identified by BamHI and HindIII, and the GFP-T vector by NdeI and BamHI. The inserted sequences were verified through DNA sequencing (Sangon Biotech, Shanghai, China).
Subsequently, the verified CD19-T vector recombinant plasmid and the pCold TF plasmid, which contains the cspA promoter (a cold-shock promoter used to induce expression at low temperatures) and an ampicillin-resistance gene, were each digested with BamHI and HindIII, and electrophoresis in a 1.0% agarose gel was performed. The target CD19 gene fragment and the linearized pCold TF plasmid were purified from the gel and ligated with T4 DNA ligase. The ligation products were transformed into Top10 competent E. coli cells, which were then cultured in SOC medium and on agar plates with ampicillin. The reconstructed TF plasmid was extracted from the bacteria and identified with BamHI and HindIII. Finally, the GFP-T vector recombinant plasmid and this CD19-pCold TF plasmid were each digested with NdeI and BamHI. The GFP gene fragment and the CD19-pCold TF plasmid vector were purified, and the GFP-CD19-pCold TF recombinant plasmid was constructed through ligation and identified with NdeI and HindIII.
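The fragment sizes expected from such double digests can be checked in silico before running the gel. The sketch below is a minimal, self-contained illustration of that check on a circular plasmid: the plasmid sequence is a toy placeholder, the recognition sites used (BamHI, GGATCC; HindIII, AAGCTT) are the standard ones, and site start positions stand in for the exact scission points within the sites.

```python
# In-silico double digest of a circular plasmid: find recognition sites
# and derive expected fragment sizes. Site start positions are used;
# actual cut offsets within each site are ignored for simplicity.

def cut_positions(seq: str, site: str) -> list[int]:
    """All occurrences of a recognition site on a circular sequence."""
    doubled = seq + seq[:len(site) - 1]  # wrap around the origin
    return sorted({i for i in range(len(seq)) if doubled.startswith(site, i)})

def digest_fragments(seq: str, sites: list[str]) -> list[int]:
    """Fragment sizes after digesting a circular sequence with all enzymes."""
    cuts = sorted({p for s in sites for p in cut_positions(seq, s)})
    if not cuts:
        return [len(seq)]  # uncut circular plasmid
    sizes = []
    for i, c in enumerate(cuts):
        nxt = cuts[(i + 1) % len(cuts)]
        sizes.append((nxt - c) % len(seq) or len(seq))  # single cut -> full length
    return sizes

plasmid = "GGATCC" + "A" * 1665 + "AAGCTT" + "C" * 500  # toy circular map
print(digest_fragments(plasmid, ["GGATCC", "AAGCTT"]))  # -> [1671, 506]
```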
Construction of CD19-CAR lentiviral vector and packaging of lentivirus
The CD19-CAR lentiviral expression plasmid was constructed; the insert included the Kozak sequence, the IL-2 signal peptide, the scFv antibody, the Fc hinge, the CD28 transmembrane domain, the 4-1BB signaling domain, and CD3 zeta. The scFv antibody fragment was a gift from our colleague Dr. Jianghai Liu (Sichuan University, Chengdu, China), and the rest of the insert was a gift from Dr. Guo's lab (University of Saskatchewan, Saskatoon, Canada). The whole insert was cloned into the lentiviral expression vector pCDH-EF1 (no fluorescence). The recombinant vector was identified with XbaI and SalI and verified through sequencing. The lentiviral particles were purchased from Shanghai GenePharma Co. Ltd (Shanghai, China). The final virus titer was 1 × 10⁸ TU/mL.
Protein GFP-CD19 expression
The GFP-CD19-pCold TF recombinant plasmid was transformed into BL21(DE3) competent E. coli bacteria for expression. 17 After overnight incubation on agar plates, single colonies were picked and placed into lysogeny broth (LB) with ampicillin to grow overnight at 37°C in a shaker.
This culture was transferred into fresh LB (1:200) with ampicillin in a flask and grown for about 2 h until its OD600 value was 0.5. The culture was cooled at room temperature. When the temperature of the broth dropped to 15°C, isopropyl-β-D-thiogalactopyranoside (IPTG; Solarbio, Beijing, China) was added to the culture to a final concentration of 0.5 mM, and the bacteria were cultured at 15°C for an additional 4 h. Cells were harvested by centrifugation at 4000 g at room temperature for 10 min; they appeared green under visible light. The harvested cells were stored at −80°C or used immediately.
Protein purification
The cell pellet harvested from 100 mL of the culture described above was re-suspended in 25 mL of phosphate-buffered saline (PBS; 137 mM NaCl, 2.7 mM KCl, 10 mM disodium phosphate (Na2HPO4), and 2 mM monopotassium phosphate (KH2PO4), pH 7.0) with 1 mM phenylmethylsulfonyl fluoride (PMSF) and 0.5% Triton X-100 on ice. The sample was frozen at −80°C for 30 min and then thawed. It was sonicated at 85 W in cycles of 6 s of sonication followed by 7 s on ice, in order to shear the DNA and lower the viscosity; sonication and icing were repeated 5-7 times for all samples until the solution was clear. The cell debris was removed by centrifugation at 12,000 g for 10 min at 4°C, and the supernatant was transferred to a new tube. Then, sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) of the supernatant was performed to assess the solubility of the protein. The rest of the supernatant was stored at −20°C or on ice for the following step.
For Ni-NTA-agarose purification, 4 mL of Ni-NTA resin (beads) was pipetted into a 10 mL purification column. The resin was washed with 5 column volumes (CV) of deionized water and then 5 CV of PBS (pH 7.0) for equilibration. Twenty-five milliliters of the supernatant was slowly added into the column for binding. The protein in the column was washed successively with 5 CV of PBS (pH 7.0) and 3 CV of wash buffer (PBS, pH 7.0, containing 20 mM imidazole), and eluted with PBS (pH 7.0) containing 250 mM imidazole into a new tube. The resin was again washed with PBS (pH 7.0) containing 800 mM imidazole (3 CV). Finally, the column was washed with PBS (pH 7.0) 5-8 times and with sterile deionized water 3-5 times, washed again with 20 mL of 20% ethanol, then filled with 20% ethanol and stored at 4°C. The eluted protein was confirmed by SDS-PAGE analysis. The 250 mM imidazole eluate was centrifuged at 4500 g for 50 min at 4°C in an ultrafiltration centrifuge tube (Merck Millipore, Billerica, USA), in order to remove small-molecular-weight substances and to concentrate the purified protein. This step was repeated, and the protein was concentrated to 2.0 μg/mL.
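The wash and elution steps above use three imidazole concentrations (20, 250, and 800 mM) in PBS. As a worked example of the underlying dilution arithmetic (C1 x V1 = C2 x V2), the sketch below computes the stock and PBS volumes needed for 3 CV (12 mL for a 4 mL column) of each buffer; the 2 M imidazole stock is an assumption, as the paper does not state the stock concentration.

```python
# Buffer arithmetic for the imidazole steps above (C1*V1 = C2*V2).
# A 2 M imidazole stock is assumed here purely for illustration.

def stock_volume_ml(c_stock_mM: float, c_final_mM: float, v_final_ml: float) -> float:
    """Volume of stock needed to reach c_final in v_final."""
    return c_final_mM * v_final_ml / c_stock_mM

stock_mM = 2000.0  # assumed 2 M stock
for c_final in (20, 250, 800):            # mM, as in the protocol
    v_final = 12.0                        # 3 CV of a 4 mL column, in mL
    v = stock_volume_ml(stock_mM, c_final, v_final)
    print(f"{c_final} mM x {v_final} mL: {v:.2f} mL stock + {v_final - v:.2f} mL PBS")
```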
Infection of 3 cell lines
The lentiviral vector was packaged for CD19-CAR expression, and the titer was 1 × 10⁸ TU/mL (GenePharma). First, HEK 293 cells (5 × 10⁴ cells/mL) were seeded in 6 wells (500 μL each) of a 24-well plate. After the seeded cells had been cultured for 12 h, the media were replaced with the infection mix. To each of three wells, 490 μL of complete medium, 10 μL of lentivirus, and 0.5 μL of polybrene (working concentration of 5 μg/mL) were added. To the other three control wells, only 500 μL of complete medium was added. Daudi cells and Jurkat cells were infected through the same process as the HEK 293 cells, but the handling of these suspension cells differed. The cells were cultured at 37°C in 5% CO2 for 72 h. Then, 5 μL of the GFP-CD19 fusion protein (0.2-2.0 μg/mL) was added to each culture. Subsequently, the cells were cultured under the same conditions for another 6 h, the media were replaced by PBS, and the cells were observed with a fluorescence microscope and quantified by cell counting in each chamber.
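From the numbers stated above, one can derive the approximate multiplicity of infection (MOI), although the paper does not report it explicitly. The sketch below simply divides the transducing units added per well by the number of cells seeded; since the cells grow for 12 h before infection, the effective MOI at the time of infection would be somewhat lower.

```python
# Worked numbers for the infection mix described above (MOI not reported
# in the paper; derived here from the stated seeding density and titer).

cells_per_ml = 5e4        # seeding density
well_volume_ml = 0.5      # 500 uL per well
titer_tu_per_ml = 1e8     # lentiviral titer
virus_volume_ml = 0.010   # 10 uL of virus per well

cells = cells_per_ml * well_volume_ml   # 2.5e4 cells seeded
tu = titer_tu_per_ml * virus_volume_ml  # 1.0e6 transducing units
print(f"MOI = {tu / cells:.0f}")        # -> MOI = 40 (at seeding density)
```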
Plasmid construction
The CD19 gene (coding sequence; 1671 bp) and the GFP gene (717 bp) were obtained through PCR from commercial cDNA and the pET28b-GFP plasmid, respectively, and were identified by 1% agarose gel electrophoresis, as shown in Fig. 1. Then, the target genes were ligated into the T vector. The CD19 gene was identified by BamHI and HindIII; a 1671 bp fragment can be seen in Fig. 2A. Similarly, the GFP gene was identified by NdeI and BamHI; a 717 bp gene fragment is shown in Fig. 2B. Furthermore, the sequencing results of the constructed T vector plasmids were also correct. The CD19 gene was excised and then ligated into the pCold TF vector. The recombinant plasmid was identified by BamHI and HindIII; a 1671 bp gene fragment and a 5769 bp gene fragment can be seen in Fig. 3A. The GFP gene was ligated into this recombinant plasmid and identified by NdeI and BamHI. The 2388 bp fragment of the CD19-GFP gene and the 5769 bp fragment of the linearized pCold TF plasmid visible in Fig. 3B indicate that the recombinant plasmid was constructed successfully.
Expression and purification of the CD19-GFP fusion protein
The CD19-GFP fusion protein, induced by IPTG, was expressed in BL21 E. coli bacteria and collected. It was purified by Ni-NTA-agarose chromatography and concentrated with protein concentrators. The expression and purification results were confirmed by SDS-PAGE (Fig. 4).
Quantitative analysis of infection efficiency
The CD19-CAR was transduced into HEK 293 cells with the lentiviral vector. No green or red fluorescent particles were visible in the cells infected by the lentivirus alone, as the vector carries no fluorescent marker. After addition of the probe, expression of GFP can be detected under a microscope owing to the specific binding of the GFP-CD19 fusion protein to CAR-expressing cells. The captured images are shown in Fig. 5.
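The efficiency readout implied here is simply the share of GFP-labelled (CAR-positive) cells among all cells counted across the chambers. A minimal sketch of that calculation is given below; the per-chamber counts are hypothetical.

```python
# Minimal sketch of the efficiency calculation implied above: the share of
# GFP-labelled (CAR-positive) cells among all counted cells, pooled over
# counting chambers. The counts are hypothetical.

def infection_efficiency(counts: list[tuple[int, int]]) -> float:
    """counts: (fluorescent, total) pairs, one per chamber/field."""
    fluorescent = sum(f for f, _ in counts)
    total = sum(t for _, t in counts)
    return 100.0 * fluorescent / total

chambers = [(34, 112), (41, 125), (29, 98)]  # (GFP-positive, total) cells
print(f"{infection_efficiency(chambers):.1f}% CAR-positive")  # ~31.0%
```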
Discussion
CD19 is expressed mainly on B-lineage cells, which makes it a specific antigen for immunotherapy. 18-22 Besides engineered T cells, CD19-redirected NK92 cells also show promising results against B-lineage leukemia. 23 In addition, CD19-redirected T memory stem cells represent a potential method for the treatment of B-lineage leukemia. 24 In the process of engineering therapeutic cells, vectors other than the lentiviral vector are also employed to mediate the expression of CAR. 25 Toxicity and the side effects of CD19-directed CAR T cells are increasingly drawing attention to this immunotherapy. The main mechanisms of the toxicity and side effects can be attributed to cytokine release, tumor cell lysis, B-cell aplasia, and macrophage-activation syndrome. 26,27 Although most mild toxicities and side effects are reversible, some severe toxicities need medical treatment. A low dose of engineered cells may not trigger an antitumor response, while an excessive dose of immunotherapeutic cells may lead to severe toxicity and side effects. Therefore, administering the appropriate amount of engineered T cells to patients may be the first step in controlling the occurrence of toxicity and side effects. It is widely accepted that a correct dosage of CAR T cells is required for each re-transfusion to patients, 12,19 so the infection efficiency and the number of valid CAR T cells must be accurately calculated before re-transfusion. Because infection efficiency varies, the number of valid CAR T cells cannot be estimated from the total number of infused cells. In order to track infection efficiency or to screen for valid infected cells by removing uninfected cells, a fluorescent protein gene and an antibiotic-resistance gene are commonly used in molecular manipulation. With the former method, valid cells can be counted and calculated before re-transfusion; with the latter, valid cells can be collected by removing uninfected cells. Both methods provide an approach by which the infusion dosage of CAR T cells, rather than the number of total cultured cells, can be accurately controlled.
Unfortunately, both proteins are derived from exogenous genes, and their products are immunogenic in recipients, 28 leading to impairment when these proteins are infused into patients along with CAR T cells. On the other hand, if flow cytometry is employed with an immunofluorescent antibody, it will increase the cost of treatment, because each sorting process requires a new set of sorting apparatus. In addition, the immunofluorescent antibody itself may become a new immunogen, creating further barriers to application.
A safe and simple approach is needed in order to quickly determine the infection efficiency and to calculate the number of valid engineered cells for the application. In this study, we fused CD19 to GFP, expressed the fusion protein in E. coli through a standard molecular biological process, and purified it. The fusion protein solubility was optimized at 15°C, and the protein was purified with Ni-NTA agarose beads. 29 After the target cells (including adherent and suspension cell lines) were infected with lentiviral particles, the purified fusion protein was directly added to the culture medium and incubated for 6 h. GFP could be directly observed under a microscope, and the infected cells (expressing CD19-CAR) could be distinguished from other cells (data not shown). The results became clearer when the culture medium (containing CD19-GFP) was removed by washing the cells with regular culture medium.
Because CD19-GFP (as an antigen) is recognized by the CAR expressed on the surface of the target cells, the fusion protein serves as a probe to visualize CAR-expressing cells. Non-specific binding of CD19-GFP to other proteins is negligible when a fluorescence microscope is used. The ratio of fluorescent cells to total cells can be calculated by counting the cells in the fluorescent fraction and the total number of cells. In clinical use, the number of valid cells administered to patients can thus be determined accurately. This method provides a novel approach to dosing CAR T cells in clinical practice using vectors free of the antibiotic-resistance gene and the fluorescence protein gene.
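To make the dosing arithmetic concrete, the following minimal Python sketch illustrates the calculation described above; the function names and all counts are hypothetical placeholders, not data from this study.

```python
# Minimal sketch: estimating infection efficiency from a CD19-GFP staining count
# and deriving the number of valid CAR T cells available for infusion.

def infection_efficiency(fluorescent_cells: int, total_cells: int) -> float:
    """Fraction of counted cells bound by the CD19-GFP probe (CAR-expressing)."""
    return fluorescent_cells / total_cells

def valid_car_t_cells(total_cultured: float, efficiency: float) -> float:
    """Valid (CAR-expressing) cells in the whole culture."""
    return total_cultured * efficiency

eff = infection_efficiency(fluorescent_cells=412, total_cells=1000)  # hypothetical count
dose = valid_car_t_cells(total_cultured=2.0e8, efficiency=eff)       # hypothetical culture
print(f"efficiency = {eff:.1%}, valid cells = {dose:.2e}")
```

In this way the infusion dose is expressed in valid CAR T cells rather than total cultured cells, which is the point of the probe-based counting.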
Fig. 1. Polymerase chain reaction (PCR) results of CD19. The two lanes contain the CD19 coding region (A); the GFP gene (B) is amplified, as shown in both lanes
Fig. 2. Identification of the CD19-T vector plasmid (A) by digestion with EcoRI (right lane). The GFP-T vector plasmid (B) was identified with EcoRI; the right lane is a positive clone. Both vectors were verified by sequencing
Fig. 5. GFP-expressing cells under a microscope (×100 magnification). Bright-field and fluorescence images of HEK 293 cells; control: HEK 293 cells without infection (A). Bright-field and fluorescence images of Daudi cells; control: Daudi cells without infection (B). Bright-field and fluorescence images of Jurkat cells; control: Jurkat cells without infection (C) | 2018-12-12T19:54:05.843Z | 2019-02-01T00:00:00.000 | {
"year": 2019,
"sha1": "43a4dfa9431617a17ff3f59b043d78a49977f72f",
"oa_license": "CCBY",
"oa_url": "http://www.advances.umed.wroc.pl/pdf/2019/28/2/159.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d1440f9b6ebe1d65fea69c490add2c8f1459f954",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
225078347 | pes2o/s2orc | v3-fos-license | PI3Kγ Regulatory Protein p84 Determines Mast Cell Sensitivity to Ras Inhibition—Moving Towards Cell Specific PI3K Targeting?
Mast cells are the major effector cells in immunoglobulin E (IgE)-mediated allergy. The high affinity IgE receptor FcεRI, as well as G protein-coupled receptors (GPCRs) on the mast cell surface signals to phosphoinositide 3-kinase γ (PI3Kγ) to initiate degranulation, cytokine release, and chemotaxis. PI3Kγ is therefore considered as a target for treatment of allergic disorders. However, leukocyte PI3Kγ is key to many functions in innate and adaptive immunity, and attenuation of host defense mechanisms is an expected adverse effect that complicates treatment of chronic illnesses. PI3Kγ operates as a p110γ/p84 or p110γ/p101 complex, where p110γ/p84 requires Ras activation. Here we investigated if modulation of Ras-isoprenylation could target PI3Kγ activity to attenuate PI3Kγ-dependent mast cell responses without impairment of macrophage functions. In murine bone marrow-derived mast cells, GPCR stimulation triggers activation of N-Ras and H-Ras isoforms, which is followed by the phosphorylation of protein kinase B (PKB/Akt) relayed through PI3Kγ. Although K-Ras is normally not activated in Ras wild-type cells, it is able to compensate for genetically deleted N- and H-Ras isoforms. Inhibition of Ras isoprenylation with farnesyltransferase inhibitor FTI-277 leads to a significant reduction of mast cell degranulation, cytokine production, and migration. Complementation experiments expressing PI3Kγ adaptor proteins p84 or p101 demonstrated a differential sensitivity towards Ras-inhibition depending on PI3Kγ complex composition. Mast cell responses are exclusively p84-dependent and were effectively controlled by FTI-277. Similar results were obtained when GTP-Ras was inactivated by overexpression of the GAP-domain of Neurofibromin-1 (NF-1). Unlike mast cells, macrophages express p84 and p101 but are p101-dominated and thus remain functional under treatment with FTI-277. Our work demonstrates that p101 and p84 have distinct physiological roles, and that Ras dependence of PI3Kγ signaling differs between cell types. FTI-277 reduces GPCR-activated PI3Kγ responses in p84-expressing but not p101-containing bone marrow derived cells. However, prenylation inhibitors have pleiotropic effects beyond Ras and non-tolerable side-effects that disfavor further clinical validation. Statins are, however, clinically well-established drugs that have previously been proposed to block mast cell degranulation by interference with protein prenylation. We show here that Simvastatin inhibits mast cell degranulation, but that this does not occur via Ras-PI3Kγ pathway alterations.
INTRODUCTION
Mast cells are key effector cells in the pathology of allergic disease and chronic inflammation and are thus target cells of novel therapeutic strategies (1). In sensitized mice and human patients, IgE binds to the high affinity IgE receptor (FcεRI) present on the surface of tissue mast cells (2). Multivalent antigen/IgE complexes cross-link FcεRI and trigger a protein tyrosine kinase cascade involving Src family kinase activation and Syk (spleen tyrosine kinase) translocation. The subsequent relay of signals through Bruton tyrosine kinase (Btk) to phospholipase Cγ (PLCγ) culminates in an extracellular calcium influx committing mast cells to release their histamine granule contents and initiate de novo synthesis of pro-inflammatory and immuno-modulatory mediators, including chemokines, cytokines, growth factors, vasoactive compounds, and more (3).
An important aspect in anaphylaxis is the recruitment of mast cell precursors to the tissue, which is also mediated by GPCRs engaging in PI3Kγ activation (6, 9).
Mice lacking functional PI3Kγ are thus resistant to IgE/antigen-induced anaphylaxis (4, 6), show a reduced IgE-mediated recruitment of mast cells to tissues (6), and display attenuated airway and pulmonary inflammation (10, 11), ventilator-induced lung injury (12) and allergic asthma (13). PI3Kγ therefore qualifies as a potential therapeutic target in allergic conditions. Furthermore, PI3Kγ is highly expressed in leukocytes of the myeloid and lymphoid lineage (14)(15)(16)(17) and is involved in the transduction of innate and adaptive immune responses. Leukocyte chemotaxis, release of inflammatory mediators, and activation of the NADPH oxidase to release reactive oxygen species (ROS) represent crucial host defense mechanisms that require G protein-coupled receptor (GPCR) engagement and activated PI3Kγ (4, 14-16, 18, 19). Early on, PI3Kγ inhibition with AS-605240 demonstrated protection against rheumatoid arthritis (20), pancreatitis (21), glomerulonephritis, and systemic lupus (22) in mouse models. Genetic and pharmacological targeting of PI3Kγ attenuates macrophage/foam cell activation and atherosclerosis and supports plaque stability (23)(24)(25). Genetic inactivation of PI3Kγ activity also attenuates heart failure during chronic pressure overload (26) and diet-induced obesity (27), partially reliant on kinase-independent functions of PI3Kγ as a scaffold protein for protein kinase A and phosphodiesterase 3B.
The flip side of the broad action of PI3Kγ inhibition in various animal disease models is the potential for associated adverse effects, including susceptibility to infections, as indicated by reduced neutrophil (14, 19), macrophage (14, 28, 29) and dendritic cell motility (17) in PI3Kγ null cells and mice. Moreover, PI3Kγ has recently been implicated in the anti-viral response against Influenza A infection (30, 31). The possibility of cell type-specific PI3Kγ targeting, allowing for alleviation of allergic inflammation without a general suppression of host immune defense, would therefore be of great value.
PI3Kγ acts as a heterodimer of a catalytic p110γ subunit and one of two possible adaptor proteins: p84 (also called p87 PIKAP) (5, 32) or p101 (33). Both adaptor proteins play a role in coupling GPCR signaling to PI3Kγ, but p101 and p84 appear to have discrete physiological functions (5, 34). Distinct pools of PtdIns(3,4,5)P3 at the plasma membrane emerging from the two PI3Kγ/adaptor subunit complexes display a differential sensitivity to cholesterol depletion and capacity to promote mast cell granule release (5). Adaptor-specific responses were also described in neutrophils (34, 35), where p101 played a key role in cell migration, while p84 was essential for ROS production upon chemoattractant stimulation. Moreover, the adaptor proteins are not equally distributed among hematopoietic cells. While lymphocytes express p101, mast cells express only the p84 adaptor subunit; neutrophils and macrophages contain both p101 and p84 adaptors (5, 32).
Finally, a further distinction between the adaptor subunits was revealed by analysis of the role of the small GTPase Ras in the activation of the PI3Kγ complexes. Whereas p101/p110γ is recruited and stimulated by the Gβγ subunits of heterotrimeric G proteins downstream of GPCRs and does not require Ras to be operational, Ras is indispensable for membrane recruitment and activation of the lipid kinase in the p84/p110γ complex (5, 36). This differential involvement of Ras opens new opportunities for targeted regulation of the two PI3Kγ complexes that could provide novel ways to specifically control distinct cell responses.
In the current study, we tested whether inhibition of Ras could attenuate mast cell activation due to its involvement in p84/p110γ complex-dependent cell responses, and assessed whether macrophages would be spared by Ras targeting.
Mice
Transgenic mouse strains lacking H-Ras (37), N-Ras (38) and p110γ (14) were previously described. Mice were backcrossed to a C57BL/6J background and housed according to the institutional guidelines. In all experiments, 8-12-week-old male and female animals were utilized. All animal experiments were carried out in accordance with the guidelines of the Swiss Federal Veterinary Office (SFVO) and the Cantonal Veterinary Office of Basel-Stadt (license number 2143).
The remaining lysate was incubated with 40 μl of 50% Glutathione-Sepharose 4B bead slurry in lysis buffer (GE Healthcare, 17-0756-01) for 2 h at 4°C. Beads were resuspended in 20 μl 2× sample buffer; denatured, pulled-down Ras protein was subjected to SDS-PAGE immunoblotting. The list of western blot antibodies is provided in the supplementary section (Table S3).
Recombinant p84 and p101, kindly provided by R. Williams, were used as standards to quantify expression levels of PI3Kγ adaptor proteins on western blots.
2 μg of total RNA was used for reverse transcription with M-MLV RTase and oligo dT primers. Quantitative polymerase chain reaction (qPCR) was performed on a StepOnePlus Real-Time PCR System (Applied Biosystems) with MESA Green qPCR MasterMix Plus for SYBR Assay (Eurogentec). qPCR results for Ras isoforms and farnesyltransferases (FTases) were normalized to GAPDH according to the formulas: % of GAPDH = 2^(−ΔCT) × 100, where ΔCT = CT(target) − CT(GAPDH). All primer sequences for qPCR are listed in the supplementary section (Table S1). Relative RNA expression was normalized to GAPDH as endogenous control and to WT unstimulated cells as reference sample.
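For illustration, the normalization formula above can be evaluated directly; the CT values in this sketch are hypothetical.

```python
# Sketch of the dCT normalization to GAPDH: % of GAPDH = 2**(-dCT) * 100,
# where dCT = CT(target) - CT(GAPDH).
def percent_of_gapdh(ct_target: float, ct_gapdh: float) -> float:
    d_ct = ct_target - ct_gapdh
    return 2 ** (-d_ct) * 100

print(percent_of_gapdh(ct_target=24.8, ct_gapdh=18.2))  # ~1.0% of GAPDH
```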
BMMC and HEK-293 Transfection
BMMCs were transfected with the Amaxa Nucleofector kit T (VCA-1002) using 14 μg of total plasmid per 10 × 10^6 cells. Medium was changed 5 h after transfection, and cells were cultured overnight in IMDM complete supplemented with IL-3 (2 ng/ml). Plasmids encoding p101, p84, p110γ, and NF1 were previously utilized in (5) and (36). Plasmid sequences of codon-optimized H-Ras, N-Ras, and K-Ras for transfection into human embryonic kidney 293 (HEK293) cells are provided in the supplementary section (Table S2). 24 h prior to transfection, HEK293 cells cultured in DMEM supplemented with 10% HI-FCS, 2 mM L-glutamine (Sigma, G7513) and 100 U/ml Penicillin/Streptomycin (Sigma, N109) were seeded on a 6 cm dish. The following day, 2.5 ml of medium was replaced before addition of a mixture of 3 μg total DNA and 6 μl JetPEI transfection solution (Polyplus, 101B-010) in 100 μl of 150 mM NaCl. 6 h post-transfection, we added fresh medium and cultured cells overnight until lysis for protein determination or imaging of Ras sub-cellular localization on a Leica DMI6000 microscope with a Photometrics CoolSnap HQ2 camera.
Statistics
All data are presented as mean ± standard error of the mean (SEM) of n ≥ 3 biological replicates from independent experiments. The exact number of individual experiments is stated in the figure legends. Student's t-test or one-way ANOVA with Bonferroni's post hoc test (GraphPad Prism) was used for calculation of p-values as indicated for each panel (ns: p > 0.05; *: p ≤ 0.05; **: p ≤ 0.01; ***: p ≤ 0.001; ****: p ≤ 0.0001).
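As a minimal sketch of these statistics (mean ± SEM, one-way ANOVA), using SciPy; the group values are invented and the Bonferroni adjustment is indicated only schematically.

```python
import numpy as np
from scipy import stats

groups = {
    "DMSO":    np.array([1.00, 0.92, 1.08, 0.97]),   # hypothetical normalized responses
    "FTI-277": np.array([0.55, 0.61, 0.49, 0.58]),
}
for name, x in groups.items():
    print(f"{name}: mean = {x.mean():.2f} +/- {stats.sem(x):.2f} (SEM, n = {len(x)})")

f_stat, p = stats.f_oneway(*groups.values())          # one-way ANOVA across groups
print(f"ANOVA: F = {f_stat:.2f}, p = {p:.4f}")
# With more than two groups, pairwise p-values would be Bonferroni-corrected,
# e.g. p_adj = min(1.0, p_raw * n_comparisons).
```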
N-Ras and H-Ras Are Activated Downstream of GPCRs in Mast Cells
First, we investigated which Ras isoforms would qualify for PI3Kγ signaling in mast cells. Three Ras genes encode four protein homologs: N-Ras, H-Ras, K-Ras4A, and K-Ras4B (38). We found similar expression of N-Ras, K-Ras4B, H-Ras, and R-Ras (Ras-related protein) at the mRNA and protein level in mast cells and macrophages (Figures 1A, B). In order to estimate the relative ratio between these isoforms, we expressed codon-optimized 3×-HA tagged N-, K-, and H-Ras in HEK293 cells and used transfected HEK293 cell lysates as standards for protein quantification (Figures 1C, D). In BMMCs, K-Ras protein was the most abundant isoform, with expression levels twice as high as N-Ras and four times as high as H-Ras. In macrophages, N- and K-Ras levels were equal, while H-Ras was four times less abundant.
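A hedged sketch of how such standard-based quantification can be computed: band intensities are first normalized to the matched 3×-HA-Ras standard to correct for antibody differences, then compared between isoforms. All densitometry values below are hypothetical and chosen only to reproduce the reported ratios.

```python
# Lysate band signals and matched HEK293 standard signals (arbitrary units).
band = {"N-Ras": 420.0, "K-Ras": 760.0, "H-Ras": 180.0}
std  = {"N-Ras": 400.0, "K-Ras": 360.0, "H-Ras": 340.0}

corrected = {iso: band[iso] / std[iso] for iso in band}  # antibody-corrected abundance
ref = corrected["N-Ras"]
for iso, value in corrected.items():
    print(f"{iso}: {value / ref:.1f}x relative to N-Ras")
```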
To determine which Ras isoform is activated downstream of GPCRs, we performed GTP-Ras pull-down assays using the Ras-binding domain (RBD) of Raf. The C5a receptor [C5a anaphylatoxin chemotactic receptor 1 (C5AR1), CD88] expressed on macrophages responds to C5a stimulation and activates PI3Kγ (14, 28). All three Ras isoforms (N-, K-, and H-Ras) were GTP-loaded after macrophage stimulation with C5a (Figures 1F, H). In mast cells, however, stimulation with adenosine activates PI3Kγ downstream of A3AR (4, 7). Adenosine led to the selective activation of N- and H-Ras, but not K-Ras (Figures 1E, G), suggesting that K-Ras is normally not involved in GPCR-mediated activation of PI3Kγ in mast cells.
Loss of N- and H-Ras Is Compensated by Upregulation of Substitute Ras Isoforms
To further study the physiological importance of the N-Ras and H-Ras isoforms for mast cell activity, we derived mast cells from N-Ras −/− and H-Ras −/− mouse bone marrow. Unexpectedly, neither adenosine-induced signaling to PKB/Akt (Figure 2A) nor PI3Kγ-dependent migration (Figures 2B, C) or degranulation (Figure 2D) was affected in any of the analyzed genotypes. However, quantification of Ras proteins revealed an upregulation of alternative Ras isoforms in the knock-out cells (Figure 2E). The level of K-Ras was elevated twofold in N-Ras −/− and H-Ras −/− cells as compared to wild-type BMMCs. H-Ras showed approximately a 1.5-fold increase in N-Ras −/− cells. Interestingly, N-Ras was not upregulated in H-Ras −/− BMMCs. The fact that K-Ras upregulation occurs significantly only in the absence of N-Ras or H-Ras, without any loss of cellular responsiveness, illustrates the ability of K-Ras to compensate for the absence of N-Ras or H-Ras (Figure 2F). Altogether this demonstrates that a major part of the GPCR signal is relayed via N-Ras and H-Ras, but that a dynamic compensatory redundancy of Ras isoforms exists in mast cells.
As all Ras proteins have to undergo post-translational isoprenylation to enable stable lipid membrane anchoring, the redundancy revealed between N-Ras, H-Ras, and K-Ras was not expected to interfere with the use of farnesyltransferase inhibitors (FTIs) to achieve pharmacological Ras inhibition in mast cells.
Isoprenylation inhibitors prevent the addition of farnesyl (FTIs) or geranylgeranyl residues (GGTIs) to the CaaX box motif at the C-terminus of small GTPases, thus detaching these proteins from cell membranes (39)(40)(41). FTI-277 was used here because of its excellent selectivity for FTase over GGTase I (42).
The three N-, H-, and K-Ras isoforms differ only in the 25 C-terminal amino acids containing the CaaX-box motif. To validate the action of FTI-277, GFP constructs using the N-, H-, and K-Ras CaaX-box sequences were transfected into mast cells. FTI-277 caused delocalization of all GFP-CaaX constructs from the plasma membrane to the cytosol, resulting in complete displacement of H-Ras and reduction of N- and K-Ras in the cortical regions (Figure 3).
Subsequently, we assessed the effect of FTI-277 on Ras activation: in mast cells, adenosine-induced N-Ras activation was completely blocked, and H-Ras activation was decreased, although not statistically significant ( Figures 4A, B). In macrophages, N-, H-, and K-Ras were activated upon stimulation with C5a, but here FTI-277 only inhibited activation of H-Ras ( Figures 4C-E).
To determine why macrophages remained relatively insensitive towards FTI-277, expression levels of prenyltransferases before and after inhibitor exposure were determined. Treatment with FTI-277, however, did not cause significant differences in the expression of prenyltransferases in either macrophages or mast cells (Figures S2B, C). The accumulation of prelamin A confirmed that protein farnesylation was effectively blocked under the applied incubation conditions. The fact that FTI-277 had no effect on Rap1A geranyl-geranylation excludes an effect of FTI-277 on geranyl-geranylation in BMMCs and BMMØs (Figure S1).
Ras Inhibition With FTI-277 Leads to Reduced PI3Kγ Signaling in Mast Cells
Mast cells are the main effector cells during acute IgE-dependent allergic reactions such as anaphylaxis. Activation of IgE-sensitized mast cells with allergen leads to release of inflammatory mediators from pre-formed granules. Simultaneously, de novo synthesis of cytokines, chemokines, and other compounds is initiated. Importantly, secretion of TNF-α supports adhesion of rolling blood leukocytes to endothelia by upregulation of the adhesion molecule VCAM-1 (6). FTI-277 attenuated degranulation of IgE/antigen-activated mast cells upon co-stimulation with adenosine, but not with stem cell factor (SCF), which signals through c-kit and PI3Kδ (Figure 5A). Furthermore, p110γ-dependent expression of TNF-α and IL-6 was significantly decreased in mast cells co-stimulated with IgE/antigen and adenosine (Figure 5B).
Next, BMMCs and BMMØs were treated with FTI-277 (5 µM for 72 h) to assess phosphorylation of the main PI3K downstream target PKB/Akt. To distinguish between Ras-dependent and Ras-independent PKB activation, BMMCs and BMMØs were stimulated with GPCR ligands after exposure to FTI-277: adenosine stimulation leads to PI3Kγ activation via the adenosine receptor A3 (A3AR, ARA3) in mast cells (4, 7), and C5a stimulates PI3Kγ signaling via C5aR (CD88) in macrophages (14). Receptor tyrosine kinase (RTK) ligands such as SCF and macrophage colony-stimulating factor (M-CSF) served as reference for PI3Kγ-independent signaling to PKB/Akt. In mast cells, FTI-277 led to a significant decrease in PKB/Akt phosphorylation at Ser473 upon activation with adenosine but had no effect on PKB/Akt activation upon stimulation with SCF. This demonstrates that the inhibitory action of FTI-277 is specific for the GPCR-mediated activation of PI3Kγ (Figure 5C) but does not affect signaling through PI3Kδ to PKB/Akt (43). In contrast to mast cells, Ras inhibition did not affect C5a-induced phosphorylation of PKB in macrophages (Figure 5D).
The IC50 for inhibition of PKB phosphorylation by FTI-277 in adenosine-stimulated BMMCs was 1.65 μM for pSer473 and 1.61 μM for pThr308, and 4.76 μM for phosphorylation of mitogen-activated protein kinase (MAPK, also known as ERK1/2; Figure S4). PKB phosphorylation downstream of PI3Kγ is therefore more sensitive towards FTI-277 than MAPK, which might be attributed to a proposed multi-step cascade activation of MAPK (44).
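An IC50 of this kind is typically estimated by fitting a sigmoidal dose-response model; the exact fitting procedure used in this study is not specified, so the following is only a generic sketch with a four-parameter logistic curve and invented data points.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    # Four-parameter logistic: signal falls from 'top' to 'bottom' around ic50.
    return bottom + (top - bottom) / (1 + (x / ic50) ** hill)

dose   = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])      # µM FTI-277 (hypothetical)
signal = np.array([0.98, 0.92, 0.65, 0.30, 0.10, 0.05])  # normalized pPKB (hypothetical)

popt, _ = curve_fit(four_pl, dose, signal, p0=[0.0, 1.0, 1.5, 1.0])
print(f"IC50 ~ {popt[2]:.2f} µM")
```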
Chemoattractant-mediated leukocyte recruitment to inflamed tissues is initiated by GPCR engagement and PI3Kγ activation. Here, we assessed the effect of Ras inhibition on mast cell and macrophage migration in vitro in a Transwell migration assay. In line with the results for PKB phosphorylation, only adenosine-stimulated migration of mast cells was significantly impaired by FTI-277 (Figure 5E). Meanwhile, macrophage migration toward the GPCR agonist C5a, as well as the RTK ligand M-CSF, remained intact (Figure 5F).
GGTI-298 Inhibits Mast Cell Activation
Other post-translational modifications such as geranyl-geranylation are also known to provide lipid anchoring in membranes. Inhibition of geranyl-geranylation with GGTI-298 likewise interferes with PI3Kγ signaling: interestingly, GGTI-298 blocked PKB phosphorylation in mast cells, but not macrophages (Figures 6A, B). Still, phosphorylation of mitogen-activated protein kinase (MAPK) was inhibited in adenosine-stimulated mast cells (Figure 6C) and in macrophages exposed to C5a or M-CSF (Figure 6D). Although Ras proteins have not previously been reported to be geranyl-geranylated under normal conditions, we observed that GGTI-298 treatment causes re-localization of GFP-tagged H-Ras (Figure S3). N-Ras and K-Ras did not translocate to the cytosol under the same conditions. The specific action of GGTI-298 on geranyl-geranylation was confirmed by the accumulation of non-geranyl-geranylated Rap1A, while prelamin A did not accumulate in its non-farnesylated form in mast cells and macrophages (Figure S1).
Altogether, these results suggest that GGTI-298 does not affect PI3Kγ signaling in mast cells by interference with Ras activation, but is likely to intercept upstream activation by geranyl-geranylated Gγ subunit subtypes, which seem to be set up differently in macrophages.
Statins Inhibit Mast Cell Degranulation Independently of PI3Kγ
Statins inhibit HMG-CoA reductase, the rate-limiting enzyme of cholesterol biosynthesis, and deplete farnesyl pyrophosphate (FPP). The lack of substrate for FTase and GGTase then reduces protein farnesylation and geranyl-geranylation. It has previously been reported that statins inhibit mast cell cytokine production (45) and degranulation (46), but no mechanistic explanations are available. Recent studies have demonstrated that statins also reduce disease activity in rheumatoid arthritis (47)(48)(49), illustrating their immunomodulatory effects.
We therefore investigated whether statins could interfere with the Ras-PI3Kγ signaling pathway and thereby elicit anti-inflammatory actions. Among the statins tested, Simvastatin and its active derivative Simvastatin Sodium Salt (Simvastatin-Na) had the most pronounced effect on mast cell degranulation (Figure 7A). Simvastatin-Na decreased degranulation upon IgE/antigen stimulation and upon IgE/antigen-adenosine co-stimulation in a concentration-dependent fashion (Figure 7B). However, decreased degranulation did not correlate with changes in Ras or PI3Kγ pathway activation, since PKB and MAPK phosphorylation remained unaffected even at elevated concentrations of Simvastatin-Na (Figures 7C, D), suggesting that Simvastatin blocks degranulation through pleiotropic action and not specific interference with PI3Kγ.
Sensitivity to Ras Inhibition Is Largely Determined by the PI3Kγ Adaptor Subunit
The PI3Kγ adaptor subunits p84 and p101 are differentially expressed in various cell types of the myeloid and lymphoid lineages (5, 6, 32, 50). Mast cells express exclusively the p84 adaptor protein, while macrophages harbor both subunits. We measured messenger RNA by qPCR to compare expression of p101, p84, and p110γ between BMMCs and BMMØs (Figure 8A). According to the qPCR data, mast cells express only the p84 subunit, whereas p101 is the dominant adaptor protein in macrophages, with its expression level exceeding p84 tenfold (Figure 8A). Next, we used recombinant p84/p110γ and p101/p110γ complexes to calibrate the quantification of the corresponding proteins (Figure 8B). BMMCs and BMMØs possess similar amounts of p110γ (≈33,000 vs 35,000 molecules/cell). In BMMCs, the ratio of p84 to p110γ (≈32,000 and 33,000 molecules/cell) is nearly one to one, while p101 was undetectable. Meanwhile, in BMMØs the total number of p101 molecules is seven times higher than that of p84 (≈150,000 vs 20,000 molecules/cell).
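As an illustration of how a recombinant standard curve converts western blot band intensities into molecules per cell, a minimal sketch; the loadings, signals, molecular weight and cell number are all hypothetical placeholders.

```python
import numpy as np

std_ng     = np.array([0.5, 1.0, 2.0, 4.0])           # recombinant p101 loaded (ng)
std_signal = np.array([210.0, 430.0, 850.0, 1700.0])  # densitometry signal (a.u.)

slope = np.polyfit(std_ng, std_signal, 1)[0]          # signal per ng in the linear range
sample_ng = 1320.0 / slope                            # p101 in the lysate lane (ng)

AVOGADRO, MW_DA, CELLS_LOADED = 6.022e23, 101_000.0, 2.0e5
molecules = sample_ng * 1e-9 / MW_DA * AVOGADRO       # ng -> g -> mol -> molecules
print(f"~{molecules / CELLS_LOADED:.3g} molecules/cell")
```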
It was previously reported that Ras is indispensable for membrane recruitment and activation of p84/p110γ, but not of p101/p110γ complexes (36). To assess whether the observed difference in sensitivity to FTI-277 between mast cells and macrophages could be explained by the difference in the adaptor subunit content of these cells, we used a mast cell complementation technique described in (5): p110γ null BMMCs lack both p110γ and functional p84 and are ideal for complementing either p84/p110γ or p101/p110γ complexes by nucleofection and assessing differences in the signaling outputs of the two complexes in the same cellular context. We reconstituted p110γ −/− BMMCs with p84/p110γ or p101/p110γ complexes and treated them with FTI-277 (Figure 8D). Only cells that expressed p84 as adaptor subunit exhibited a significant reduction of phosphorylated PKB at Ser473. In contrast, PI3Kγ activation in cells expressing the p101 adaptor protein remained insensitive to Ras inhibition. Remarkably, regardless of the adaptor protein, FTI-277 caused a significant decrease in the level of phosphorylated MAPK (Figure 8E). As a complementary approach for Ras inhibition, we overexpressed the GTPase-activating protein (GAP) domain of neurofibromin 1 (NF1) together with p84/p110γ or p101/p110γ in p110γ −/− BMMCs. The ability of cells to migrate towards chemoattractant was then tested in a Transwell migration assay (Figure 8C). As with the effect of FTI-277 on PKB phosphorylation, only p84/p110γ-containing cells lost their migratory potential after NF1 overexpression, while migration of cells expressing p101 was insensitive to Ras inactivation.
Since FTI-277 potentially affects all farnesylated proteins and does not specifically target Ras, we further excluded the possibility that the observed inhibitory action of FTI-277 on PI3Kγ signaling is a result of impaired G protein processing and function. Farnesylation of Gγ and palmitoylation of Gα subunits of heterotrimeric G proteins are vital for proper GPCR signal transduction (51). In mast cells, only Gαi-coupled GPCRs, such as the adenosine A3 receptor (A3AR), have been reported to activate PI3Kγ (4). The Gαq-coupled platelet-activating factor (PAF) receptor, however, triggers calcium release from internal stores and phosphorylation of cyclic AMP-responsive element-binding protein (CREB) independently of PI3Kγ (1). Treatment with FTI-277 had no effect on PAF-stimulated CREB phosphorylation in mast cells and macrophages (Figure S5), showing that trimeric G protein activity remains intact upstream of PI3Kγ.
RAS Isoforms Involved in Mast Cell Activation
The importance of Ras signaling in IgE-dependent mast cell activation has been demonstrated earlier, but the nature of the downstream events has remained obscure: it has been observed that the deletion of RabGEF1 leads to the overactivation of Ras and thus causes mast cell hyperresponsiveness towards FcεRI stimulation, resulting in severe skin inflammation with an accumulation of tissue mast cells in mice (52). Constitutive Ras activation due to deficiency of neurofibromin 1 (NF1), on the other hand, leads to hyperproliferation of mast cell-rich neurofibromas, in a situation where RTK signaling seems to dominate Ras activation (53). The importance of GPCR-induced Ras activation in mast cells, however, has barely been explored, despite the physiological implications for mast cell chemotaxis and synergism with FcεRI activation during degranulation (9). PI3Kγ in mast cells is a downstream effector of Ras and GPCRs. In the present study we exploited Ras-PI3Kγ interactions as a proof-of-concept strategy for a cell-specific regulation of PI3Kγ activity.
[Figure 7 legend fragment: (B) Dose dependency for Simvastatin-Na was tested in three independent experiments with a total of n = 9 biological replicates. Degranulation was assessed with IgE/antigen stimulation alone (Ag, DNP-HSA 2 ng/ml) and IgE/antigen co-stimulation with adenosine (2 µM). (C, D) Western blot analysis of BMMCs treated with 5 µM Simvastatin-Na for 16 h and starved for 4 h. Adenosine (2 µM) was used as stimulus. Phosphorylated PKB and MAPK were quantified from n = 3-9 experiments. Statistical significance was tested with one-way ANOVA applying Bonferroni correction.]
Among the seven Ras isoforms previously shown to interact with and activate PI3Ks (N-, H-, K4A-, K4B-, R-, R1-, and M-Ras) (54), only N-Ras and H-Ras were found to be activated downstream of GPCRs in mast cells. N- and H-Ras, but not K-Ras, were previously reported to be associated with cholesterol-rich lipid raft domains at the plasma membrane (55). Their activation downstream of the GPCR is, therefore, in line with the lipid raft-associated activation of the p84/p110γ complex (5). H-Ras has been proposed to be mainly localized to lipid rafts in the inactive GDP-bound state and to be redistributed to non-raft microdomains of the plasma membrane upon its activation (56, 57), making N-Ras the most probable candidate for p84/p110γ activation at the plasma membrane. However, recent studies using artificial lipid bilayer models show that, in the absence of scaffolding proteins, N-Ras also relocates from rafts to disordered lipid domains when switching to a GTP-bound, active state (58). Naturally, though, the plasma membrane of intact cells provides a very different environment that potentially contains interacting scaffolds. One possibility is that p110γ itself directs Ras to dedicated membrane microdomains. One could speculate that p110γ/p84 complex formation induces a conformational change that favors Ras binding. Analogously, allosteric effects in p110γ upon p101 binding have previously been proposed to increase Gβγ affinity (59), thus minimizing the need for Ras to achieve membrane recruitment of PI3Kγ.
[Figure 8 legend fragment: Cells were transfected with plasmids encoding functional p110γ and either HA-tagged p84 or p101. 5 h after transfection, cells were put in fresh medium containing DMSO or 5 µM FTI-277. The next day, cells were starved in IL-3-free medium containing 2% FCS and stimulated with 2 µM adenosine for 2 min at 37°C. Phosphorylation of PKB at Ser473 and of MAPK was determined by Western blotting and normalized to the total PKB or MAPK levels, respectively. Transient expression of p110γ was assessed with anti-p110γ antibodies, while p84 and p101 were detected with anti-HA antibodies (n = 5-6). Student's t-test was performed to test for statistical relationships.]
Nonetheless, altering p110γ affinity towards Ras does not fully explain the isoform preference we observed in mast cells.
Pharmacologic Inhibition of Ras With FTI, GGTI, and Statins
Pharmacological Ras inhibition with FTI-277 diminished N-Ras activation in mast cells but did not impact N-Ras and K-Ras activation in macrophages. In the case of H-Ras, we observed reduced activation in both cell types (BMMC p = 0.0863, BMMØ p = 0.0241). Alternative geranyl-geranylation of N-Ras and K-Ras in the presence of FTI-277 could have provided a hypothetical explanation for the insensitivity of macrophages towards the inhibitor. According to our qPCR data, however, macrophages express less FTase as well as less GGTase compared to mast cells, and these expression levels were not influenced by FTI-277. This supports neither the assumption that macrophages need a higher dosage of FTI treatment to cope with higher amounts of FTase, nor that geranyl-geranylation is more efficient in macrophages than in mast cells. Another hypothesis is that varying half-lives of Ras proteins in different cells might explain the differential susceptibility to farnesyltransferase inhibitors.
FTI-277 blocks protein farnesylation in macrophages as well as in mast cells, as controlled by accumulation of prelamin A. It is expected that farnesylated proteins other than Ras also malfunction under FTI-277 treatment. Three subtypes of Gγ subunits of trimeric G proteins, Gγ1, Gγ8, and Gγ11, are farnesylated (51). Phosphorylation of CREB in cells stimulated with platelet-activating factor (PAF), however, was not attenuated (Figure S4). This demonstrates that the functions of Gαq-containing trimeric G proteins are intact. It is therefore unlikely that insufficient processing of Gγ subunits in FTI-277-treated cells causes defective GPCR-to-PI3Kγ signaling. Altogether, it appears that Ras functions are conserved more strictly in macrophages, ensuring proper host defense. Further mechanistic studies are needed to elucidate the mechanism behind the resistance of Ras proteins to FTIs in macrophages, and whether it could be used for developing strategies for cell-specific Ras targeting. Yet, a recent study by Bratt et al. (60) found that FTI-277 unexpectedly worsened asthmatic airway changes in mice instead of ameliorating them. The authors examined Ras localization in bronchial epithelial cells but did not see translocation from the membrane-bound to the cytosolic state. Instead, accumulation of farnesyl pyrophosphate (FPP) under FTI treatment turned out to exacerbate allergic asthma. Thus, in vivo, FTI-277 does not act as a cell type-specific agent and, more importantly, did not show clinical benefits against allergic asthma.
Although Ras is not known to be geranyl-geranylated in the absence of FTIs, we observed reduced H-Ras membrane localization upon treatment with GGTI-298. We also found that inhibition of geranyl-geranylation with GGTI-298 impacted PI3Kγ signaling. This indicates that non-Ras proteins likely underlie the different sensitivity of mast cells and macrophages towards GGTIs (and FTIs) as well.
Rab proteins are geranyl-geranylated small GTPases. Rab5 in particular delivers H-Ras from the cell membrane to recycling endosomes (61, 62). Still, the di-cysteine motif in Rab5's C-terminus is modified by GGTase II (also known as Rab geranyl-geranyl transferase, RGGT) and is not expected to malfunction under treatment with the GGTase I-specific GGTI-298 (63). Several Gγ subunit subtypes (Gγ2-5, 7, 9, 10, 12 and 13) are geranyl-geranylated by GGTase I and therefore likely fail to operate as a relay from GPCR to PKB after GGTI-298 exposure. Despite the finding that macrophages are more resistant to GGTI-298 than mast cells, inhibition of the majority of Gγ proteins is not a viable therapeutic approach for cell-selective PI3Kγ inactivation.
Overall, clinical translation of prenylation inhibitors to mast cell-targeted therapy is currently complicated by the insufficient potency and side effects of FTIs and GGTIs in clinical trials (64). However, inhibition of protein prenylation has previously also been postulated for statins, a class of clinically tolerable inhibitors that deplete FPP and thus lower protein prenylation. In an effort to translate our findings into therapeutic application, we treated mast cells with Simvastatin, Lovastatin and Atorvastatin. Despite effective inhibition of mast cell degranulation by Simvastatin, PI3Kγ signaling to PKB/Akt was preserved. Hence, the mast cell-stabilizing effect of statins is of a different nature than the effects of FTI-277 and GGTI-298.
Mast Cell-Specific PI3Kγ Targeting via p84 and Ras
Since the discovery of the second possible adaptor subunit for PI3Kγ (32), questions have arisen regarding the physiological importance of having two regulatory proteins for PI3Kγ. In recent years it has become evident that p84 and p101 are non-redundant and confer specific properties to PI3Kγ that result in diverse cellular responses. The different outputs triggered by the two complexes are most likely explained by differences in the spatiotemporal distribution of PtdIns(3,4,5)P3 derived from either p101/p110γ or p84/p110γ (5). Ras was shown to be indispensable for membrane recruitment and activation of p84/p110γ, but not of p101/p110γ complexes (36). Ras, therefore, could contribute to the differential coupling of PI3Kγ heterodimers to downstream responses by ensuring their distribution to dedicated membrane compartments.
The two regulatory proteins p84 and p101 are not equally distributed in PI3Kγ-expressing cells. The fact that p110γ −/− mast cells reconstituted with p101/p110γ were resistant to Ras inhibition, while the p84/p110γ-mediated PI3Kγ pathway was inhibited pharmacologically as well as by overexpression of the GAP domain of NF1, shows that the PI3Kγ adaptor subunit is a major factor explaining the differential sensitivity of p101- and p84-dominated cells towards FTIs. Consequently, macrophages/monocytes with an abundance of the p101 adaptor protein (≈150,000 molecules/cell, ≈90% of all adaptor protein) are spared by FTIs. On the other hand, in cells with a predominant expression of the p84 adaptor subunit, such as mast cells, PI3Kγ-dependent responses are susceptible to modulation of Ras signaling.
It was suggested previously that free monomeric p101 is unstable and undergoes cytosolic degradation (65). We were therefore surprised to detect a sixfold higher abundance of p101 compared to p110γ in macrophages. Excess exogenously expressed p101 was previously shown to localize to the nucleus (66). A surplus of p101 over p110γ and p84 in macrophages might favor p101/p110γ complex formation even in the presence of p84.
Overall, the dominance of the p84/p110γ complex clearly renders mast cells Ras-dependent. But the currently available pharmacological inhibitors, such as FTI-277 or statins, act pleiotropically and cannot be pursued as a cell-specific targeting strategy. Cell type-specific modulation of PI3Kγ might be achieved in the future by interference with p110γ-adaptor protein complex formation. The development of specific p84/p110γ targeting strategies for mast cell-related diseases will presumably have limited effects on macrophages and other p101-dominated leukocytes, thus better preserving proper host defense as compared to PI3Kγ ATP-site inhibitors.
DATA AVAILABILITY STATEMENT
All datasets presented in this study are included in the article/ Supplementary Material.
ETHICS STATEMENT
The animal study was reviewed and approved by Swiss Federal Veterinary Office (SFVO) and the Cantonal Veterinary Office of Basel-Stadt (license number 2143).
AUTHOR CONTRIBUTIONS
JJ, EG, AX, TB, and JV performed experiments. JJ, EG, AX, TB, and MW analyzed data, wrote the manuscript and contributed conceptually. All authors contributed to the article and approved the submitted version.
ACKNOWLEDGMENTS
Preliminary conclusions and parts of the content of this manuscript have been published as part of the Ph.D. thesis of Fabrizio Botindari (67), and will be incorporated in a doctoral thesis of JJ (68) (not accessible in full text). | 2020-10-28T13:07:27.605Z | 2020-10-28T00:00:00.000 | {
"year": 2020,
"sha1": "7df20e5d2e5a82f518acbb24b8d070ef235cb777",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2020.585070/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7df20e5d2e5a82f518acbb24b8d070ef235cb777",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
258586913 | pes2o/s2orc | v3-fos-license | THE EFFECT OF MARKET ORIENTATION, ENTREPRENEURIAL ORIENTATION, INNOVATION AND COMPETITIVE ADVANTAGE ON BUSINESS PERFORMANCE OF INDONESIAN MSMEs
Purpose: The objective of this study was to determine the effect of market orientation, entrepreneurial orientation, innovation, and competitive advantage on the business performance of MSMEs (Micro, Small and Medium Enterprises). Design/methodology/approach: This research is quantitative with a case study design. The research was carried out in Indonesia, specifically in Lamongan district, East Java. The research subjects were MSME owners/managers, with a total sample of 302, analyzed using SPSS-AMOS 22 SEM (Structural Equation Modeling). Findings: The results of this study indicate a significant influence of market orientation, entrepreneurial orientation, and competitive advantage on business performance, while innovation has no significant influence on MSME business performance. Research, Practical & Social implications: The theoretical implication of this research is to increase knowledge about market orientation and entrepreneurial orientation, as well as about increasing innovation and competitive advantage, to obtain maximum MSME business performance. The results of this study can provide information to the government and organizations related to MSMEs, in this case the Office of Cooperatives and Micro Enterprises and the Office of Industry and Trade in Lamongan district, East Java, Indonesia. Originality/value: The evaluation of the Structural Equation Model with the univariate normality test showed acceptable critical ratios for skewness and kurtosis, indicating that the data were normally distributed and feasible to use. Of the measures used to assess model fit in SEM (χ2 chi-square, significance probability, CMIN/DF, GFI, AGFI, TLI, CFI, and RMSEA), six criteria were met, so it can be concluded that the measurement model has a good level of fit. The hypothesis tests show that market orientation has a significant effect on MSME performance, entrepreneurial orientation has a significant effect on MSME performance, competitive advantage has a significant effect on MSME performance, market orientation has a significant effect on innovation, market orientation has a significant effect on competitive advantage, entrepreneurial orientation has a significant effect on innovation, entrepreneurial orientation has a significant effect on competitive advantage, competitive advantage has a significant effect on innovation, and innovation has no significant effect on MSME performance.
INTRODUCTION
Micro, Small and Medium Enterprises (MSMEs) are pillars of the Indonesian economy, because this real sector supports the overall economic wheel in almost all sectors (Tetuko, 2021). Based on data from the Ministry of Cooperatives and SMEs for 2018-2019, the MSME sector absorbs up to 97% of the total workforce, provides up to 99% of employment, and contributes 81% of the National Gross Domestic Product (GDP). The increase in the number of MSME units in East Java is in line with East Java's increasing economic growth and is concentrated in urban areas such as the city of Surabaya and its surroundings (Sidoarjo, Mojokerto, Gresik and Lamongan). MSMEs are considered by many parties to be a determinant of success in strengthening regional competitiveness.
For this reason, MSMEs need to improve product innovation and business performance. Among the problems that cause MSMEs to lack competitiveness and business performance, one factor is the limited level of education of MSME actors, which means that their knowledge and skills, including in market orientation and entrepreneurial orientation, are also low (Apindo, 2012). Business performance is a measure of the success of a business (Westerberg and Wincent, 2008). Improving MSME business performance through profit growth, sales growth, and customer growth is a benchmark for staying ahead of the competition.
The MSME sector has proven to be more resilient in dealing with crises, so MSME development needs to receive serious attention from both the government and the private sector, so that MSMEs become more competitive and perform well. MSME activities still encounter obstacles and problems, both internal and external. According to Rosid (2012), internal barriers to SMEs include lack of capital, limited human resources (HR), weak business networks, and weak market penetration capabilities. External obstacles to MSMEs include: a business climate that is not yet fully conducive, limited facilities and infrastructure, the implications of regional autonomy, the implications of free trade, products with short lifetimes, and limited market access. According to Brahmana (2007), competitive advantage is the result of implementing a strategy that utilizes the various resources owned by the company. This is supported by the opinion of Morgan et al. (2004), which states that competitive advantage is the ability of a business entity (company) to provide more value through its products than its competitors, value that brings benefits to customers.
LITERATURE REVIEW AND HYPOTHESES
The Influence of Market Orientation on Business Performance
Market orientation comprises the processes and activities related to creating and satisfying customers by continuously assessing customer needs and wants. Application of market orientation brings increased performance for the company (Uncles, 2000). Market orientation is important to study in relation to business performance because it is an important element that influences competitive advantage and the achievement of high profitability (Narver & Slater, 2000). Market orientation is a business perspective that makes consumers the focus of attention in all company activities. Previous research studies on the effect of market orientation on MSME performance have been carried out by Fard (2009), Hutahayan (2021), and Udriyah et al. (2019), who showed that market orientation influences SME performance.
H1: Market orientation has a positive effect on business performance
The Effect of Entrepreneurial Orientation on Business Performance
Companies with a high level of entrepreneurial orientation will try to obtain the resources provided by the environment. These resources can then be allocated towards proactive and innovative projects that enable companies to explore and exploit opportunities and transform resources into superior performance (Rosenbusch, 2011). The entrepreneurial orientation of MSMEs plays an important role in improving company performance (Mohutsiwa, 2012). Entrepreneurial orientation, consisting of a proactive attitude, daring to take risks, aggressiveness, and autonomy, can increase product sales and marketing. Entrepreneurial orientation can be strategic when combined with appropriate sources of competitive advantage (Mahmood and Hanafi, 2013). Kiyabo & Isaga (2020), Rezaei & Ortt (2018), Fadda (2018), Real et al. (2014), Sulistyo & Ayuni (2020), and Patmi R. et al. (2021) prove that entrepreneurial orientation has an influence on the performance of MSMEs.
Effect of Market Orientation on Innovation
Orientation towards competitors means, for example, that salespeople will try to gather information about competitors and share that information with other functions within the company, for example with the research and product development division, or discuss with company leaders how competitive strengths and innovation strategies are being developed (Ferdinand, 2000).
Effect of Market Orientation on Competitive Advantage
Market orientation makes not only actual but also potential customers a point of reference. Market orientation is divided into three dimensions, namely customer orientation, market information sharing, and coordination between functions within the company, whose decisions lead to long-term focus, competitive advantage, and increased profits (Esteban et al., 2002).
The Effect of Entrepreneurial Orientation on Innovation
Entrepreneurial orientation fosters a passion for creativity and innovation in developing company products (Zhou et al., 2005). Being proactive in observing market developments and being willing to take risks in trying to produce new goods are advantages of SMEs in winning the competition.
The Influence of Entrepreneurial Orientation on Competitive Advantage
Entrepreneurial orientation reflects the company's strategy to achieve competitive advantage (Rauch and Frese, 2009). A proactive attitude responds quickly to market changes and accommodates consumer needs, so that the company has a competitive advantage over its competitors.
The Influence of Competitive Advantage on Business Performance
MSME business performance is enhanced by competitive advantage through increased profits, increased sales, and a large number of customers. According to the research results of Chan et al. (2004) and Majeed (2011), competitive advantage has an influence on company performance.
Reliability Test
Reliability is an index that shows the extent to which a measuring device can be trusted or relied upon (Usman and Sobari, 2013). The reliability test used was Cronbach's alpha. Based on the table above, each variable and indicator used has a Cronbach's alpha coefficient greater than 0.6, so the variables in this study are reliable.
Once it is known that the statement items used in the questionnaire are valid and the variables used are reliable, the next research stage can proceed.
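For illustration, Cronbach's alpha can be computed directly from an item-score matrix; the Likert responses in this sketch are invented, and 0.6 is the threshold used above.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

scores = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]])
alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f} -> {'reliable' if alpha > 0.6 else 'not reliable'}")
```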
Confirmatory Factor Analysis (CFA)
Table 3 shows whether the indicators for each variable are able to reflect that variable. Each indicator reflects the variable used with a p-value of less than 0.05 and a factor loading greater than 0.5, so it can be concluded that the indicators can be used and are appropriate for reflecting the variables.
Goodness of Fit
There are several types of measurements to test the fit of the SEM model to the data (Good fit), namely Chi-square, probability of Chi-square, RMSEA, GFI, AGFI, CMIN/DF, TLI, and CFI.The following table shows the results of the fit model test.Based on table 5, it shows that market orientation has a positive and significant effect on MSMEs performance, entrepreneurial orientation has a positive and significant effect on MSMEs performance, competitive advantage has a positive and significant effect on MSMEs performance, market orientation has a positive and significant effect on innovation, market orientation has a positive effect and significant to competitive advantage, entrepreneurial orientation has a positive and significant effect on innovation, entrepreneurial orientation has a positive and significant effect on competitive advantage, competitive advantage has a positive and significant effect on innovation, and innovation has not a positive and significant effect on MSMEs performance.that market orientation has an influence on the performance of SMEs.
The Effect of Entrepreneurial Orientation on Business Performance
The entrepreneurial orientation variable has a positive and significant effect on MSME business performance, with a p-value of 0.000 < 0.05. These results support previous research on the effect of entrepreneurial orientation on the performance of MSMEs carried out by Kiyabo & Isaga (2020), Rezaei & Ortt (2018), Fadda (2018), Real et al. (2014), and Sulistyo & Ayuni (2020), who prove that entrepreneurial orientation has an influence on the performance of MSMEs.
Effect of Market Orientation on Innovation
The market orientation variable has a positive and significant effect on innovation, with a p-value of 0.000 < 0.05. This result is in accordance with the results of previous research studies on the effect of market orientation on MSME innovation carried out by Na et al. (2019), Wang & Chung (2013), Jing Zhang & Zhu (2016), and Setiawan et al. (2020), who prove that market orientation has an influence on MSME innovation.
Effect of Market Orientation on Competitive Advantage
The market orientation variable has a positive and significant effect on competitive advantage.
The Influence of Innovation on Competitive Advantage
The innovation variable has a positive and significant effect on competitive advantage, with a p-value of 0.000 < 0.05. The use of business resources, in this case relating to information resources, workforce skills, business networks and product quality, is the key to success in the business performance of each MSME.
Entrepreneurial orientation is the entrepreneurial attitude adopted by MSME centers in Lamongan Regency, referring to the processes, practices, decision-making styles and behaviors in organizations, consisting of being proactive, taking risks, and being independent in achieving business performance.
The implication of the results of this study for the grand theory of the Resource-Based View (RBV) is that innovation and competitive advantage are capabilities of MSMEs originating from the internal environment that support improved business performance. This reinforces the RBV theory that the resources and assets owned by MSMEs can improve business performance, which is then followed by market orientation and entrepreneurial orientation.
The implication of the results of this study for the grand theory of the Market-Based View (MBV) is that market orientation and entrepreneurial orientation are MSME strategies for responding to the market, originating from the external environment, which are also points of success for MSME businesses in Lamongan. Market and competitor information that develops in the external environment can become the basis for marketing programs and entrepreneurial orientation, which can later have an impact on improving business performance.
Practical Implications
The results of this study provide practical implications for MSMEs in improving business performance by maximizing their resources. Resources in this case relate to information resources, workforce skills, business networks and product quality. It is hoped that in this way different information can be obtained, resulting in a more in-depth discussion of MSME marketing strategy and business performance.
Production costs during the Covid-19 pandemic caused the income of MSME actors in Lamongan to drop drastically. Raw materials, storage, staff and other costs are all business expenses that increased during the pandemic. MSMEs adopted various strategic preferences, including seeking new markets, finding cheaper sources of raw materials, reducing labor, and requesting payment delays. This situation is certainly not profitable for MSMEs and has an impact on the MSMEs themselves, reducing their efficiency and performance (Diskopum Lamongan, 2020). The lack of creativity and courage to innovate products is one of the factors causing the slow development of MSMEs in Lamongan Regency. The product innovation factor is believed to be a determinant of that development.
Previous research studies on the effect of market orientation on MSME innovation have also been carried out by Na et al. (2019), Wang & Chung (2013), Jing Zhang & Zhu (2016), and Setiawan et al. (2020), who prove that market orientation has an influence on MSME innovation.
These results indicate that the more market-oriented the company, the greater the company's competitive advantage. Previous research studies on the effect of market orientation on competitive advantage have also been carried out by Na et al. (2019), Udriyah et al. (2019), and Osorio Tinoco et al. (2020), who prove that market orientation has an influence on competitive advantage.
Previous research studies on the effect of entrepreneurial orientation on MSME innovation have also been carried out by Makhloufi et al. (2021), Adrie Oktavio et al. (2019), Madhoushi et al. (2011), Musawa & Ahmad (2019), Iqbal et al. (2021), and Sulistyo & Ayuni (2020), who prove that entrepreneurial orientation has an influence on MSME innovation.
Previous research studies on the effect of entrepreneurial orientation on competitive advantage have also been carried out by Maruta et al. (2017), Zeebaree & Siron (2017), Sadalia et al. (2020), and Lee & Chu (2011), who prove that entrepreneurial orientation has an influence on competitive advantage.
H6: Entrepreneurial orientation has a positive effect on competitive advantage
The Influence of Innovation on Competitive Advantage
Competitive advantage has a broad meaning in market competition. The research results of Muscio et al. (2013) focus not only on successful food-process innovation in Guangxi but also on how innovation contributes to enterprise competitiveness. This study provides some valuable insights into food innovation process activities in Guangxi that have achieved success and are competitive in the open market. The results of the empirical study by Sulistyo & Ayuni (2020) show that learning orientation has a positive effect on technical innovation and administrative innovation, that technical innovation has a positive effect on competitive advantage, but that administrative innovation has no positive effect on competitive advantage. Previous research studies on the effect of innovation on competitive advantage have also been carried out by Udriyah et al. (2019), Na et al. (2019), Karanja (2015), Alberto et al. (2013), and Ratnawati et al. (2018), who proved that innovation has an influence on competitive advantage.
H7: Innovation has a positive effect on competitive advantage
The Effect of Innovation on Business Performance
Several studies highlight the importance of innovation for small businesses (SMEs) as an effort to improve SME performance. Research conducted by Nybakk (2012) measured the financial performance of SMEs, looking at the direct and indirect influence between learning orientation and firm innovativeness from the aspects of product innovation, process innovation and business system innovation. Previous research studies on the influence of MSME innovation on MSME business performance have also been carried out by Udriyah et al. (2019), Rodriguez & Morant (2016), Hutahayan (2021), Nasser Alyahyaei et al. (2020), Kijkasiwat & Phuensane (2020), Sulistyo & Ayuni (2020), and Ratnawati et al. (2018), who prove that innovation has an influence on the business performance of SMEs.
, and Majeed (2011) found that competitive advantage has an influence on company performance. Several indicators have to be created to measure profit, including customer loyalty, technology development, and product development. Measurements of sales development, customer development, profit development, and working capital development are indicators of improving SME performance. The effect of competitive advantage on MSME business performance has also been examined in previous studies; Udriyah et al. (2019), Kiyabo & Isaga (2020), Rua et al. (2018), Meutia (2013), and Sirivanh et al. (2014) showed that competitive advantage influences MSME business performance.
Figure 1. Structural Equation Modeling Results

The market orientation variable has a positive and significant effect on MSME business performance, with a p-value of 0.043 < 0.05. These results are consistent with previous studies on the effect of market orientation on MSME performance carried out by Fard (2009), Hutahayan (2021), and Udriyah et al. (2019).

The Effect of Entrepreneurial Orientation on Innovation

The entrepreneurial orientation variable has a positive and significant effect on innovation, with a p-value of 0.000 < 0.05. This result supports previous studies on the influence of entrepreneurial orientation on MSME innovation carried out by Makhloufi et al. (2021), Adrie Oktavio et al. (2019), Madhoushi et al. (2011), Musawa & Ahmad (2019), Iqbal et al. (2021), and Sulistyo & Ayuni (2020), all of which found that entrepreneurial orientation influences MSME innovation.

The Influence of Entrepreneurial Orientation on Competitive Advantage

The entrepreneurial orientation variable has a positive and significant effect on competitive advantage, with a p-value of 0.009 < 0.05. This is in accordance with previous studies on the effect of entrepreneurial orientation on competitive advantage carried out by Maruta et al. (2017), Zeebaree & Siron (2017), Sadalia et al. (2020), and Lee & Chu (2011).
The Effect of Innovation on Business Performance

The innovation variable has no positive effect on MSME business performance, with a p-value of 0.684 > 0.05. This result contradicts previous studies on the influence of MSME innovation on MSME business performance carried out by Udriyah et al. (2019), Rodriguez & Morant (2016), Hutahayan (2021), Nasser Alyahyaei et al. (2020), Kijkasiwat & Phuensane (2020), Sulistyo & Ayuni (2020), and Ratnawati et al. (2018), which found that innovation influences the business performance of SMEs.

The Influence of Competitive Advantage on Business Performance

The competitive advantage variable has a positive and significant effect on MSME business performance, with a p-value of 0.000 < 0.05. This result is in accordance with previous research on the effect of competitive advantage on MSME business performance conducted by Udriyah et al. (2019), Kiyabo & Isaga (2020), Rua et al. (2018), Meutia (2013), and Sirivanh et al. (2014).

The market orientation examined in this study involves aspects of customer orientation, competitor orientation, and market information carried out by MSME centers in Lamongan. The theoretical implication of this research is to increase knowledge about marketing strategies in the MSME sector by integrating market orientation and entrepreneurial orientation to improve business performance. Further research can build on these results by examining other factors that can affect MSME business performance. Future research can also develop in terms of methodology and research objects, such as using in-depth interviews with business actors or studying MSMEs outside the Lamongan district, Indonesia.
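To make the decision rule used throughout these results explicit, the short sketch below (Python; the path labels are informal shorthand, and the p-values are the ones reported above) flags each structural path as supported at the conventional 0.05 significance level:

```python
# Sketch: applying the alpha = 0.05 decision rule to the p-values reported above.
p_values = {
    "market orientation -> business performance": 0.043,
    "entrepreneurial orientation -> innovation": 0.000,
    "entrepreneurial orientation -> competitive advantage": 0.009,
    "innovation -> business performance": 0.684,
    "competitive advantage -> business performance": 0.000,
}

ALPHA = 0.05
for path, p in p_values.items():
    verdict = "supported" if p < ALPHA else "not supported"
    print(f"{path}: p = {p:.3f} -> {verdict}")
```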
Table 4
Goodness of Fit Results

Based on Table 4, seven methods are used to measure the fit of the model in SEM. Six of the seven criteria are satisfied, so it can be concluded that this measurement model has a good level of fit.
Table 5
Hypothesis Test Results with T-statistics | 2023-05-10T15:11:58.501Z | 2023-04-18T00:00:00.000 | {
"year": 2023,
"sha1": "d7c1b2c91dd21ad083aba65a30ee70602ccc51eb",
"oa_license": "CCBYNC",
"oa_url": "https://www.openaccessojs.com/JBReview/article/download/1563/620",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "781bc62f084090acce80fba70a9de898352491c4",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": []
} |
250126937 | pes2o/s2orc | v3-fos-license | Single-cell immune atlas for human aging and frailty
Aging is associated with significant immune functional decline and remodeling, largely driven by alterations of gene expression and regulation of immune cells. For decades, researchers have investigated the dynamics of gene expression of immune cells through aging, both in humans and in mice. However, most of these studies were done at the bulk-cell level and were therefore influenced by shifts in the proportions of immune cell subsets. In addition, only limited inferences can be drawn from observations in mice because the genomic and environmental background of laboratory mice is artificially limited, while human beings have diverse genetic backgrounds as well as individualized living environments, nutrition, gut microbiota, latent infections, socioeconomic circumstances, and many other lifestyle factors [1]. With the recent advent of single-cell multi-omics technologies, analyses of gene expression of individual immune cells have been performed in great detail at high resolution [2,3]. For example, Liu and his colleagues studied age-related alterations in gene expression in multiple tissues and organs at the single-cell level and have constructed an Aging Atlas database as a resource [4].
By convention, chronological age is used as the indicator of aging, as it has been in most prior studies of the relationship between aging and immunity. However, everyone ages differently, and chronological age does not address the tremendous heterogeneity of the older adult population. Frailty is a common geriatric syndrome and state of poor health characterized by decreased physiologic reserve and increased vulnerability, leading to adverse health outcomes [5]. Biologically, frailty serves as a useful tool to address the heterogeneity of aging and the associated changes and maladaptation of multiple systems, including the immune system, beyond chronological age. For example, substantial evidence suggests that the chronic low-grade inflammatory phenotype (CLIP) plays an important role in the pathogenesis of frailty [6]. Other immune aging parameters have also been linked to poor health status, such as frailty (reviewed in Chen et al.) [7]. In a study recently published in Nature Aging, Luo and colleagues performed a comprehensive single-cell transcriptomic and TCR repertoire analysis, identifying gene expression signatures and functional characteristics of immune cell subsets along the whole spectrum of the aging process, from neonates (using cord blood) and young adults as controls to two groups of older adults who either appeared to be healthy ("healthy aging") or who were frail ("frailty") at similar chronological ages (85.8 ± 11.1 years vs 88 ± 5.8 years, respectively) (Fig. 1) [8].
Similar to the study of Zheng [9], Luo et al. confirmed previous findings of different distributions of immune cell subsets at different ages, such as decreased proportions of naïve cells and increased proportions of memory and inflammatory cells. However, it was difficult to distinguish between the healthy and frail older adult groups in terms of immune cell subset population sizes, except for a slightly increased CD4+ Tcm frequency in the frail group. As expected, an age-dependent accumulation of cellular heterogeneity and transcriptomic variability across the immune cell subtypes was evident. Compared to the healthy older adult group, a set of frailty-specific gene expression differences in 16 immune cell subsets was identified. These results, for the first time, demonstrate significant differences in the gene expression dynamics of major immune cells between healthy aging and frailty. The authors also identified characteristic transcription factors (TFs) in various immune cell subtypes of certain age groups based on the specific expression of gene sets. Furthermore, transcriptional regulatory network analysis showed increased expression of TFs of the AP-1 and NF-κB families by immune cells in the frail older adult group, adding molecular evidence supporting a potential role of immune and NF-κB pathway activation in contributing to CLIP in frailty.
Similar to the heterogeneity and variability of gene expression through aging, the immune cell subsets from healthy aging and frailty showed distinct aging dynamics and fates. Based on the results of the cell fate trajectory ("pseudotime") analysis, the aging process of immune cells showed two major modes. In one mode, the pace of gene expression change matches chronological age, as for B, CD8 Tem, and NK1 cells. The other mode shows dramatic gene expression alterations at a specific point in time, as for naïve T, CD4 Tcm, and NK2 cells (Fig. 2). More in-depth studies are needed to further explore how the gene expression characteristics of each immune cell subset contribute to its distinct aging trajectory, and to clarify whether the differential pace results in distinct functional outcomes.
Among the immune cells, T lymphocytes are affected broadly by aging. Frequencies of certain T-cell subsets were significantly altered during aging, including the well-documented decrease of naïve CD8 T cells and increase of CD4 Tcm and exhausted T cells. CD4 Tcm and CD8 Tem frequencies also changed with frailty and aging. As such, the ratio of CD4 Tcm to CD8 Tem could potentially serve as a biomarker. Whether such a change in the CD4 Tcm/CD8 Tem ratio corresponds to the change in the well-documented CD4/CD8 ratio remains to be investigated. The observed decrease of TCR repertoire diversity from healthy aging to frailty could potentially serve as a biomarker for frailty as well. Another interesting finding from the study is the shared T-cell clones between healthy aging and frailty. It is not surprising that the same
T-cell clones, which are defined by their possession of the exact same TCR sequences, were found in both naïve and memory T cell pools, as this is the basis of physiological adaptive immunity. However, the study also demonstrated significantly increased sharing of T-cell clones among different T-cell subsets, especially shared TCRs between CD4+ and CD8+ T cells. Considering the dramatic decrease of TCR repertoire size from approximately 10⁸ in young adults to 10⁶ in frail older adults, the immune system may need a compensatory mechanism to meet constant pathogen challenges. The fact that T cells exhibit different gene expression profiles during aging and yet additional ones in frailty, coupled with the fact that most of the shared TCRs were specific to cytomegalovirus (CMV), which is essentially ubiquitous in older adult populations in China, suggests significant pluripotency and resiliency of those T cells. This agrees with previous studies that observed cytotoxic function of CD4+ T cells in centenarians [10], suggesting that T cells in an immunosenescent state have the potential to be pluripotent.
Unlike T cells, B lymphocyte frequencies showed no significant difference between healthy aging and frailty. In older adults, however, an exhausted status of NK cells was associated with frailty. Single-cell technology can help to discover rare cell subtypes or subtypes that have never been reported before. In this study, a subpopulation of frailty-specific monocytes was identified, which might be derived from the classical monocyte population. The gene expression pattern of this frailty-specific monocyte subset differed from that of the defined classical, intermediate, or non-classical monocytes, particularly with regard to its high-level expression of the long non-coding RNAs (lncRNAs) NEAT1 and MALAT1 and of the transcription factors ZEB2, NFKB2, and REL, and its low-level expression of CST3, FTL, CCL4, and MHC class II genes. However, no specific cell surface markers are known to identify this frailty-specific monocyte subset at the present time. Further studies are indicated to translate these intriguing findings to the protein and functional levels.
The high expression of the lncRNAs NEAT1 and MALAT1 (also known as NEAT2) in frailty-specific monocytes and exhausted T cells raises several interesting questions. Because their altered expression has also been observed in several types of tumor cells, it is important to understand the relationships among high NEAT1 expression, tumorigenesis, and senescence. The question of why only monocytes and exhausted T cells express high levels of NEAT1 is also worth noting. Considering the role of NEAT1 in the formation of the paraspeckle backbone in the nuclear interchromatin space, more studies are warranted to explore the potential role of paraspeckles in aging and cellular senescence.
In summary, the study of Luo et al. developed a comprehensive single-cell transcriptomic and TCR repertoire atlas of human immune cells, from birth and young adulthood to older adults who are apparently healthy or frail at a similar chronological age. It provides a basic resource for investigating the potential impact of the clinically defined syndrome of frailty on immune aging and vice versa. Additionally, considering the continuing challenge of COVID-19, which disproportionately impacts older adults with severe disease and mortality [7], such an atlas will be critically important in leveraging our understanding of immune aging for the design of improved interventions to optimize immune function in senior citizens. | 2022-06-30T15:07:36.824Z | 2022-06-28T00:00:00.000 | {
"year": 2022,
"sha1": "7187cda164f5b8150384462d89312bc2f59e6d68",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/lifemedi/advance-article-pdf/doi/10.1093/lifemedi/lnac013/44276681/lnac013.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "1e89c905fc31cab8d75a7fbdcb724125f8cfc404",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236399885 | pes2o/s2orc | v3-fos-license | Air Ventilation Performance of School Classrooms with Respect to the Installation Positions of Return Duct
For students, who spend most of their time in school classrooms, it is important to maintain indoor air quality (IAQ) to ensure a comfortable and healthy life. Recently, the ventilation performance required to maintain indoor air quality in elementary schools has emerged as an important social issue due to the increasing number of days with continuously high concentrations of particulate matter. Three-dimensional numerical analysis is introduced to evaluate the indoor airflow according to the installation location of the return diffusers. Considering the possibility of cross-infection of infectious diseases between students due to the direction of airflow in the classroom, the airflow angles within the average respiratory height range of elementary school students, between 1.0 and 1.5 m, are analyzed. Through the numerical analysis of the classroom interior, it is found that the floor return system reduces the indoor horizontal airflow that causes cross-infection among students by 20% compared to the upper return system. The air ventilation performance is also analyzed in detail using the results of the numerical simulation, including streamlines, temperature, and the age of air.
Introduction
Considering that many people spend 60 to 90 percent of their time indoors [1,2], indoor air quality is one of the most critical factors for a healthy life. The World Health Organization (WHO) reported that, annually, 3.8 million premature deaths can be attributed to poor indoor air quality [3]. In particular, young students spend more than 90 percent of their daily lives indoors, including time spent at home, in daycare centers, and in school classrooms. As such, they can be harmed more than adults when exposed to the same level of air pollution [4].
Many researchers have performed ventilation evaluations on control systems, including carbon dioxide (CO2)- and temperature-based systems. Sun et al. [5] carried out CO2-based adaptive ventilation in a multi-zone office building in Hong Kong. They propose that a demand-controlled ventilation (DCV) system can save more than 52% of the energy consumed by a fan compared to a constant air volume (CAV) system. Mysen et al. [6] also reported that a CO2-based DCV system reduces energy consumption in Norwegian school buildings by 38% compared to a CAV system. However, most of these results concern energy savings in buildings, and studies related to the health of building users are rare.
Indoor air quality (IAQ) affects both the comfort and the health of a building's users [7]. To date, indoor temperature and humidity conditions have mainly been considered in the evaluation of IAQ. However, carbon dioxide concentration is also an important IAQ evaluation variable, and many other variables can affect the health of occupants.
There are central and individual supply systems for air conditioning and heating in school classrooms. Central air conditioning systems control indoor airflow according to the positions of the air supply and return diffusers installed in the ceiling of the classroom. Central air conditioning systems have advantages such as easy maintenance, low energy consumption, and high recycling potential [8]. Numerical analysis is widely used for airflow evaluation inside classrooms. Ahmed et al. [9] analyzed the temperature distribution inside rooms according to air supply methods and compared it to experimental data. They introduced a displacement diffuser, a slot diffuser, a square ceiling diffuser, and a grille diffuser to evaluate the airflow and room ventilation. Holmberg et al. [10] calculated the ventilation efficiency with respect to the locations of the air supply and return diffusers. They showed that the positions of both the air supply and the return diffusers are critical to the overall ventilation effectiveness in a room. Many previous studies have considered how indoor ventilation efficiency depends on the installation location of the diffusers. However, little research has examined airflow interference and ventilation effects with respect to the spread of infectious diseases.
In the present study, the air ventilation performance of a school classroom in Korea is analyzed through numerical simulation of the indoor airflow. The classroom is cooled by a central air conditioning system in which an air handling unit installed on each floor simultaneously supplies 10 classrooms. The distributions of temperature and the age of air inside the classroom are also analyzed according to the installation locations of the return diffusers, while the air supply diffusers are fixed in the ceiling of the classroom. Based on the height of the respiratory line of elementary school students, the indoor airflow direction is analyzed with respect to preventing the spread of infectious diseases, such as COVID-19, and cross-infection among students in the classroom. Detailed airflow characteristics inside the classroom are analyzed and described using the results of the three-dimensional numerical analysis.
The paper is organized as follows. Section 1 reviews recent studies on air ventilation and IAQ in school classrooms, identifies research gaps, and states the purpose of the present study. Section 2 presents the state of the art. Section 3 describes the dimensions of school classrooms and the air conditioning methodology. Section 4 presents the analysis method for airflow inside the classroom. Section 5 describes the numerical analysis method. Finally, the results and discussion are presented in Section 6.
State of the Art Research
Indoor air quality technologies can be divided into three categories: monitoring indoor pollutants, purifying indoor pollutants, and smart IAQ control systems [11]. The development of air purification technologies and the optimization of ventilation systems are regarded as the main approaches to improving IAQ. Among these, the investigation of novel technologies to filter and purify indoor air pollutants has attracted great attention over the last decade. Filter technologies such as carbon-based filters, electric filters, photocatalysts, and UV filters have been developed in response to various pollutants.
Among the important indoor health-threatening agents, ultrafine dust is emerging as a concern in the United States [11]. The behavior and interference of indoor airflow is another evaluation parameter for IAQ in terms of public health. Particles floating in a room follow different airflow distributions depending on the location of the ventilation and air conditioning diffusers. The airflow direction inside building rooms is important to reduce particulate matter and to prevent the spread of viruses such as COVID-19. Yu et al. [12] reported the relationship between airflow direction and the number of infected people when SARS struck Hong Kong in 2003. Similarly, Li et al. [13] studied the spread of COVID-19 at a restaurant in China. They showed that infection occurred in the direction of airflow imparted by an air conditioner. Virus transmission through bioaerosols is known to occur when occupants share the space reached by the breathing, coughing, or talking of infected people [14]. It is therefore essential to design an airflow system that reduces mutual airflow interference along the occupants' breathing directions in the occupied space.
Many researchers have also studied the travel distance and direction of droplets from breathing during outdoor exercise, such as running or cycling [15,16]. They found that the diffusion of fine droplets from breathing changes depending on the intensity of the exercise and breathing, as well as the wind speed. Lu et al. [17] reported cases of COVID-19 transmission through air conditioning in restaurants. They checked CCTV footage from the day of the infection and analyzed virus transmission scenarios by considering the opening and closing of the entrance door and elevator and the air conditioning operation times. According to the report, nine people were infected by an asymptomatic patient eating in the same space, and the locations of the infections in the restaurant were related to the direction of the airflow from the air conditioner. The outbreaks in a tour coach in Hunan province [18] and a call center in Seoul [19] also indicated the possibility of airborne transmission. It is increasingly clear and accepted that airborne transmission is an important contributor to the rapid and long-distance spread of SARS-CoV-2 [20]. Therefore, ventilation efficiency and the direction of airflow can be important factors when evaluating IAQ with a view to preventing the indoor spread of viruses such as COVID-19.
Standard Classroom of Elementary Schools in Korea
Elementary schools in Korea have various geometric designs, and the most common "U-type" structure accounts for about 27% of all classrooms [21]. The number of students in traditional elementary schools in Korea is decreasing. As of 2020, the average number of students per class is 23.1, and the length, width, and height of the standard classroom are 8.4 m, 7.2 m, and 2.6 m, respectively. Figure 1 shows a schematic view of a standard classroom for elementary schools in Korea.
Central Air Conditioning System
Mechanical air conditioning systems in schools can be classified into central and individual air conditioning systems. Central air conditioning systems are installed mainly in classrooms and school offices, which are commonly used spaces with constant demand. On the other hand, individual air conditioning systems are applied to laboratories, music rooms, art rooms, etc., which are used irregularly, and air conditioning and ventilation devices are installed in a distributed manner. A central air conditioning system consists of an air handling unit (AHU) installed on each floor, an air supply duct that supplies clean/cool air to each classroom, and a return duct that recovers the circulating air inside the classroom to the AHU, as shown in Figure 2. A central air conditioning system has the advantages of effective power consumption and maintenance compared to individual systems [8]. Central air conditioning systems are also recommended as a flexible countermeasure for recent issues, such as energy consumption, fine dust, and virus spread in public buildings such as schools. Due to relatively lower manufacturing costs, most buildings adopt upper air supply-upper air return systems that install both the air supply and return ducts in the ceiling. However, the installation of the upper supply and return diffusers causes the stratification of air and reduces the efficiency of cooling/heating and ventilation of the indoor space. As shown in Figure 2, an upper air supply-floor return system is an alternative to prevent air stratification. Although the duct system in Figure 2 is expected to increase the construction cost slightly, the system is worth considering because it can help resolve various recent issues, such as energy consumption, fine dust, and virus spread.
Average Height of Students and Respiratory Line
Thermal comfort is closely related to the temperature and the speed of the airflow around the students, along with the metabolic rate of the person and the insulation imparted by clothing. ANSI/ASHRAE Standard 55-2013 recommends that the air temperature difference between the 0.1 m and 1.7 m height levels should be less than four degrees Celsius [22]. Table 1 shows the average height of elementary school students by grade in Korea [23]. Students aged 8 to 13 in elementary school generally have heights between 1.2 m and 1.5 m. Considering the influx of harmful substances, CO2, fine dust, and viruses into the oral cavity through the respiratory tract, the height of the respiratory line is particularly important for evaluating IAQ. It is therefore important to evaluate the airflow at 1-1.5 m, the height of the students' breathing lines.
Airflow Angle
With the recent COVID-19 pandemic, studies have found that droplets from breathing spread up to 1.5 m from the mouth and that coughs spread droplets up to 2 m [24,25]. These studies provided the basis for the 2 m social distancing measures in Korea, and many countries recommend wearing masks to prevent droplets from splashing when people cough or sneeze. The importance of airflow direction and ventilation can be understood through the cross-infection caused by horizontal airflow. To satisfy both ventilation and the prevention of cross-infection simultaneously, ventilation using vertical airflow is desirable. In the present study, three types of airflow directions are defined to evaluate the effects on cross-infection among students, as shown in Figure 3. Among the upward, downward, and horizontal airflows, the horizontal airflow is defined as flow within an angle of 45 degrees relative to the horizontal plane; the airflow angle is the angle of the flow relative to that plane, with upward and downward flows lying beyond the ±45 degree band. It is noted that indoor airflow in the horizontal direction is necessary for ventilation. However, to reduce cross-infection among students, it is desirable to generate vertical airflow at least in the range of 1-1.5 m, the respiratory line of students.
Age of Air
The age of air is defined as the time for which air supplied from the diffusers remains in the room without being replaced by fresh air, despite air circulation in the classroom. The higher the age of air, the more stagnant the airflow, which also suggests partial circulation. It is desirable to reduce the age of air inside the space to achieve uniform ventilation throughout the classroom.
Numerical Analysis Method
Indoor air is affected by inside and outside air temperatures and by the airflow rates supplied by the diffusers. To analyze the indoor air characteristics, considering both airflow and temperature, numerical simulation has been introduced using the commercial software FLUENT V.18.2. The steady Navier-Stokes equations are solved and discretized in space using a second-order upwind scheme. Pressure-velocity coupling is handled by the SIMPLE scheme, and the standard k-epsilon model is used for turbulence.
Computational Domain and Boundary Conditions
The computational domain is determined based on the standard size of Korean elementary school classrooms described in Section 3.1. As shown in Figure 4, it consists of two outer windows, two doors, and an inner window. For indoor air ventilation, cooling, and heating, four supply diffusers are placed in the ceiling, and return diffusers are installed in the classroom ceiling and on the classroom floor.
The boundary conditions are based on the airflow rate and temperature supplied through the air conditioning duct of the central air conditioning system and on the insulation conditions of the outer wall and windows according to the outside temperature. The total supply airflow rate is 800 CMH, determined by the ventilation rate and cooling load of elementary schools in Korea. In the present study, 31 °C outdoor air is applied in accordance with the summer outdoor air conditions for Seoul specified in the building energy conservation design standards. The supplied air temperature for cooling is set at 18 °C and is evenly distributed among the four supply diffusers. The heat transmission coefficients of the outer windows and walls are 1.7 W/m²K and 0.18 W/m²K, respectively. The effect of wall thickness is simulated using shell conduction. Detailed boundary conditions are summarized in Table 2.
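As a rough illustration of how these envelope conditions translate into a conduction load, the sketch below applies the steady relation Q = U·A·ΔT; the U-values and outdoor temperature follow Table 2, while the window and wall areas and the indoor temperature are assumptions made only for this example:

```python
# Sketch: steady conduction gain through the outer envelope, Q = U * A * dT.
U_WINDOW = 1.7   # W/m^2K, outer windows (Table 2)
U_WALL = 0.18    # W/m^2K, outer walls (Table 2)
T_OUT = 31.0     # deg C, summer outdoor air for Seoul (Table 2)
T_IN = 26.0      # deg C, assumed indoor setpoint (not specified in the paper)

def conduction_gain_w(u_value: float, area_m2: float, dt_c: float) -> float:
    """Heat gain in watts through one envelope element."""
    return u_value * area_m2 * dt_c

q_total = (conduction_gain_w(U_WINDOW, 10.0, T_OUT - T_IN)   # assumed 10 m^2 glazing
           + conduction_gain_w(U_WALL, 12.0, T_OUT - T_IN))  # assumed 12 m^2 wall
print(f"Envelope conduction gain: {q_total:.0f} W")          # ~96 W with these inputs
```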
Computational Grids and Grid Dependence Test
Based on the computational domain, computational grids are constructed using the ANSYS meshing program. As shown in Figure 5, a tetrahedral grid is mainly used, and three prism layers are applied to increase the calculation accuracy near the walls.
Grid dependence evaluation is performed based on the temperature, the main property of interest inside the classroom. The temperature measurement location is 1 m above the floor, which, as described in Section 3, corresponds to the height of the respiratory line of elementary school students. As shown in Figure 6, the average temperature remains constant when the grid contains about 4,000,000 elements or more. Therefore, in the present study, the numerical simulation inside the school classroom has been performed using approximately 4,000,000 elements for both conditions.
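A minimal sketch of such a grid-dependence check is given below; the element counts and mean temperatures are hypothetical placeholders, not the values plotted in Figure 6:

```python
# Sketch: pick the coarsest grid whose result changes by < 1% vs the next-coarser one.
results = [(1_000_000, 24.1), (2_000_000, 23.4), (3_000_000, 23.0),
           (4_000_000, 22.8), (5_000_000, 22.8)]  # (elements, mean temp at 1 m)

def first_converged(pairs, tol=0.01):
    """Return the first element count whose value differs from the previous
    grid's value by less than the relative tolerance."""
    for (_, prev), (n, cur) in zip(pairs, pairs[1:]):
        if abs(cur - prev) / abs(prev) < tol:
            return n
    return None

print(first_converged(results))  # 4000000 with these placeholder values
```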
Results and Discussion
In the present study, the internal airflow characteristics are analyzed using numerical simulation by introducing additional floor returns along with the upper (ceiling) air supply and return diffusers generally applied in school classrooms. Using the results of the numerical simulation, the distributions of streamlines, temperature, age of air, and airflow angle inside the classroom are compared for the two return flow locations, the upper and the floor return diffusers.

Figure 7 shows the distributions of streamlines for the upper and floor return diffusers. In the figure, the streamlines are colored differently for each air supply diffuser. It is understood that the distribution and location of the airflow differ greatly depending on the location of the two types of return diffusers. The airflow discharged from the supply diffusers is relatively evenly distributed down to the floor for the floor return in Figure 7b, while the airflow does not reach the floor and circulates only in the upper region for the upper return in Figure 7a. Under the cooling conditions of the classroom, it is noted that the floor return is more effective than the upper return for uniform flow distribution.

Figure 8 shows the iso-temperature surfaces for the upper and floor return diffusers. For the upper return, the cold air supplied from the supply diffusers is mainly distributed above the middle height of the classroom, and the temperature distribution corresponds well to the streamline distribution in Figure 7a. For the floor return, the temperature is relatively low compared to the upper return because the airflow reaches the floor, as shown in the streamline distribution in Figure 7b.
It is found that there is a temperature change in the hallway direction due to the high outside temperature and the heat transfer through the insulated outer wall and windows. The temperature change for the upper return on the outer window side is relatively large compared to that for the floor return due to the difference in airflow distribution according to the installation location of the return diffuser, as shown in Figure 7. In Figure 9b, the cooling effect of the cold air supplied by the diffusers is most dominant between heights of 1.5 m and 2.5 m, which comes from the circulating flow formed between the upper supply diffusers and the upper return diffusers. The cooling effect decreases rapidly at heights of 1.0 m and 0.5 m above the floor, where the rotational flow disappears and the airflow from the upper diffusers hardly reaches. For the floor return in Figure 9c, the temperature difference according to height is relatively small compared to the upper return, and the temperature is lower even at the height of 0.5 m. It seems apparent that this is caused by the cold air reaching the floor due to the floor return.

Figure 10 shows the maximum, minimum, and average temperature according to height for the upper and floor return diffuser systems. The average temperature is determined by averaging the values in the width direction at each height. In both return conditions, the temperature rises from the ceiling of the classroom toward the floor, while the absolute average temperature shows that the floor return system performs better. Considering the average height of 1.2-1.5 m of elementary school students in Korea, shown in Table 1, maintaining a proper temperature below 1.5 m is an important aspect of student comfort.

Age of Air

Figures 11 and 12 show the age of air at Planes 2 and 4 in Figure 9a for the upper and floor return diffuser systems. The age of air is lower in the floor return system than in the upper return system for both planes. This means that the floor return allows relatively less stagnation in the classroom due to the indoor air circulation. On the outer window side, where the diffusers supply cold air, the difference in the age of air between the upper return and the floor return is not large. However, for the upper return, the age of air is relatively high near the floor, where the airflow is small, as shown in Figure 7.

Figure 13 shows the quantitative distribution of the average age of air according to the height from the classroom floor for Planes 2 and 4. The average age of air is determined by averaging the values in the width direction at each height. The age of air shows little variation with height for the floor return system, while it decreases with height for the upper return. It is noted that the age of air in the upper return system is significantly higher than that of the floor return system at heights of 0.5 and 1 m, where the airflow velocity is low. The age of air increases dramatically due to the lower airflow speed and lessened air circulation, which has a great influence on indoor ventilation performance. In the floor return, the average age of air is significantly reduced compared to the upper return. Considering the 800 CMH ventilation supply flow rate of school classrooms, the indoor air circulates about 5.1 times per hour, which corresponds to 708 s in terms of the age of air. As shown in Figure 13, the floor return maintains an age of air of about 708 s at all heights. The average age of air inside the classroom is thus an important parameter for the optimal design of classroom air ventilation at the given ventilation supply flow rate.
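The nominal figures quoted above follow directly from the room geometry and the supply flow rate; the short check below, using the classroom dimensions from Section 3, reproduces them:

```python
# Worked check of the air-change figures quoted above.
ROOM_L, ROOM_W, ROOM_H = 8.4, 7.2, 2.6   # m, standard classroom (Section 3)
SUPPLY_CMH = 800.0                        # m^3/h, total supply airflow

volume = ROOM_L * ROOM_W * ROOM_H         # ~157.2 m^3
ach = SUPPLY_CMH / volume                 # ~5.1 air changes per hour
nominal_age_s = 3600.0 / ach              # ~708 s nominal time constant
print(f"ACH = {ach:.1f}, nominal age of air = {nominal_age_s:.0f} s")
```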
Airflow Angle
As described in Section 3.2, the airflow angle is determined from the velocity components obtained by the numerical simulation. In the present study, the direction of airflow is classified into three zones to analyze how the positions of the supply and return diffusers promote ventilation by vertical airflow without cross-infection between occupants. Figure 14 shows the contour of the airflow angle at the horizontal plane located at a height of 2.0 m with an upper supply and an upper return. In this plane, adjacent to the classroom ceiling, the airflow discharged from the supply diffusers moves downward toward the outer wall and the outer window of the classroom, while an upward airflow appears in the vicinity of the return diffuser. Between these regions, airflow with a wide range of angles is observed. Considering the goal of preventing cross-infection between students in cases such as COVID-19, it goes without saying that ventilation driven by vertical downward flow is desirable.

Figure 15a shows the area by flow-zone classification based on the airflow angle according to height for the upper return and the floor return systems. The zone classification with the three types of airflow direction is defined and evaluated at each height, as described in Figure 3. The two ventilation methods differ markedly in the amount of horizontal airflow, which governs cross-infection among students. At the breathing line of elementary school students, between 1.3 and 1.5 m, the horizontal airflow occupies about 80% of the area for the upper return and about 60% for the floor return. In other words, the floor return is more effective than the upper return in reducing the cross-infection of infectious diseases among students. Horizontal airflow at the breathing line may transmit viruses between neighboring students, so the proportion of horizontal airflow calculated at the breathing line may be an important indicator for evaluating the safety of the indoor airflow.

Figure 16 shows the zone classification of the airflow angle at a height of 1.4 m for the upper and floor return systems. In the upper return ventilation in Figure 16a, the areas of upward, downward, and horizontal flow are mixed broadly near the outer wall compared to the floor return. In the floor return, the area of horizontal airflow, which drives students' cross-infection, is decreased by about 20%; the areas of the three directions according to the height from the floor are listed in Table 3. Throughout the analysis of the airflow inside classrooms, it is found that the floor return ventilation is more effective than the upper return ventilation both in preventing cross-infection and in ventilation performance during cooling.
Conclusions
In the present study, the indoor ventilation performance of school classrooms during cooling has been analyzed using numerical simulation. Distributions of temperature and the age of air inside the classroom have been analyzed according to the installation locations of the return diffusers. Based on the height of the breathing line of elementary school students, the indoor airflow direction has been evaluated to indicate the risk of cross-infection among students in the classroom. The results are summarized as follows: (1) The distribution and location of the airflow differ greatly depending on the location of the return diffusers. The airflow discharged from the supply diffuser is relatively evenly distributed for the floor return, while the airflow does not reach the floor with an upper return; the floor return is thus more effective than the upper return for uniform flow distribution. (2) The temperature change of the upper return on the window side of the outer wall is relatively large compared to that of the floor return, owing to the difference in airflow distribution according to the installation location of the return diffuser. In both return configurations, the temperature rises from the ceiling of the classroom toward the floor, while the average temperature shows that the floor return system performs better. (3) The floor return produces relatively less air stagnation in the classroom thanks to the indoor air circulation, so in terms of ventilation it performs better than the upper return. The age of air with a floor return is significantly reduced compared to an upper return in the low-height region that matters most to occupants. (4) At the breathing line of elementary school students, between 1.3 and 1.5 m, the horizontal airflow occupies about 80% of the area for the upper return and about 60% for the floor return; that is, the floor return reduces the area of horizontal airflow, which drives students' cross-infection, by about 20%. The floor return system is therefore more effective in reducing cross-infection among students than the upper return system. (5) The present study can be extended to the design and optimization of airflow and the prevention of cross-infection in office buildings that adopt a central air-conditioning system. (6) Although the floor return is more effective for ventilation than the upper return, an additional optimal design study is necessary to determine the proper position of the return diffuser. | 2021-07-27T00:05:07.381Z | 2021-05-31T00:00:00.000 | {
"year": 2021,
"sha1": "38021dc99f1dfaeb15633e7ec5fb5edb1d52df0e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/11/6188/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "111295d624b1034ec35779f0923f25782841caf9",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
244675984 | pes2o/s2orc | v3-fos-license | BFR-SE: A Blockchain-Based Fair and Reliable Searchable Encryption Scheme for IoT with Fine-Grained Access Control in Cloud Environment
Due to capacity limitations, large amounts of data generated by IoT devices are often stored on cloud servers. These data are usually encrypted to prevent disclosure, which significantly affects their availability. Searchable encryption (SE) allows a party to store the data created by his IoT or mobile devices in encrypted form on the cloud server, protecting his privacy while retaining his ability to search the data. However, general SE techniques are all pay-then-use, and the searchable encryption service provider (SESP) is merely assumed to be honest but curious, which makes such schemes unfair and unreliable. To address these problems, we combined ciphertext-policy attribute-based encryption, Bloom filters, and blockchain to propose a blockchain-based fair and reliable searchable encryption scheme (BFR-SE) in this paper. In BFR-SE, we constructed an attribute-based searchable encryption model that provides fine-grained access control. The data owner stores the indices on the SESP and stores some additional auxiliary information on the blockchain. After a data user initiates a request, the SESP must return the correct and complete search results before the deadline; otherwise, the data user can send an arbitration request, and the blockchain will make a ruling. The blockchain performs arbitration based on the auxiliary information only when disputes arise, saving on-chain computing resources. We analyzed the security and privacy of BFR-SE and simulated our scheme on the EOS blockchain, which proves that BFR-SE is feasible. Meanwhile, we provided a thorough analysis of storage and computing overhead, proving that BFR-SE is practical and has good performance.
Introduction
With the continuous development of the Mobile Internet, 5G, and other advanced technologies, especially the Internet of Things, people and machines are constantly generating massive amounts of data. Most IoT devices produce large amounts of data but have limited storage capacity, so data owners like to use cloud storage services to reduce maintenance costs and local storage overhead. Cloud services provide users with great convenience, enabling them to access their data anytime and anywhere instead of from a specific machine. But these data, especially the data generated by IoT devices such as smart homes and intelligent wristbands, often contain information sensitive to the user. To prevent disclosure, users encrypt their data before uploading it to the cloud server [1][2][3][4][5][6][7][8]. However, encryption weakens the ability of users to search their data.
Searchable encryption technology was first proposed by Song et al. [9]; it allows a party to store his data in encrypted form on the cloud server to protect his privacy while retaining his ability to search the data. A searchable encryption scheme typically includes three participants: the data owner (DO), the data user (DU), and the cloud server. The DO encrypts his data together with the corresponding keywords and uploads them to the cloud server. The cloud server maintains these ciphertexts and provides search services for data users. A data user initiates a search request using a search token generated from the keywords, and the cloud server sends him the matching search results. Finally, the data user decrypts the ciphertext locally to obtain the data. The whole process does not expose any information related to the data itself. Many researchers have since proposed various searchable encryption algorithms, such as asymmetric searchable encryption [10,11], multikeyword searchable encryption [12,13], and fuzzy keyword searchable encryption [14,15]. Most of the above studies focus on the privacy and performance of searchable encryption in different scenarios and assume that the cloud server is curious but honest. In practice, this assumption does not hold, which raises problems for the fairness and reliability of searchable encryption: (1) On the one hand, after the user pays, the cloud server may fail to provide a satisfactory search service, causing the user economic losses. On the other hand, after the user obtains the desired search results, he may deny the quality of the cloud server's service and deceptively refuse to pay the service fee. (2) The cloud server is not always reliable. To save costs, it may delete data that is rarely used in order to save space; when users search, it may return only part of the search results or even send fake data. In light of the above, in addition to the privacy of keywords and the efficiency of search algorithms, practical searchable encryption is highly expected to be fair and reliable. We therefore need a searchable encryption scheme in which the service provider always provides a reliable search service and the users pay for it, with no trusted third party and yet no economic disputes. Fortunately, with the emergence and development of Bitcoin [16], a decentralized cryptocurrency, its underlying technology, blockchain, can gracefully help us achieve this goal. In this paper, we propose a blockchain-based fair and reliable searchable encryption scheme (BFR-SE). The main contributions of our research are as follows: (1) We constructed an attribute-based searchable encryption algorithm (ABSE) and combined it with blockchain and a Bloom filter to propose a fair and reliable searchable encryption scheme. While the DO stores the data indices on the SESP, some additional auxiliary information used for verification is uploaded to the blockchain. In the event of a dispute between a DU and the SESP, the blockchain arbitrates, and the dishonest participant is punished financially. (2) BFR-SE supports users' multikeyword search over ciphertexts.
By utilizing ABSE, the DO realizes fine-grained access control over data search, which means that only the users whose attributes satisfy the specified policy can search and obtain the correct search results. (3) Unlike other blockchain-based searchable encryption schemes, BFR-SE only stores a small amount of auxiliary information on-chain and performs arbitration only when disputes occur, which dramatically saves on-chain storage and computing resources. (4) We simulated and implemented BFR-SE on the EOS blockchain and show the implementation details of the smart contracts and algorithms. Together with the security analysis, this proves that our scheme is feasible. (5) We used six MacBook Pros to build an EOS private chain in a laboratory environment and simulated our scheme. The storage and computing overhead proves that BFR-SE is practical and has good performance. The rest of this paper is organized as follows: Section 2 reviews related works. Section 3 reviews some preliminary knowledge used throughout this paper. Section 4 gives an overview of our scheme. Section 5 describes the specific implementation details. In Section 6, we analyze the security and performance. Finally, we present the conclusion and future directions.
Related Works

Verifiable Searchable Encryption.
To ensure the reliability of searchable encryption and prevent the cloud server from returning partial or even wrong search results, users need the ability to verify the correctness of search results. As early as 2012, Chai and Gong [17] proposed a verifiable keyword search scheme in which the cloud server must prove that the returned results are correct. Kurosawa and Ohtaki [18] proposed the first UC-secure verifiable symmetric searchable encryption, which can verify whether the search results have been modified or deleted; the computational cost of verification grows linearly with the number of files. Zhu et al. [19] constructed a verifiable fuzzy keyword search scheme supporting dynamic data using a Bloom filter and a locality-sensitive hash function. Single-keyword verifiable searchable encryption returns many irrelevant results, wasting transmission bandwidth and computing resources, so the verifiable searchable encryption proposed by Azraoui et al. [20] supports multikeyword or combined search. However, the above verifiable searchable encryption schemes are only suitable for a small number of users, and it is challenging to meet users' dynamic requirements in the cloud environment: a growing number of users increases the burden of key management and prevents fine-grained access control. In 2014, Zheng et al. [21] proposed a novel cryptographic primitive named verifiable attribute-based keyword search. This primitive allows the DO to control the search and outsource his encrypted data to the cloud server under an access policy. Simultaneously, it allows legitimate users to outsource the (usually expensive) search operation to the cloud server and verify whether the cloud server performs it loyally. Ameri et al. [22] combined hierarchical identity-based multidesignated verifier signature (HIB-MDVS), hierarchical identity-based broadcast encryption (HIBBE), and Bloom filters to propose a generic construction for verifiable attribute-based keyword search. The VBKS scheme proposed by Sun et al. [23] realizes the revocation of user attributes and utilizes proxy reencryption and lazy reencryption to transfer the heavy update work during attribute revocation to a semitrusted cloud server while supporting multikeyword search.
The research on verifiable searchable encryption reviewed above enables users to verify the correctness and integrity of search results. However, for a practical searchable encryption scheme, this is far from enough: once a dishonest server is detected, it cannot be punished without a trusted third-party organization, so such schemes cannot be genuinely reliable.
Blockchain-Based Searchable Encryption.
In recent years, some researchers have utilized blockchain to solve the fairness problem in searchable encryption. In 2017, Li et al. [24] used blockchain to construct symmetric searchable encryption (SSE-using-BC). In their scheme, users publicly store all data on Bitcoin through transactions; if a participant does not execute honestly, he loses his BTC. In their subsequent work [25], they improved the scheme and adopted the Fabric blockchain, which significantly improved the performance. Hu et al. [26] explored the potential of the Ethereum blockchain and constructed a decentralized, privacy-protected search model. The scheme designs a financially fair smart contract to replace the centralized server so that all participants are treated equally and motivated to perform correct operations. Cai et al. [27] also used a smart contract to record encrypted search records on the blockchain and designed a fair protocol to deal with disputes and payment issues. They used a dynamic, efficient searchable encryption scheme, which retains the search capability and incentivizes the service provider to make a real effort. Tang [28] extended the original searchable encryption, storing some necessary information on-chain, where the blockchain serves only a judicial function; if there is no dispute, few operations are performed on-chain, reducing the blockchain's burden. Chen et al. [29] stored the indices and complex logical structure of EHRs on the blockchain. They believed that only by utilizing blockchain for propagation can the data owner have complete control over his data; the blockchain ensures the integrity, unforgeability, and traceability of the indices. Jiang et al. [30] proposed a Bloom filter-enabled multikeyword search protocol with enhanced efficiency and privacy preservation. In the protocol, a low-frequency keyword is selected using the Bloom filter to filter the database when performing a multikeyword search.
In summary, although the above blockchain-based searchable encryption schemes can solve the fairness problem in the payment process, they still have some shortcomings: (1) These schemes combine blockchain with symmetric searchable encryption, which only fits a one-to-one scenario; they are difficult to scale to a large number of users and to the dynamic requirements of a cloud environment, let alone fine-grained access control. (2) The main idea of these schemes is to store index information of the encrypted data on-chain. Although encrypted, symmetric searchable encryption is generally a deterministic function, so repeated searches for the same keyword are noticeable; this allows statistics to accumulate, making it possible to infer some private information. (3) Both file storage and search execution are processed on-chain, which increases the storage and computing overhead of the blockchain. Compared with the traditional approach, because the blockchain requires parallel storage and computation by multiple miners, the waste of resources is bound to become noticeable. Some schemes move this part off-chain but introduce functional defects, such as requiring extensive offline communication between participants.
Preliminary
3.1. Bilinear Pairing. Let $G_0$ and $G_1$ be cyclic groups of order $p$ and $g$ be a generator of $G_0$. We call $e : G_0 \times G_0 \rightarrow G_1$ a bilinear pairing if it is a map with the following properties: (1) Bilinear: for all $g_1, g_2 \in G_0$ and $a, b \in Z_p$, $e(g_1^a, g_2^b) = e(g_1, g_2)^{ab}$. (2) Nondegenerate: there exists $g_0 \in G_0$ such that $e(g_0, g_0) \neq 1$. (3) Computable: there is an efficient algorithm to compute $e(g_1, g_2)$ for all $g_1, g_2$.

3.2. Linear Secret Sharing Scheme (LSSS). Let $P = \{P_1, \dots, P_n\}$ be a set of participants and $(A, \rho)$ be an access structure, where $A$ is an $l \times k$ matrix and $\rho$ maps its rows to participants. An LSSS is composed of two polynomial-time algorithms: (1) $\mathrm{share}((A, \rho), s)$: to share a secret value $s$, it selects a random vector $\vec{v} = (s, v_2, \dots, v_k)$; with $A_i$ the $i$th row of $A$, the secret share $\sigma_i = A_i \cdot \vec{v}$ belongs to party $\rho(i)$. (2) $\mathrm{recover}(\omega, \{\sigma_i\}_{i \in \omega})$: it takes an authorized set $\omega$ and the corresponding secret shares as inputs. If $L \subseteq \{i \mid \rho(i) \in \omega\}$ satisfies the access structure, a set of recovery coefficients $\{\mu_i\}_{i \in L}$ can be computed such that $\sum_{i \in L} \mu_i \sigma_i = s$.

3.3. Bloom Filter. A Bloom filter is a space-efficient probabilistic data structure, proposed by Burton Howard Bloom in 1970 [31], that trades a small error rate for space savings and query efficiency. A standard $l_{BF}$-bit Bloom filter includes a vector $V$ of length $l_{BF}$, all bits of which are initialized to 0, and $k$ independent hash functions $\{h'_1, \dots, h'_k\}$, each uniformly distributed over $[0, l_{BF} - 1]$. For each element $w_i$ $(1 \le i \le n)$ in the set $W = \{w_1, \dots, w_n\}$, the positions $h'_j(w_i)$ $(1 \le j \le k)$ in the vector are set to 1. To judge whether an element $w$ is in the set $W$, it is only necessary to check whether all positions $h'_j(w)$ in $V$ are 1. If not, then certainly $w \notin W$; otherwise, with high probability $w \in W$. (A high probability means that there is a nonzero false-positive rate, but this rate can be minimized by appropriately setting the values of $l_{BF}$ and $k$.) A Bloom filter is composed of two algorithms: (1) $\mathrm{BFGen}(\{h'_1, \dots, h'_k\}, \{w_1, \dots, w_n\}) \rightarrow BF$: this algorithm generates an $l_{BF}$-bit Bloom filter by hashing the data set $W = \{w_1, \dots, w_n\}$ with $\{h'_1, \dots, h'_k\}$. (2) $\mathrm{BFVerify}(BF, w, \{h'_1, \dots, h'_k\}) \rightarrow \{0, 1\}$: this algorithm verifies whether the element $w$ is in the set $W$; if it returns 1, then $w \in W$ with high probability, otherwise $w \notin W$.

3.4. Blockchain. The blockchain concept originated from Nakamoto's Bitcoin white paper [16], whose foundation is cryptography and P2P networks. It organizes data into blocks with a specific structure and links these blocks into a chain in chronological order. Cryptography and consensus mechanisms together ensure the security and unforgeability of the data. In short, as the underlying technology of cryptocurrencies like Bitcoin, the blockchain is a trusted ledger with distributed computing capabilities that can process business credibly without a third-party organization.
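To make the BFGen/BFVerify interface of Section 3.3 concrete, here is a minimal Python sketch (illustrative, not the paper's implementation); it instantiates the $k$ independent hash functions $\{h'_1, \dots, h'_k\}$ by salting SHA-256 with the function index, which is one common construction:

import hashlib

class BloomFilter:
    def __init__(self, l_bf: int, k: int):
        self.l_bf, self.k = l_bf, k
        self.bits = [0] * l_bf          # the l_BF-bit vector V, initialized to 0

    def _positions(self, w: bytes):
        # Derive k independent hash functions h'_j by salting SHA-256 with j.
        for j in range(self.k):
            digest = hashlib.sha256(j.to_bytes(4, "big") + w).digest()
            yield int.from_bytes(digest, "big") % self.l_bf

    def add(self, w: bytes):            # part of BFGen: insert one element of W
        for pos in self._positions(w):
            self.bits[pos] = 1

    def verify(self, w: bytes) -> bool:  # BFVerify: True => probably in W, False => surely not
        return all(self.bits[pos] for pos in self._positions(w))

def bf_gen(words, l_bf=1024, k=7):      # BFGen({h'_j}, {w_1..w_n}) -> BF
    bf = BloomFilter(l_bf, k)
    for w in words:
        bf.add(w)
    return bf

bf = bf_gen([b"alpha", b"beta"])
assert bf.verify(b"alpha") and not bf.verify(b"gamma")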
3.5. Smart Contract. Initially, when it came to blockchain, the only well-known applications were cryptocurrencies such as Bitcoin and Litecoin. What brought a qualitative change was that in 2013 Vitalik Buterin established Ethereum, the first public chain platform with a built-in Turing-complete language [32], and introduced smart contracts to the blockchain. Szabo defined a smart contract as "a computerized transaction protocol that executes the terms of a contract" [33]. A smart contract on the blockchain is a piece of program code stored on-chain that can be executed securely and reliably. On the one hand, the blockchain can utilize programmable smart contracts to implement complex business logic; on the other hand, the blockchain provides a trusted environment for executing smart contracts. The operating mechanism of the smart contract in the blockchain is shown in Figure 1. As shown in the figure, the blockchain can be seen as a state machine triggered by transactions, and the ledger is a public world state starting from the Genesis Block. Users can create a transaction and broadcast it to the blockchain network from any node. All block producers perform the corresponding operations after receiving the transaction, and the consensus mechanism makes all nodes finally reach a consistent result and update the world state.
Blockchain provides the following support for the execution of smart contract on-chain: (i) Public status: every participant can inspect the smart contract's current world state on the public ledger (ii) Timestamp server: the block height can be seen as a trusted timestamp that never stops (iii) Trusted propagation channel: the sender can utilize the blockchain to spread the message, and the receiver will reliably receive the message shortly.
The delivery traces will be recorded on-chain for auditing, and the records are credible and cannot be tampered with by anyone.

3.6. Transactions of EOS. Account, address, and transaction are three essential components of the EOS blockchain [34]. Each user has an account that corresponds to multiple ECDSA key pairs, denoted $(pk, sk)$, and each key pair represents a different operation permission of the account. The private and public keys are used by users to sign and verify transactions. Our definition of a transaction is consistent with our previous work [35,36]:

$Tx = (\mathrm{Ref\_block}, t, \{\mathrm{Action}\}, \mathrm{Sig}_U(\mathrm{Chain\_ID}, Tx))$, with $\mathrm{Action} = (\mathrm{Code}, \mathrm{Name}, \mathrm{Auth}_U, \mathrm{Data})$,

where Ref_block refers to the height and id of a recently generated block, which prevents the transaction from being packaged on a fork chain, and $t$ is the expiration time of the transaction. $\mathrm{Sig}_U(\mathrm{Chain\_ID}, Tx)$ is the signature of the sender. An Action is an operation performed by the transaction, in which Code is the name of the smart contract, Name is the method to be invoked in the smart contract, Auth_U is used to verify whether the sender has the authority to call the method, and Data contains the parameters. One transaction may contain multiple actions. Smart contracts can also send actions to each other to call methods of other contracts, which is called inline communication; the corresponding execution authority is the same as that of the original transaction.
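As an illustration of the field layout just described, the following Python sketch models a transaction and its actions; the class and field names are ours and only mirror the prose, not the actual EOS wire format:

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Action:
    code: str        # name of the smart contract to call
    name: str        # method to invoke in that contract
    auth: str        # account/permission authorizing the call (Auth_U)
    data: bytes      # serialized parameters

@dataclass
class Transaction:
    ref_block: Tuple[int, str]   # (height, id) of a recent block, pins Tx to the main chain
    expiration: int              # expiration time t of the transaction
    actions: List[Action] = field(default_factory=list)  # one Tx may carry several actions
    signature: bytes = b""       # Sig_U(Chain_ID, Tx) by the sender's private key

tx = Transaction(ref_block=(1024, "abcd"), expiration=1_620_000_000,
                 actions=[Action("secontract", "SearchRequest", "alice@active", b"...")])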
3.7. Data Persistence of EOS.
After a smart contract finishes executing, the occupied memory is released and all variable data in the program is lost, so it is necessary to persist data in the smart contract. In the smart contracts of Ethereum, data can only be stored as key-value pairs, which makes it difficult to meet more complex requirements. EOS imitates the multi-index containers of the Boost library and provides a C++ class, eosio::multi_index (from now on referred to as multi_index). Each multi_index can be regarded as a table in a traditional database: each row of the table stores an object, and the object's attributes can be any C++ data type. Therefore, the tables constructed by multi_index in EOS are no less flexible than traditional databases. A significant feature of multi_index is that, besides the primary key serving as the main index, up to 16 secondary indices can be defined. Users can obtain any of these indices and use the emplace, erase, modify, and find functions of the index to insert, delete, update, and select data.
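For readers unfamiliar with eosio::multi_index, the toy Python analogue below mimics its emplace/find/modify/erase interface with one primary key and one secondary index; all names here are hypothetical stand-ins for illustration, not the eosio API:

class MultiIndexTable:
    """Toy analogue of eosio::multi_index: one primary key plus one secondary index."""
    def __init__(self, primary: str, secondary: str):
        self.primary, self.secondary = primary, secondary
        self.rows = {}        # primary key -> row (a dict)
        self.sec_idx = {}     # secondary key -> set of primary keys

    def emplace(self, row: dict):
        pk = row[self.primary]
        assert pk not in self.rows, "primary key must be unique"
        self.rows[pk] = row
        self.sec_idx.setdefault(row[self.secondary], set()).add(pk)

    def find(self, pk):
        return self.rows.get(pk)           # None plays the role of table.end()

    def modify(self, pk, **changes):
        row = self.rows[pk]
        old_sk = row[self.secondary]
        row.update(changes)
        if row[self.secondary] != old_sk:  # keep the secondary index consistent
            self.sec_idx[old_sk].discard(pk)
            self.sec_idx.setdefault(row[self.secondary], set()).add(pk)

    def erase(self, pk):
        row = self.rows.pop(pk)
        self.sec_idx[row[self.secondary]].discard(pk)

users = MultiIndexTable(primary="account", secondary="pk_com")
users.emplace({"account": "alice", "pk_com": "PK1", "csk": b""})
assert users.find("alice")["pk_com"] == "PK1"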
Overview of Proposed Scheme
This section will give an overview of our proposed scheme, including the system model and scheme design. The meanings of the symbols and abbreviations used in this paper are shown in Table 1.
System Model of BFR-SE.
The scheme proposed in this paper is composed of four components: the data owner (DO), the data user (DU), the searchable encryption service provider (SESP), and the blockchain. After DO extracts the keyword set from the outsourced data set, the keywords and their corresponding index structure are encrypted and uploaded to the SESP to prevent privacy disclosure. DO distributes keys to DUs through the blockchain, and only the DUs whose attributes satisfy the access policy can search and obtain the original data. A DU uses his private key to generate a search token from the keywords he wants to retrieve. According to the search token provided by the DU, SESP performs the complex search computation, returns the search results to the DU, and obtains the revenue. The traces and additional evidence of each participant are recorded on the blockchain and cannot be destroyed or denied. DUs pay SESP for the service they use. If SESP does not provide the correct result before the predetermined block height, the DU can apply for arbitration, and the blockchain makes a judgment according to the auxiliary information and the additional on-chain evidence produced during the search; the charged fee is then returned to the DU as compensation, together with a penalty on SESP. The specific functions and responsibilities of the components are as follows: (1) DO: the owner of the IoT devices and thus of the data. DO is responsible for the system's initialization, including creating and deploying the smart contracts in the scheme. DO generates and distributes private keys for registered DUs according to their attributes. Besides, DO extracts keywords from the outsourced data files, generates the corresponding indices, and sets a reasonable access policy for the indices. DO is honest by default. The blockchain, for its part, conducts arbitration in the event of a dispute according to the on-chain information, and malicious parties can be punished economically. In the absence of a third-party authoritative and trusted organization, the blockchain is the cornerstone of trust in the scheme. Additionally, the blockchain provides a reliable broadcast channel that every participant can use for information dissemination. The system model of our proposed scheme is shown in Figure 2.
Our scheme's searchable encryption algorithm is inspired by the scheme named VABKS (verifiable attribute-based keyword search) proposed by Zheng et al. [21]; we have optimized and extended it to support multikeyword search. The detailed description of each step in the flowchart is as follows: (i) DO creates and deploys smart contracts on the blockchain. BFR-SE includes two smart contracts: PMContract and SEContract. (ii) DO generates the system master key and public parameters, as well as a pair of signature keys. Then, DO publishes the public parameters and the public signature key to the smart contract, while the system master key and the private signature key are kept secret. (iii) SESP registers in the SEContract, and a definite amount of deposit is required when registering; if SESP behaves fraudulently, part of the deposit is deducted as punishment. (iv) DU applies for registration in the PMContract, providing his EOS account and an ECC public key, where the EOS account is used for receiving compensation if SESP is dishonest. (v) DO generates the attribute key for the DU according to his attribute set, then encrypts it with the DU's public key and broadcasts it to the blockchain; the ciphertext of the attribute key is stored in the PMContract. (vi) DU obtains the ciphertext of his attribute key from PMContract and decrypts it locally with the corresponding private key. (vii) DO encrypts his data files and outsources them (to a cloud server or IPFS, which is beyond the scope of this paper); the returned address and the corresponding decryption key identify the data.

(1) Initialization phase

The main work of the initialization phase is that DO creates smart contracts and deploys them on the blockchain; then, DO generates the system master key MSK and the public parameters PK locally. The core algorithm of this phase is $\mathrm{Setup}(1^\lambda) \rightarrow (MSK, PK, Sk_{sig}, Pk_{sig})$, which is run by DO locally. The algorithm's input is a security parameter $1^\lambda$, and the outputs are MSK, PK, and a pair of signature keys. After that, DO publishes PK and $Pk_{sig}$ to the smart contracts and keeps MSK and $Sk_{sig}$ secret locally. The corresponding steps in the system flowchart are ① and ②.
(2) Apply and Register phase

The primary work of the apply and register phase is to complete the registration of SESP and DUs, including DO distributing a private attribute key to each DU. SESP must transfer a certain amount of system tokens to SEContract as a deposit when applying for registration; if the remaining deposit is less than the fine for misbehavior, a DU can choose not to use the search service. When a DU applies for registration, he provides a public key of ECC. After that, DO generates a private attribute key for the DU according to his attribute set. The core algorithm is $\mathrm{KeyGen}(MSK, PK, \omega) \rightarrow Sk_\omega$, which is run by DO locally; it takes MSK, PK, and the attribute set $\omega$ of the DU as inputs and outputs the private attribute key $Sk_\omega$ of the DU. Then, DO uses the public key provided by the DU at registration to encrypt $Sk_\omega$, obtaining the ciphertext $CSk_\omega = \varepsilon.\mathrm{Enc}_{Pk_{com}}(Sk_\omega)$. DO uploads $CSk_\omega$ to PMContract so that the DU can securely obtain and decrypt it to recover his private attribute key $Sk_\omega$. The corresponding steps in the system flowchart are ③, ④, ⑤, and ⑥.
(3) Build index phase
The main work of the build index phase is that DO encrypts the sharing data and outsources it. Taking IPFS as an example, we can use the returned address and key to identify the data. After that, DO extracts the keyword set from the sharing data, builds indices for all the sharing data with the same keyword set, and generates the auxiliary information. DO sends the indices to SESP and uploads the auxiliary information to SEContract. The corresponding steps in the system flowchart are ⑦ and ⑧. The phase consists of the following three subalgorithms: (a) $\mathrm{EncryFile}(\{F_\eta\}_{1\le\eta\le d}) \rightarrow \{FID_\eta\}_{1\le\eta\le d}$: This algorithm is run by DO. For each element of $\{F_\eta\}_{1\le\eta\le d}$, where $d$ denotes the number of sharing data files, DO encrypts it with the key $key_\eta$ and outsources it. Taking IPFS as an example, the returned address of $F_\eta$ is $href_\eta$, and the sharing data's identity is $FID_\eta = \mathrm{IDGen}(key_\eta, href_\eta)$. The algorithm's final output is the identity set $FID = \{FID_\eta\}_{1\le\eta\le d}$. IDGen is an encryption function module defined by DO; it is not the focus of this paper, so the flowchart does not present it.
(b) $\mathrm{IndexGen}(KW, FID) \rightarrow I$: DO runs this algorithm; its main work is to establish the indices linking the sharing data and the relevant keywords. First, DO extracts the keywords $KW_\eta$ for each $F_\eta$ in $\{F_\eta\}_{1\le\eta\le d}$; then $KW = KW_1 \cup KW_2 \cup \dots \cup KW_d$. For every $KW_\tau = \{kw_1, \dots, kw_m\}$ with $KW_\tau \subseteq KW$ ($KW_\tau \neq \emptyset$, $1 \le \tau \le n$), if the corresponding data set is $FID_\tau$, all the elements of $FID_\tau$ are used as leaf nodes to generate a Merkle tree whose root is $MerkleRoot_\tau$. We define $I_\tau = (KW_\tau, MerkleRoot_\tau, FID_\tau)$, and the final index of the keyword set $KW$ matching the sharing data $FID$ is $I = \{I_\tau\}_{1\le\tau\le n}$, where $n$ is the number of index rows.
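The Merkle-tree step of IndexGen is standard. Below is a minimal Python sketch, assuming SHA-256 as the tree hash and duplication of the last node on odd-sized levels (a common convention that the paper does not pin down):

import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a Merkle tree whose leaves are the file identities FID_eta."""
    level = [_h(leaf) for leaf in leaves]
    if not level:
        return b""
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# One index row I_tau = (KW_tau, MerkleRoot_tau, FID_tau):
fid_tau = [b"FID_1", b"FID_2", b"FID_3", b"FID_4"]
i_tau = ({"kw_1", "kw_2"}, merkle_root(fid_tau), fid_tau)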
(c) $\mathrm{Encrypt}(I, \mathcal{P}, PK) \rightarrow (CI, AI)$: DO runs this algorithm; its primary work is to encrypt the indices under a specific access policy, producing the ciphertext of the indices $CI$ and the auxiliary information $AI$. The inputs are the indices $I$, the access policy $\mathcal{P}$, and the public system parameters $PK$; the outputs are $CI$ and $AI$. DO encrypts the keywords of each $I_\tau$ in $I$ to get the keyword ciphertext $CKW_\tau$, and signs $CKW_\tau$ and $MerkleRoot_\tau$ with his private key to ensure the integrity of the index. DO uploads the ciphertext $CI = \{CI_\tau\}_{1\le\tau\le n}$ to the SESP, and the corresponding auxiliary information $AI$ is uploaded to SEContract. As an example with four files, the data structure of an index is shown in Figure 3.
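To picture how the auxiliary information is assembled, the sketch below composes a per-index digest and inserts it into the auxiliary Bloom filter; it reuses the BloomFilter/bf_gen sketch from the Preliminary section, models $H_2$ with salted SHA-256, and assumes a canonical keyword ordering, all of which are illustrative assumptions rather than the paper's exact construction:

import hashlib

def h2(w: str) -> bytes:
    # Stand-in for H_2 : {0,1}* -> Z_p; a real implementation reduces the digest mod p.
    return hashlib.sha256(b"H2|" + w.encode()).digest()

def hkw(keywords) -> bytes:
    """HKW_tau = SHA256(H_2(W_1) || ... || H_2(W_m)), keywords taken in a fixed order."""
    return hashlib.sha256(b"".join(h2(w) for w in sorted(keywords))).digest()

# DO inserts the HKW_tau of every index row into the auxiliary Bloom filter AI,
# so that the chain can later run ExistenceVerify against a null search result.
ai = bf_gen([hkw({"covid", "ventilation"}), hkw({"blockchain"})])  # bf_gen from Section 3
assert ai.verify(hkw({"blockchain"}))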
(4) Token generation phase
The main work of the token generation phase is that DU uses his private attribute key to call the trapdoor function to generate a search token and a commitment for the searched keywords. $\mathrm{TokenGen}(Sk_\omega, KW_{search}) \rightarrow (TOK, COMM)$ is the core algorithm; its inputs are the private attribute key $Sk_\omega$ of the DU and the keyword set $KW_{search}$ to be retrieved, and its outputs are the search token $TOK$ and the commitment $COMM$. The commitment is used to prove, during arbitration, that the DU really searched for the keyword set he provided. After that, the DU uploads $TOK$ and $COMM$ to SEContract and pays the fee simultaneously. The corresponding step in the system flowchart is ⑨.
(5) Search phase
The search phase's primary work is to use the search token to retrieve the ciphertext of the indices uploaded by DO and return the successfully matching results. The core algorithm of this phase is $\mathrm{TEST}(TOK, CI_\tau) \rightarrow \{0, 1\}$, which is run by SESP locally. Its inputs are the search token $TOK$ and the index ciphertext $CI_\tau$, and its output is 0 or 1. If the output is 1, the match succeeds, and the search result is $CI_{result}$. SESP must upload $CI_{result}$ to SEContract before the preagreed time; otherwise, the DU can claim back the charged fee. The corresponding steps in the system flowchart are ⑩, ⑪, ⑫, and ⑬.

(6) Verification phase

The verification phase's primary work is to verify the search results returned by SESP according to the results and the auxiliary information uploaded by DO, including verification of existence, integrity, and correctness. If the verification concludes that SESP has done evil, an economic punishment is imposed on SESP. This phase is executed by the blockchain, corresponding to step ⑭ in the flowchart. It can be subdivided into three subphases as follows: (a) $\mathrm{ExistenceVerify}(AI, random, \{H_2(W'_j)\}_{1\le j\le m}, COMM) \rightarrow \{0, 1\}$: When the search result returned by SESP is null, this algorithm verifies the existence of the sharing data searched by the DU. Its inputs are the auxiliary information $AI$, a random number related to the search token, the keyword digests $\{H_2(W'_j)\}_{1\le j\le m}$, and the commitment corresponding to the search request. The output is 0 or 1.
(b) $\mathrm{IntegrityVerify}(CI_{result}) \rightarrow \{0, 1\}$: This algorithm verifies the integrity of the search results returned by SESP and prevents SESP from returning only partial or even forged results. Its input is the search result $CI_{result}$, and its output is 0 or 1.
(c) $\mathrm{CorrectnessVerify}(TOK, CI_{result}) \rightarrow \{0, 1\}$: This algorithm verifies the correctness of the search results returned by SESP and prevents SESP from returning wrong results. Its inputs are the search token $TOK$ and the search result $CI_{result}$, and its output is 0 or 1.
(7) Withdraw phase
The main work of this phase is that each participant withdraws his coins from the smart contract. SESP's coins include the deposit paid at registration and the DUs' payments for using the search service; the DU's coins are mainly compensation, which comes from penalties on SESP. It should be noted that each fee has a freeze period, during which the DU can apply for arbitration on the blockchain; only after the freeze period has passed can SESP withdraw the fee from the contract. The corresponding step in the system flowchart is ⑮.
Implementation Details of Proposed Scheme
To achieve our goal, we constructed an attribute-based searchable encryption algorithm that supports multikeyword search and combined it with the EOS blockchain platform to realize our fair and reliable scheme. This section elaborates the details of our smart contracts deployed on EOS and the concrete construction of BFR-SE.
Smart Contract Design.
In order to make the logic clearer, we divide the smart contract in the scheme into two parts, PMContract and SEContract. We use _self to represent the account of the smart contract itself and _self.asset to represent the balance in the contract. Let require_auth be a function that indicates which account's permission is needed to continue. We describe the two smart contracts in detail in this section.
Participant Management Contract (PMContract). The PMContract is composed of five interfaces: SetSPK, Register, GetPK, SetSK, and GetSK. We initialize PMContract as follows: let the three-tuple $(Account_{user}, Pk_{com}, CSk_\omega)$ denote a DU and create a multi_index named table_user, in which $Account_{user}$ is the EOS account of the DU, $Pk_{com}$ is a public key of the DU, and $CSk_\omega$ is the encrypted private attribute key of the DU. Let $Account_{user}$ be the primary key of table_user, whose corresponding index is account_idx. Let PK denote the public system parameters.
(1) SetSPK: when PMContract receives action (PMContract, SetSPK, Auth, (pk)), this function is triggered; it can only be invoked by DO to set and update the public system parameters. (2) Register: when PMContract receives action (PMContract, Register, Auth, (A_user, Pk_com)), this function is triggered; it is invoked by a DU to apply for registration in the system, and its detail can be seen in Algorithm 1. (3) GetPK: when PMContract receives action (PMContract, GetPK, Auth, (A_user)), this function is triggered. (4) SetSK: when PMContract receives action (PMContract, SetSK, Auth, (A_user)), this function is triggered; it is used by DO to distribute private attribute keys to DUs, and its detail can be seen in Algorithm 2. (5) GetSK: when PMContract receives action (PMContract, GetSK, Auth, (A_user)), this function is triggered.
Searchable Encryption Contract (SEContract). We initialize SEContract as follows: let the six-tuple (Account_user, SerialNum, TOK, COMM, Height, Coin) denote a search request initiated by a DU and create a multi_index named search_table, in which Account_user is the account of the DU, SerialNum is the serial number of the search request, TOK is the search token, COMM is the commitment of the DU for the request, Height is the block height when the request is initiated, and Coin is the fee paid by the user for the service. Let Account_user be the primary key of search_table, with corresponding index search_idx. Let the three-tuple (Account_user, SerialNum, CI_result) denote a result returned by SESP and create a multi_index named result_table, in which Account_user and SerialNum match the search request in search_table and CI_result denotes the search result. Let Account_user be the primary key of result_table, with corresponding index result_idx. Let Account_sesp be the account of SESP and Deposit be the balance of SESP in the contract. Let d represent the fee the user pays for each search, which is also the amount of the penalty when SESP does evil. Let round_height represent the time allotted to the search round and to the verification round, BF be the auxiliary information (a Bloom filter), and Pk_sig be the public signature key of DO. The SEContract is composed of the following interfaces:
(1) SetPK: when SEContract receives action (SEContract, SetPK, Auth, (pk)), this function is triggered. (2) SetAI: when SEContract receives action (SEContract, SetAI, Auth, (AI)), this function is triggered. (3) Deposit: when SEContract receives action (SEContract, Deposit, Auth, (A_sesp, coin)), this function is triggered; its detail can be seen in Algorithm 3. (4) SearchRequest: when SEContract receives action (SEContract, SearchRequest, Auth, (A_user, Sn, TOK, COMM)), this function is triggered; its detail can be seen in Algorithm 4. (5) SendResult: when SEContract receives action (SEContract, SendResult, Auth, (A_user, Sn, CI_result)), this function is triggered; it can only be invoked by SESP, and its detail can be seen in Algorithm 5. (6) ExistenceVerify: when SEContract receives action (SEContract, ExistenceVerify, Auth, (A_user, random, {H_2(W'_j)}_{1≤j≤m})), this function is triggered; its detail can be seen in Algorithm 6. (7) IntegrityVerify: when SEContract receives action (SEContract, IntegrityVerify, Auth, (A_user)), this function is triggered; its detail can be seen in Algorithm 7. (8) CorrectnessVerify: when SEContract receives action (SEContract, CorrectnessVerify, Auth, (A_user)), this function is triggered; its detail can be seen in Algorithm 8. (9) GetFeeSESP: when SEContract receives action (SEContract, GetFeeSESP, Auth, (A_user)), this function is triggered; its detail can be seen in Algorithm 9. (10) IsVerifyRound: a private function that can only be called internally by the contract itself; its detail can be seen in Algorithm 10. (11) IsResultReady: a private function that can only be called internally by the contract itself; its detail can be seen in Algorithm 11. (12) Compensate: a private function that can only be called internally by the contract itself; its detail can be seen in Algorithm 12. (13) CommitVerify: a private function that can only be called internally by the contract itself. The core of this function is the TEST algorithm of the search phase; for the detailed implementation, see the concrete construction of BFR-SE in the next section.

For reference, Algorithm 1 (the Register interface of PMContract) reads as follows; the condition distinguishes an already-registered account (update) from a first registration (insert):

Algorithm 1: Register.
Input: A_user, Pk_com. Output: void.
require_auth(A_user)
u = account_idx.find(A_user)
if u != null then
    u.Pk_com = Pk_com
    account_idx.modify(u)      // existing account: update its public key
else
    u.Account_user = A_user
    u.Pk_com = Pk_com
    account_idx.emplace(u)     // first registration: insert a new row
end
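The economic core of SEContract (the deposit, the per-search escrow, compensation, and the freeze period before SESP may withdraw a fee) can be summarized behaviorally. The Python sketch below is our reading of the prose, not the contract's C++ code; in particular, it assumes the freeze lasts until both the search round and the verification round (2 × round_height) have passed:

class SEContractSketch:
    def __init__(self, d: int, round_height: int):
        self.d = d                      # per-search fee, also the penalty amount
        self.round_height = round_height
        self.deposit = 0                # SESP's registration deposit held by the contract
        self.escrow = {}                # (user, sn) -> {"fee", "height", "result"}

    def register_sesp(self, coin: int):
        assert coin >= self.d, "deposit must at least cover one penalty"
        self.deposit += coin

    def search_request(self, user, sn, height):
        # The DU transfers the fee d into escrow when initiating a search.
        self.escrow[(user, sn)] = {"fee": self.d, "height": height, "result": None}

    def send_result(self, user, sn, result, height):
        req = self.escrow[(user, sn)]
        assert height <= req["height"] + self.round_height, "past the search round"
        req["result"] = result

    def compensate(self, user, sn):
        """Arbitration found SESP dishonest: refund the fee plus a penalty from the deposit."""
        req = self.escrow.pop((user, sn))
        self.deposit -= self.d
        return req["fee"] + self.d      # paid out to the DU's EOS account

    def get_fee_sesp(self, user, sn, height):
        """SESP withdraws a fee only after the verification round (freeze) has passed."""
        req = self.escrow[(user, sn)]
        assert req["result"] is not None, "no result was ever delivered"
        assert height > req["height"] + 2 * self.round_height, "fee still frozen"
        return self.escrow.pop((user, sn))["fee"]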
Concrete Construction of BFR-SE.
This section shows the concrete construction of BFR-SE, including the algorithms executed in each phase and how each participant interacts with the EOS blockchain. The initialization is as follows: let $G_0$ and $G_1$ be cyclic groups of order $p$, and $g$ be a generator of $G_0$. Let $e : G_0 \times G_0 \rightarrow G_1$ be a bilinear pairing, $S = \{1, \dots, l\}$ be the set of all attributes, and $\{h'_1, \dots, h'_k\}$ be $k$ general and distinct hash functions. $H_1 : \{0,1\}^* \rightarrow G_0$ and $H_2 : \{0,1\}^* \rightarrow Z_p$ are two further hash functions.
(1) Setup

DO generates the public system parameters $PK$ and the system master key $MSK$. DO also randomly selects an ECDSA key pair, denoted $(Sk_{sig}, Pk_{sig})$, keeps $MSK$ and $Sk_{sig}$ secret, and sends two transactions to the blockchain to publish $PK$ and $Pk_{sig}$.

(2) $\mathrm{KeyGen}(MSK, \omega) \rightarrow Sk_\omega$

First, DO sends a transaction to the blockchain to obtain the DU's public key. Let the attribute set of the DU be $\omega \subseteq S$. DO randomly picks $t \leftarrow Z_p$ and computes $K_1 = g^{(ac-t)/b}$ and $K_2 = g^t$; for each $i \in \omega$, it computes $K_{3,i} = h_i^t$. The private attribute key of the DU is $Sk_\omega = (K_1, K_2, \{K_{3,i}\}_{i\in\omega})$. Then, DO encrypts it with the public key of the DU, $CSk_\omega = \varepsilon.\mathrm{Enc}_{Pk_{com}}(Sk_\omega)$, and sends a transaction to the blockchain to store $CSk_\omega$ in PMContract.

(3) $\mathrm{Encrypt}(I, (A_{l\times k}, \rho), PK) \rightarrow (CI, AI)$

For every $I_\tau \in I$ with $I_\tau = (KW_\tau, MerkleRoot_\tau, FID_\tau)$, DO randomly picks $r, s \leftarrow Z_p$ and computes $C_0 = g^{cr}$ and $C_1 = g^{bs}$. Let $(A_{l\times k}, \rho)$ be the access structure. DO randomly chooses $v_2, v_3, \dots, v_k \in Z_p$, sets $\vec{v} = (s, v_2, \dots, v_k)^T$, and for each $i = 1$ to $l$ calculates the share $\sigma_i = A_i \cdot \vec{v}$. Then, it randomly picks $r_1, \dots, r_l \in Z_p$ and performs the per-attribute calculations, yielding the keyword ciphertext $CKW_\tau$. DO signs the ciphertext of the index with his private signature key, and $CI_\tau$ denotes the ciphertext corresponding to $KW_\tau$. Set $HKW_\tau = \mathrm{SHA256}(H_2(W_1) \,\|\, \dots \,\|\, H_2(W_m))$, $W_j \in KW_\tau$, $1 \le j \le m$; these digests populate the Bloom filter that constitutes the auxiliary information $AI$. Finally, DO uploads the index ciphertext $CI = \{CI_\tau\}_{1\le\tau\le n}$ to SESP and uploads the auxiliary information $AI$ to the blockchain by sending a transaction. The core logic of ExistenceVerify (Algorithm 6), which later consumes $AI$, is:

if CommitVerify(A_user, random, {H_2(W'_j)}_{1≤j≤m}) == true then
    bool isExist = BFVerify(BF, SHA256(H_2(W'_1) || ... || H_2(W'_m)))
    if isExist == false then
        Compensate(A_user)
    end
end

(4) $\mathrm{TokenGen}(Sk_\omega, KW_{search}) \rightarrow (TOK, COMM)$

DU fetches $CSk_\omega$ from PMContract and decrypts it to get the private attribute key $Sk_\omega$. Then, DU randomly picks $\pi \leftarrow Z_p$. Let $m$ be the size of $KW_{search}$; DU computes the token components $tok_1, \dots, tok_4$ from $K_1$, $K_2$, $\pi$, and the hashes of the searched keywords. For each $i \in \omega$, DU computes $H_i = K_{3,i}^\pi = h_i^{t\pi}$ and sets $tok_5 = \{K_{3,i}^\pi\}_{i\in\omega} = \{h_i^{\pi t}\}_{i\in\omega}$; together these form the search token $TOK$. DU also calculates the commitment $COMM$ of the search request. Finally, DU picks $Sn \leftarrow Z_q$ at random as the serial number of the search request and sends the corresponding transaction to the blockchain.

(5) $\mathrm{TEST}(TOK, CI_\tau) \rightarrow \{0, 1\}$

After SESP receives $TOK$ from the DU, it compares each row of $CI$. First, it checks whether the number of keywords $m$ in $CI_\tau$ equals that in $TOK$; if they differ, it moves on to the next row.
Assuming that the attribute set of the DU satisfies the access policy, let $\mu_i$ be the recovery coefficient of the $i$th row of $A_{l\times k}$ and compute the corresponding pairing products. Then, determine whether the two test formulas, (4) and (5), are equal. If they are, the algorithm returns 1; otherwise, it returns 0.
If a matching row is found, then $CI_{result} = CI_\tau$; otherwise, $CI_{result} = null$. SESP sends the corresponding transaction to the blockchain.
After the contract receives the transaction, it first verifies the DU's previous commitment to prevent the DU from submitting a keyword set different from the one he committed to.

6.1. Analysis of ABSE. Our ABSE algorithm is constructed on the basis of VABKS [21], which we have extended to support multikeyword search. VABKS is proved to be secure, and the complete proof can be found in the security analysis of [21]; its security relies on the decisional linear assumption. We focus on the fairness and reliability of searchable encryption, and the security of ABSE is not the main contribution of this paper, so we mainly analyze the correctness of ABSE and briefly discuss its security and privacy.
(1) Correctness. Let $\mu_i$ be the recovery coefficient of the $i$th row of $A_{l\times k}$. If the attribute set $\omega$ of the DU satisfies the access policy $(A_{l\times k}, \rho)$, then $s$ can be recovered by calculating $\sum_{i\in\omega} \mu_i \sigma_i$, and hence $E = e(g, g)^{\pi t s}$.
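The recovery step $\sum_i \mu_i \sigma_i = s$ is ordinary linear algebra over $Z_p$. The toy Python sketch below shares and recovers a secret for a two-attribute AND policy; a real ABSE keeps the shares in the exponent, so this only illustrates the arithmetic:

import secrets

p = 2**127 - 1                      # a Mersenne prime standing in for the group order p

def share(A, s):
    """LSSS share: pick v = (s, v_2, ..., v_k); share sigma_i = <A_i, v> mod p."""
    k = len(A[0])
    v = [s] + [secrets.randbelow(p) for _ in range(k - 1)]
    return [sum(a * x for a, x in zip(row, v)) % p for row in A]

def recover(shares, mu):
    """recover(omega, {sigma_i}): sum_i mu_i * sigma_i = s mod p."""
    return sum(m * sig for m, sig in zip(mu, shares)) % p

# Toy policy "attr1 AND attr2": A_1 = (1, 1), A_2 = (0, -1); coefficients mu = (1, 1),
# since sigma_1 + sigma_2 = (s + v_2) + (-v_2) = s.
A = [[1, 1], [0, -1]]
s = secrets.randbelow(p)
sigma = share(A, s)
assert recover(sigma, [1, 1]) == s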
Then, if the keyword set $KW'$ in the search token is the same as the keyword set $KW$ in the index, the two sides of the test equation, built from $e(C_0, tok_1)$ and $e(tok_3, C_1)$ together with $E$, are equal, so TEST returns 1; the full derivation parallels that of VABKS [21]. (2) Security and Privacy. From a security point of view, all attribute-based cryptographic algorithms need to resist collusion attacks. In the ABSE used in BFR-SE, we pick a random number $t$ for each DU at the key generation phase, and the attribute-based part of the private key, $\{K_{3,i}\}_{i\in\omega}$, is bound to it, so different DUs cannot combine their respective attribute keys to launch a collusion attack.
From the perspective of privacy, DO encrypts his indices and stores them on SESP without revealing any information, and DO has fine-grained access control over the search function. For the DU, a fresh random number $\pi$ is used every time a search token is generated, so the tokens differ even if the DU searches for the same keywords multiple times. An adversary therefore cannot infer the DUs' privacy by collecting the traces of search requests.
Fairness and Reliability of BFR-SE
(1) Fairness. In this paper, we proposed a pay-per-use searchable encryption scheme. We believe that neither SESP nor DU is always credible, and the dishonest behavior of either party may cause economic disputes. In our scheme, a participant with substantial computing power must pay a certain amount of deposit before becoming an SESP. We divide each search into two rounds: a search round and a verification round. When a DU initiates a search request, he transfers the fee to the smart contract.
DUs can initiate a request for arbitration in the verification round when SESP returns no result, partial results, or incorrect results. The blockchain arbitrates the search results; if it determines that SESP acted dishonestly, SESP is subject to a financial penalty, and the fine compensates the DU. Although a DU may expose a little of his information when applying for arbitration, he will receive financial compensation.
For SESP, if it provides the correct results before the preagreed time, it can take its profit after the freeze period. When a DU initiates a search request, he commits to the set of keywords retrieved and stores the commitment on-chain, ensuring that the DU cannot submit a different keyword set during the verification round to defraud the SESP's deposit. Moreover, a fine can be imposed on DUs so that a DU cannot apply for arbitration frivolously.
In summary, BFR-SE is fair to both SESP and DU.
(2) Reliability. In our scheme, we draw on the idea of verifiable searchable encryption, in which the DU can verify the results returned by SESP from three aspects: existence, integrity, and correctness. Therefore, the reliability of verifiable searchable encryption is also available in our scheme. Unlike verifiable searchable encryption, we utilize the blockchain to introduce a reward and punishment mechanism. When a DU finds any problem with the results returned by SESP, he applies for arbitration during the verification round; if SESP is indeed dishonest, the blockchain punishes SESP financially and compensates the DU.
Therefore, BFR-SE is more reliable than previous schemes.
Verifiability of Search Results
(1) Existence. BFR-SE uses a Bloom filter to verify the existence of search results. DO stores the information corresponding to every keyword set in the indices into the Bloom filter. If the result returned by SESP is null, meaning that no data matches the search request, the DU can verify this using the searched keywords and the Bloom filter. If the verification result is 1, the searched keywords exist with high probability. Although there is a certain false-positive rate, the research in Ref. [37] shows that this rate is

$p = \left(1 - \left(1 - \frac{1}{l_{BF}}\right)^{kn}\right)^k \approx \left(1 - e^{-kn/l_{BF}}\right)^k.$

It can be seen that the false-positive rate can be reduced by setting the values of $l_{BF}$ and $k$; for example, when $k = (\ln 2)\, l_{BF}/n$, the minimum false-positive rate $(0.6185)^{l_{BF}/n}$ is obtained. Therefore, DUs can verify the existence of search results in BFR-SE.
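The bound is easy to check numerically; the short Python sketch below evaluates the false-positive rate and confirms that $k = (\ln 2)\, l_{BF}/n$ gives roughly $(0.6185)^{l_{BF}/n}$:

import math

def bloom_fpr(l_bf: int, n: int, k: int) -> float:
    # p ~ (1 - e^(-k*n / l_BF))^k, the standard Bloom-filter false-positive estimate
    return (1.0 - math.exp(-k * n / l_bf)) ** k

l_bf, n = 10_000, 1_000
k_opt = round(math.log(2) * l_bf / n)        # k = (ln 2) * l_BF / n, about 7 here
print(bloom_fpr(l_bf, n, k_opt))             # ~ 8.2e-3
print(0.6185 ** (l_bf / n))                  # ~ 8.2e-3, matching the stated minimum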
(2) Integrity. For each row of the indices in our scheme, DO uses his private key to sign the keywords and the MerkleRoot built from the data-related information as leaf nodes; each row of the uploaded indices contains this signature. DUs can use DO's public key to verify whether the keywords and the MerkleRoot have been tampered with. For the returned search results, the DU verifies the data's integrity by checking whether the MerkleRoot can be reconstructed: once any leaf-node information is destroyed, or SESP returns only partial results, the MerkleRoot cannot be reconstructed. Therefore, DUs can verify the integrity of search results in BFR-SE.
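Concretely, integrity checking reduces to recomputing the Merkle root over the returned file identities and comparing it with the root DO signed. The sketch below reuses merkle_root from the build-index sketch earlier and, as an assumption, elides the ECDSA signature check on (CKW_tau, MerkleRoot_tau):

def integrity_verify(returned_fids, signed_root: bytes) -> bool:
    """True iff the returned result set reconstructs DO's signed MerkleRoot.

    If SESP dropped or forged any FID_eta, the recomputed root cannot match.
    A full implementation would first verify DO's signature on
    (CKW_tau, MerkleRoot_tau) with Pk_sig; that step is elided here.
    """
    return merkle_root(returned_fids) == signed_root

fids = [b"FID_1", b"FID_2", b"FID_3", b"FID_4"]
root = merkle_root(fids)
assert integrity_verify(fids, root)
assert not integrity_verify(fids[:2], root)    # partial results are detected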
(3) Correctness. Once existence and integrity are verified, DU obtains the ciphertext of the keyword set of the search results. DU only needs to take this ciphertext and his search token as inputs and repeatedly execute the TEST function to verify the search results. Therefore, DUs can verify the correctness of search results in BFR-SE.
6.2. Security and Privacy Analysis of BFR-SE
6.2.1. Functional Comparison. We compared BFR-SE with previous verifiable searchable encryption and blockchain-based searchable encryption schemes in the following aspects: fairness, reliability, privacy protection, whether multi-keyword search is supported, whether the scheme suits multi-user situations, whether fine-grained access control over the search function is supported, and practicability.
From the comparison in Table 2 between verifiable searchable encryption and previous blockchain-based searchable encryption schemes, the following conclusions can be drawn:
(1) The former and earlier related studies did not consider the fairness of searchable encryption. With the emergence of blockchain, blockchain-based schemes all meet the requirement of fairness.
(2) The former supports verification of search results by DUs and thus has a certain degree of reliability, but because there is no sufficient follow-up punishment, that reliability is weak. The recent blockchain-based searchable encryption schemes, however, have not considered reliability.
(3) The former stores the indices and DUs' search records on SESP, which protects DU privacy well on the premise that SESP is credible. The latter stores the indices and search records on the blockchain and uses a deterministic encryption algorithm; since on-chain information is public, even if the keywords are encrypted, an adversary can infer some of DU's private information once enough search records accumulate.
(4) The performance of the former depends on the capabilities of SESP, so it has high practicability. The latter is mostly built on low-performance blockchain platforms such as Bitcoin and Ethereum, and the designs are not fully mature, so problems remain in performance and security; in real enterprise applications, they are not practical.
Table 2: Functional comparison between BFR-SE and other related searchable encryption schemes (BFR-SE vs. Refs. [20], [21], [29], and others).
Compared with these two types of schemes, BFR-SE designs a relatively complete reward and punishment mechanism with blockchain: if SESP is dishonest, he will pay the price. Therefore, our scheme has both fairness and reliability. The indices are still stored on SESP in our scheme, and the blockchain only plays the role of arbiter when there are disputes, which makes BFR-SE more efficient than previous blockchain-based schemes. Our constructed ABSE is not a deterministic encryption algorithm: a random number makes the search token different even for the same keyword set, so BFR-SE has strong privacy protection capabilities. We have extended the work of Ref. [21] to support multi-keyword search. The combination of attribute-based searchable encryption and blockchain lets BFR-SE serve multi-user scenarios in a distributed environment and enables DO to exercise fine-grained access control over his shared data. BFR-SE uses the EOS blockchain, currently a high-performance public chain. Moreover, it considers more practical scenarios, such as the possibility that SESP and DU are dishonest. Therefore, our scheme has better practicality.
6.2.2. Storage Analysis. BFR-SE is a fair and reliable searchable encryption scheme based on the EOS blockchain. Since storage resources on the blockchain are very valuable, and acquiring RAM in the EOS blockchain requires the user to mortgage the system token, it is necessary to analyze the size of the data stored on-chain.
First, we define some notation. Let |G_0| and |G_1| denote the bit lengths of an element in groups G_0 and G_1, respectively. Let |Z_p| be the bit length of an element in the field Z_p, |Sk_sig| the bit length of a signature, |S| the number of all attributes, |U| the number of DUs, |TOK| the bit length of a search token, |COMM| the bit length of a commitment, and |CI_result| the bit length of the result returned by SESP. Let m be the size of the keyword set.
According to the experimental simulation of our scheme, we set |G_0| = |G_1| = 1024 bits, |Z_p| = 128 bits, |Sk_sig| = 576 bits, |Sk_com| = 256 bits, and |Pk_com| = |Pk_sig| = 272 bits. The lengths of Account, SerialNum, blockheight, Deposit, and Coin are all 64 bits. Our Bloom filter implementation refers to the C++ code on GitHub at https://github.com/bbondy/bloom-filter-cpp.git. We set the length of the Bloom filter to 20 KB with 5000 rows of ciphertext indices, so l_BF/n = (20 × 8 × 1024)/5000 = 32.768, the number of hash functions in the Bloom filter is k = (ln 2) · l_BF/n ≈ 22, and the false-positive rate is then 1.45 × 10^−7. In our scheme, three operations interact with the blockchain to store data in the smart contract:
(1) Initialization. DO uploads the public system parameters and his public key for the signature to the smart contract. The storage cost is as follows:
(2) Registration. The on-chain information mainly includes registration-related information uploaded by DU and SESP. The DU registration includes the information submitted by DU and the private key distributed by DO; the SESP registration includes the account and the deposit. The specific storage overhead is as follows:
(3) Search. The on-chain information mainly includes auxiliary information uploaded by DO, search requests initiated by DUs, and search results returned by SESP. The specific storage overhead is as follows:
How the storage overhead of BFR-SE varies with the number of attributes is shown in Figure 4.
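As a quick sanity check of the parameters above, the optimal-k and false-positive formulas from the existence-verification discussion reproduce the reported numbers (a small script, assuming the standard Bloom filter formula):

```python
import math

l_bf = 20 * 8 * 1024      # 20 KB filter = 163,840 bits
n = 5000                  # rows of ciphertext indices
print(l_bf / n)           # 32.768

k = int(math.log(2) * l_bf / n)          # truncated to 22, as in the paper
fp = (1 - math.exp(-k * n / l_bf)) ** k
print(k, fp)              # 22 and roughly 1.45e-07
```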
For simplicity, the figure only shows how the storage overhead varies with the number of attributes when there are 10 DUs and 50 keywords. From the figure, we can see that the storage overhead is mainly incurred in the search phase, while the initialization and registration phases are negligible. As the number of DUs and keywords grows, the storage overhead also increases linearly. However, since we only store the information necessary for verification on-chain, the storage overhead is much lower than in other blockchain-based schemes [24, 26-30] that store all index information on-chain.
Users can obtain RAM in EOS by collateralizing system tokens; the current price is 42 EOS/MB. DO can purchase RAM according to the scale of his system. Unlike Ethereum, where transactions consume ETH as gas, the tokens mortgaged when acquiring RAM in EOS can still be redeemed at the original price. In summary, BFR-SE is feasible and practical.
6.2.3. Performance Analysis.
Before analyzing the performance, we define the computational costs of two primary operations: P for a bilinear pairing and E for a power (exponentiation) operation. We ignore the computational overhead of operations such as hash functions because they are far more efficient than the above two. The computational overhead of the primary operations in our proposed scheme is shown in Table 3.
There are many studies on the analyses mentioned above [38-43], so we will not repeat them here. Like storage resources, on-chain computing resources are also very valuable. If the interaction with the blockchain is too frequent or the computational overhead too large, system performance suffers badly. So, we mainly focus on the on-chain execution time of BFR-SE.
We used 6 nodes to build an EOS private chain in a laboratory environment. The 6 nodes were all MacBook Pro (2017) machines with an Intel(R) Core(TM) i5 CPU clocked at 3.1 GHz and 8 GB of RAM. The version of the EOS blockchain we chose was v2.0.7. The computational overhead of other blockchain-based schemes [24,26,27,30] is all at the level of seconds, clearly worse than ours, so they are not analyzed further. We compared BFR-SE with the scheme in Ref. [25], as shown in Figure 5.
In BFR-SE, most interactions with the blockchain simply upload data to the smart contract, as in initialization and registration; the computational overhead of this part can be ignored. The main computational overhead of BFR-SE occurs when the blockchain arbitrates. As the figure shows, as the number of indices increases, the on-chain computational overhead of BFR-SE remains at a stable level of about 40 ms, while that of the scheme in Ref. [25] keeps growing. This is because, in our scheme, all the time-consuming operations are executed off-chain. The configuration of the EOS block producers in the Mainnet is much higher than that of the MacBooks used in our simulation environment, so when our contracts are deployed on the Mainnet, the performance will be even better. The EOS blockchain produces a block every 0.5 seconds, and a transaction is confirmed soon after execution.
Conclusion
To achieve a fair and reliable searchable encryption scheme, we constructed an attribute-based searchable encryption scheme, ABSE, that supports multi-keyword search and designed a dedicated reward and punishment mechanism using blockchain. In our scheme, DO sends the ciphertext of the indices to SESP and uploads the auxiliary information to the blockchain. SESP must return the correct search results before a pre-agreed block height, and the fee paid by DU is frozen for a period during which DU can initiate an arbitration request to the blockchain if he disagrees with the results. As the cornerstone of trust, the blockchain punishes the dishonest party economically, ensuring the scheme's fairness and reliability [44-53]. Besides, DO can use ABSE to exercise fine-grained access control over the search function. Experiments and analyses show that our scheme is feasible and performs well. However, our scheme still has shortcomings. For example, it uses an index structure whose integrity is guaranteed by signatures, which significantly reduces the flexibility of the scheme, especially when adding or updating the index of shared data. At the same time, because we use an attribute-based encryption algorithm, the revocation and updating of permissions are also directions that need to be studied in the future. We will continue to refine our approach in conjunction with other research [37, 54-57].
Data Availability
The raw/processed data required to reproduce these findings cannot be shared at this time, as the data also form part of an ongoing study.
Figure 5: The computational overhead of BFR-SE and Ref. [25] varies with the number of indices. | 2021-11-27T16:43:20.370Z | 2021-11-24T00:00:00.000 | {
"year": 2021,
"sha1": "f913d1c2160a093ee328def354d57906e04e1b1f",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/wcmc/2021/5340116.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3dd208aa986f901dbd4f099827558e19104e610e",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": []
} |
231635132 | pes2o/s2orc | v3-fos-license | SP-8356, a (1S)-(-)-Verbenone Derivative, Inhibits the Growth and Motility of Liver Cancer Cells by Regulating NF-κB and ERK Signaling
Liver cancer is a common tumor and currently the second leading cause of cancer-related mortality globally. Liver cancer is highly related to inflammation as more than 90% of liver cancer arises in the context of hepatic inflammation, such as hepatitis B virus and hepatitis C virus infection. Despite significant improvements in the therapeutic modalities for liver cancer, patient prognosis is not satisfactory due to the limited efficacy of current drug therapies in anti-metastatic activity. Therefore, developing new effective anti-cancer agents with anti-metastatic activity is important for the treatment of liver cancer. In this study, SP-8356, a verbenone derivative with anti-inflammatory activity, was investigated for its effect on the growth and migration of liver cancer cells. Our findings demonstrated that SP-8356 inhibits the proliferation of liver cancer cells by inducing apoptosis and suppressing the mobility and invasion ability of liver cancer cells. Functional studies revealed that SP-8356 inhibits the mitogen-activated protein kinase and nuclear factor-kappa B signaling pathways, which are related to cell proliferation and metastasis, resulting in the downregulation of metastasis-related genes. Moreover, using an orthotopic liver cancer model, tumor growth was significantly decreased following treatment with SP-8356. Thus, this study suggests that SP-8356 may be a potential agent for the treatment of liver cancer with multimodal regulation.
INTRODUCTION
Liver cancer metastasizes through direct extension of the tumor, hematogenous spread, and/or lymphatic invasion (Poddar et al., 2017). Because of its systemic nature and the resistance of dispersed tumor cells to existing therapeutic agents, metastasis accounts for 90% of cancer mortality (Okusaka et al., 1997; Chaffer and Weinberg, 2011; Valastyan and Weinberg, 2011). Moreover, metastatic liver cancer often leads to the recurrence of liver cancer after surgical resection (Tung-Ping Poon et al., 2000). Therefore, identifying novel and effective systemic agents with anti-metastasis activity is urgent for the treatment of HCC.
In cancer progression, inflammation has been considered an important component of the development of various cancers (Coussens and Werb, 2002). Among them, HCC is highly related to inflammation as more than 90% of HCCs arise in the context of chronic hepatic injury and inflammation (Nakagawa and Maeda, 2012). Nuclear factor-kappa B (NF-κB) is an important transcription factor that functions as a regulator of inflammation. Because inflammation predisposes cancer progression, it seems logical to speculate the link between NF-κB and cancer. NF-κB is also involved in cancer proliferation, apoptosis, metastasis, and angiogenesis (Naugler and Karin, 2008). In HCC, NF-κB is constitutively activated to promote tumor growth, indicating that NF-κB plays a pivotal role in HCC pathogenesis (Wang et al., 2003;Pikarsky et al., 2004;Li et al., 2009). Thus, the inhibition of NF-κB activation is a potential therapeutic target for liver cancer treatment.
In previous studies, essential oils containing (1S)-(-)-verbenone were identified to possess anti-inflammatory activity through the inhibition of NF-κB signaling (Choi et al., 2010;Kuo et al., 2011). Since then, a series of (1S)-(-)-verbenone derivatives has been synthesized by adding functional moieties to improve their cytoprotective effects with stronger anti-inflammatory and anti-oxidant activities (Ju et al., 2013). Given the roles of NF-κB signaling in liver cancer progression, synthesized (1S)-(-)-verbenone derivatives could be a new therapeutic agent for liver cancer.
In the present study, the effect of (1S)-(-)-verbenone derivatives on liver cancer cells was investigated. Among various (1S)-(-)-verbenone derivatives, SP-8356 demonstrated the most significant anti-proliferative effect on liver cancer cells by inducing apoptosis. In addition, SP-8356 inhibited liver cancer cell motility by regulating metastasis-related genes. Functional studies suggested that these anti-cancer activities of SP-8356 are mediated by its inhibitory effect on the mitogen-activated protein kinase (MAPK) and NF-κB pathways.
Reagents, culture media, and antibodies
(1S)-(-)-Verbenone derivatives were synthesized as previously reported (Ju et al., 2013), and the structure of SP-8356 has been described in a previous report. Cell culture media were obtained from WELGENE Inc. (Daegu, Korea). Human recombinant TNF-α was purchased from R&D Systems (Minneapolis, MN, USA), human recombinant epidermal growth factor (EGF) was purchased from Peprotech Inc. (Hamburg, Germany), and protease inhibitor cocktail was purchased from Roche (Mannheim, Germany). Antibodies against PARP, caspase-3, p-ERK1/2, and Akt were purchased from Cell Signaling Technology (Beverly, MA, USA). Antibodies against survivin, ERK1/2, p-Akt, p-Elk-1, NF-κB p65, and actin were purchased from Santa Cruz Biotechnology (Santa Cruz, CA, USA). Primers for gene cloning and materials for expression vector construction were obtained from Cosmogenetech (Seoul, Korea), and DNA sequencing was conducted by the same company. All other reagents were purchased from Sigma-Aldrich (St. Louis, MO, USA) unless otherwise stated.
Cell growth assay
Huh-7, Hep3B, SK-Hep1 (3,000 cells/well), and Hepa1-6 (1,000 cells/well) cells were seeded into 96-well plates and treated with various concentrations of (1S)-(-)-verbenone derivatives for the indicated times in complete culture medium. Cell growth was measured using a Cell Counting Kit-8 (CCK-8; Dojindo Molecular Technologies, Inc., Rockville, MD, USA) following the manufacturer's instructions. Cells were incubated with 10 µL of CCK-8 solution for 2 h, and the absorbance of each well was measured at 450 nm using a microplate reader, SpectraMax iD3 (Molecular Devices, LLC, San Jose, CA, USA).
Lactate dehydrogenase (LDH) assay
Cell cytotoxicity was quantitatively assessed by measuring LDH released from plasma membrane-damaged cells using a cytotoxicity detection kit according to the manufacturer's instructions (Takara Bio Company, Shiga, Japan). Huh-7, Hepa1-6, and Hep3B cells were seeded in 96-well plates in DMEM with 10% FBS and incubated for 24 h. Next, cells were incubated with 200 µL of serum-free DMEM containing SP-8356 for 24 h or 48 h. Cells treated with vehicle (DMSO) were used as a negative control, and some of the vehicle-treated cells were lysed with 1% Triton X-100 buffer and used as a positive (high) control. Microtiter plates were centrifuged at 250×g for 10 min; 100 µL of supernatant was transferred to another 96-well plate, and 100 µL of reaction mixture was added. After 30 min of incubation at room temperature, absorbance was measured at 490 nm using a microplate reader (SpectraMax iD3, Molecular Devices, LLC).
Western blot analysis and immunoprecipitation
Cells were lysed in RIPA buffer [50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1% Triton X-100, 0.5% sodium deoxycholate (w/v), and 0.05% sodium dodecyl sulfate (SDS) (w/v)] containing protease inhibitor. The lysates were centrifuged at 15,000 rpm at 4°C for 15 min. Protein concentrations of the clarified lysates were determined using a Bradford protein assay kit (Bio-Rad, Hercules, CA, USA). Next, 20 µg of cell lysate denatured in SDS sample buffer was separated by polyacrylamide gel electrophoresis. Proteins were transferred to nitrocellulose membranes and probed with the relevant antibodies. The signals were then detected using an ECL assay kit (GE Healthcare, Chicago, IL, USA).
HEK293 cells cultured in DMEM with 10% FBS were transfected with the HA-importin α5 plasmid, treated with tumor necrosis factor (TNF)-α, and subjected to immunoprecipitation with anti-HA antibody-conjugated agarose. The precipitates were subjected to western blotting with anti-p65 antibodies (Santa Cruz Biotechnology).
Luciferase assay
Huh-7 cells were seeded into 24-well plates and then transfected with plasmids containing the serum response element (SRE)-luc and NF-κB-luc reporter genes. Cells cultured in serum-free DMEM media for 18 h were treated with different concentrations of SP-8356 for 30 min and then treated with 10% serum, 100 ng/mL of epidermal growth factor (EGF), 1 µM PMA (phorbol 12-myristate 13-acetate), and 10 ng/mL TNF-α as a stimulant. After 6 h, cells were washed with phosphate-buffered saline (PBS) and solubilized with lysis buffer. The luciferase activity of the cell extracts was determined using the standard luciferase assay system from BioTek Instruments, Inc (Winooski, VT, USA).
Wound healing assay
Huh-7 and Hepa1-6 cells (5×10^5 cells/well) were seeded into 6-well plates. Confluent monolayers were manually scratched with a pipette tip and washed with PBS to remove cell debris. Cells were incubated in DMEM with different concentrations of SP-8356, and the scratch area was photographed at the indicated times. The area between the two cell edges was analyzed using ImageJ software (National Institutes of Health, Bethesda, MD, USA). The percentage of wound closure was calculated as follows: [(area of original wound − area of remaining wound)/(area of original wound)] × 100.
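Since the closure percentage is a simple ratio of the two ImageJ area measurements, it can be computed directly; a minimal helper (the variable names and example values are illustrative):

```python
def wound_closure_percent(area_original: float, area_remaining: float) -> float:
    """[(area of original wound - area of remaining wound) / area of original wound] * 100."""
    return (area_original - area_remaining) / area_original * 100.0

# e.g., a wound that shrinks from 1.00 mm^2 to 0.35 mm^2 is 65% closed
print(wound_closure_percent(1.00, 0.35))  # 65.0
```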
Invasion assay
For the invasion assay, the upper chambers of Transwell inserts (8-µm pore size; Corning, NY, USA) were coated with 20 µL of 1:6 diluted Matrigel (Invitrogen, Carlsbad, CA, USA) and allowed to solidify in an incubator. Next, 2×10^4 Huh-7 cells and 1×10^4 Hepa1-6 cells in serum-free media were placed into the upper chambers of the inserts and then treated with SP-8356. The lower wells were filled with DMEM containing 10% FBS. Cells were incubated at 37°C in a humidified chamber containing 5% CO2 for 18 h. The inserts were washed in PBS, and the cells in the upper chamber that had not invaded were removed with a cotton swab. The membranes were fixed in 4% paraformaldehyde solution and stained with Hemacolor Rapid staining of blood smear (Merck, Darmstadt, Germany); cells that had invaded were counted under a microscope.
Immunocytochemistry
Huh-7 cells were grown on poly-L-lysine-coated glass coverslips in 24-well plates. After 24 h, the cells were incubated in serum-free DMEM for 18 h, treated with 20 µM SP-8356 for 30 min, and then treated with 10 ng/mL of TNF-α or 1 µM PMA for 30 min. Cells were fixed with 4% paraformaldehyde in PBS for 10 min, permeabilized with PBS containing 0.1% Triton X-100 (PBST) for 10 min, and then blocked with 3% bovine serum albumin in PBST for 30 min. Next, the cells were incubated with anti-p65 antibodies in PBST containing 3% bovine serum albumin overnight at 4°C. The coverslips were washed and incubated with fluorescein isothiocyanate (FITC)-conjugated anti-mouse IgG for 1 h. The fluorescent images were viewed using a Leica TCS SP5 laser scanning microscope (Leica, Wetzlar, Germany).
Orthotopic xenograft model and in vivo imaging
All mice were housed in a temperature-controlled (22-23°C) facility with a specific pathogen-free barrier under a 12-h light/ dark photoperiod (lights on at 8:00 am). Mice were allowed standard mouse chow and water ad libitum. All animal experiments and procedures were performed in accordance with the guidelines and regulations of the Institutional Animal Care and Use Committee (IACUC) at Korea University (KOREA-20160153-C4).
Four- to six-week-old male NOD/SCID mice were purchased from KOATECH (Pyeongtek, Korea). Huh-7 cells (2×10^6) were injected into the left liver lobe of NOD/SCID mice. After 1 day, the mice were randomized into two groups of five mice each. The experimental group was treated with SP-8356 (30 mg/kg) daily by intraperitoneal injection until the end of the experiment, and the control group was similarly treated with vehicle (saline). Body weight was measured every other day. After 40 days, the mice were intraperitoneally injected with luciferin and subjected to in vivo live imaging using a NightOWL II LB 983 (Berthold Technologies, Bad Wildbad, Germany). After imaging, the mice were sacrificed and their livers were post-fixed with 4% paraformaldehyde. The nodule number and size in the isolated livers were assessed visually.
Statistical analysis
All statistical analyses were performed using PRISM5 software (GraphPad, La Jolla, CA, USA). Group means are presented as means ± standard deviation (SD), and statistical significance was evaluated using Student's t-tests and/or one-way or two-way analysis of variance (ANOVA) with Bonferroni post hoc tests. A p-value less than 0.05 was considered statistically significant. All experiments were performed in triplicate unless otherwise indicated.
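For readers reproducing these tests outside PRISM, the same analyses are available in standard Python libraries; a minimal sketch with hypothetical group data (not the study's raw values):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
control = rng.normal(1.0, 0.1, 3)   # hypothetical triplicate measurements
treated = rng.normal(0.6, 0.1, 3)

t_stat, p_ttest = stats.ttest_ind(control, treated)          # Student's t-test
f_stat, p_anova = stats.f_oneway(control, treated,
                                 rng.normal(0.4, 0.1, 3))    # one-way ANOVA

# Bonferroni correction for post hoc pairwise comparisons
pairwise_p = [0.012, 0.034, 0.200]                           # hypothetical p-values
reject, p_adjusted, _, _ = multipletests(pairwise_p, alpha=0.05, method="bonferroni")
print(p_ttest, p_anova, reject, p_adjusted)
```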
SP-8356 inhibits the growth of liver cancer cells
Based on the various effects of (1S)-(-)-verbenone, such as its anti-inflammatory, anti-oxidant, and anti-proliferative activities, we designed and synthesized (1S)-(-)-verbenone derivatives to investigate their anti-cancer effects on liver cancer cells. Among these derivatives, the S form of SP-8356 significantly inhibited the growth of Huh-7 cells in a time- and dose-dependent manner, while the R form had no effect (Fig. 1A). Because SP-8356 showed the most significant growth-inhibitory effect, we focused on the S form of SP-8356 in subsequent experiments. Its growth-inhibitory activity was further confirmed in several liver cancer cell lines, Hepa1-6, Hep3B, and SK-Hep1, with different efficiencies (Fig. 1B).
SP-8356 shows mild cytotoxicity by inducing apoptosis in liver cancer cells
Because the growth-inhibitory effect of SP-8356 may be related to cytotoxic activity toward liver cancer cells, LDH assays were performed to measure cell death caused by SP-8356. LDH activities in the cells increased slightly with SP-8356 in a dose-dependent manner. Maximum LDH activities were observed at 48 h, reaching 21.83%, 31.56%, and 8.77% with 20 µM SP-8356 in Huh-7, Hepa1-6, and Hep3B cells, respectively (Fig. 2A). Notably, the LDH activities in Huh-7 and Hep3B cells did not change over time, being similar at 24 and 48 h. These results indicate mild cytotoxicity of SP-8356 in these liver cancer cells.
To explore the mechanisms underlying the cytotoxic effects of SP-8356, protein extracts of affected cells were subjected to western blotting with antibodies against death-or survivalrelated proteins. Caspase activation was examined because it is a hallmark of apoptosis (Ola et al., 2011). Cleaved caspase-3 was detected in Hepa1-6 cells treated with 20 µM SP-8356. Although cleaved caspase-3 was not detected in Huh-7 and Hep3B cells, pro-caspase-3 was decreased, indicating that pro-caspase-3 was activated in these cells. Additionally, the cleavage of PARP, another apoptotic marker, was notably detected in all cells, even with 15-µM SP-8356 treatment in Huh-7 cells. Expression of survivin, a member of the apoptosis-inhibitory family, was decreased by SP-8356 in all cell lines (Fig. 2B). Taken together, these results suggest that the anti-proliferative effect of SP-8356 is likely correlated with the cytotoxic activities of SP-8356.
SP-8356 suppresses the migration and invasion of liver cancer cells
Because metastasis accounts for most liver cancer-related deaths, the effect of SP-8356 on cell motility was determined using wound healing migration assays (Okusaka et al., 1997; Chaffer and Weinberg, 2011; Uchino et al., 2011; Valastyan and Weinberg, 2011). SP-8356 significantly reduced cell migration at 15 µM and 20 µM in Huh-7 cells and at 20 µM in Hepa1-6 cells compared with control-treated cells (Fig. 3A).
The effect of SP-8356 on cell invasiveness was investigated using a Matrigel invasion assay. Both Huh-7 and Hepa1-6 cell lines penetrated the matrix toward the serum stimulus. The number of invading cells decreased in a dose-dependent manner in response to SP-8356 treatment; in particular, no invading Huh-7 cells were detected following 20-µM SP-8356 treatment (Fig. 3B). Thus, SP-8356 likely exerts a suppressive effect on the migration and invasion of liver cancer cells. Hep3B cells were excluded from the cell motility assays because they are not invasive.
SP-8356 inhibits the MAPK pathway via blocking the nuclear translocation of p-ERK1/2
Following analysis of the anti-cancer activities of SP-8356, its effects on signaling molecules were examined. Dysregulation of the MAPK and PI3K/Akt signaling pathways commonly occurs in liver cancer, and sustained activation of these pathways facilitates liver cancer proliferation and survival (Liu et al., 2009; Min et al., 2011). SRE-dependent luciferase activity induced by serum, EGF, and PMA (a protein kinase C activator) was significantly reduced by SP-8356 treatment in Huh-7 cells (Fig. 4A).
To understand the molecular mechanism by which SP-8356 suppresses SRE activity in response to the tested stimulants, the phosphorylation of ERK1/2 and Akt was investigated. Although the levels of phosphorylated ERK1/2 and Akt remained unchanged in the presence of SP-8356, it inhibited the phosphorylation of Elk-1, a downstream target of p-ERK1/2 that forms a ternary complex at the SRE (Fig. 4B). Because the nuclear translocation of ERK1/2 is needed to phosphorylate nuclear Elk-1 (Flores and Seger, 2013), it was hypothesized that SP-8356 may inhibit the nuclear translocation of phosphorylated ERK1/2. To test this hypothesis, we compared the levels of phosphorylated ERK1/2 in the cytoplasm and nucleus. In the cytoplasm, phosphorylated ERK1/2 was higher in cells treated with SP-8356 and EGF than in cells treated with EGF only (Fig. 4C). Phosphorylated ERK1/2 was notably increased in the nucleus of EGF-treated cells; however, nuclear phosphorylated ERK1/2 remained unchanged in the presence of SP-8356, suggesting an inhibitory effect of SP-8356 on p-ERK1/2 nuclear translocation. Taken together, these data suggest that SP-8356 regulates the MAPK pathway by inhibiting the nuclear translocation of p-ERK1/2.
SP-8356 inhibits NF-κB activation by blocking the nuclear translocation of p65
SP-8356 reduced the basal transcriptional activity of NF-κB without stimulation. In unstimulated cells, NF-κB is sequestered in the cytoplasm bound to inhibitor of kappa B (IκB). After exposure to stimuli such as TNF-α, PMA, and lipopolysaccharide, IκB is phosphorylated by IκB kinase (IKK), followed by ubiquitination and degradation, which lead to the nuclear translocation of the RelA/p65 subunit of NF-κB (Hoesel and Schmid, 2013). To investigate whether SP-8356 inhibits the nuclear translocation of p65, immunocytochemistry was performed. Before TNF-α and PMA treatment, p65 was mainly located in the cytoplasm; after TNF-α and PMA treatment, p65 was translocated to the nucleus. However, 20-µM SP-8356 pretreatment suppressed the nuclear translocation of p65 induced by TNF-α and PMA (Fig. 5C). Quantification of nuclear translocation showed that the percentage of nuclear p65 induced by TNF-α or PMA was significantly decreased by SP-8356 (Fig. 5D). These results indicate that SP-8356 inhibits NF-κB activation by blocking the nuclear translocation of p65.
Because the nuclear translocation of p65 is mediated by importins, blockade of the interaction between free p65 and importin may be the inhibitory mechanism of SP-8356. This idea was confirmed by immunoprecipitation and subsequent western blotting of cells expressing HA-importin α5: p65 was not detected in the anti-HA immunoprecipitates from SP-8356-treated cells (Fig. 5E).
SP-8356 regulates the expression of metastasis-related genes
The invasive and metastatic properties of cancer cells are acquired by gene expression related to extracellular matrix degradation and new blood vessel formation around the tumor (Valastyan and Weinberg, 2011). Thus, the effect of SP-8356 on the expression of genes influencing cell adhesion, local invasion, and angiogenesis, most of which are targets of NF-κB, was examined. The mRNA levels of uPA, VEGF-A, and VEGF-C were decreased in Huh-7 cells treated with SP-8356 in a dose-dependent manner, whereas that of PAI was significantly increased compared with that in control cells (Fig. 6A). The mRNA levels of MMP-7 and MMP-9 were also reduced by SP-8356 treatment in cells stimulated with TNF-α (Fig. 6B). These results suggest that SP-8356 inhibits the migration and invasion of liver cancer cells by regulating the metastasis-related genes induced by NF-κB.
Anti-proliferative effect of SP-8356 in a xenograft model
To investigate the in vivo correlates of the cellular effects of SP-8356, we established a xenograft model by implanting luciferase-expressing Huh-7 cells into the livers of SCID mice and then treated the mice with SP-8356 or saline every day. Forty days later, the mice were injected intraperitoneally with luciferin and subjected to in vivo live imaging. Average body weight was similar in both groups (Fig. 7A). The luminescence signals in saline-treated mice were much stronger than those in the SP-8356-treated group (Fig. 7B). After perfusion with PBS and fixation with 4% paraformaldehyde, nodule numbers and sizes in the isolated livers were higher in the control group than in the SP-8356-treated group, implying that SP-8356 inhibited Huh-7 growth in the liver (Fig. 7C). Unfortunately, we did not observe micrometastasis of the cells in the liver or metastasis into other organs, such as the lung, brain, and other peritoneal regions, in histological analysis. The cause may be the low motility of Huh-7 cells in the in vivo model.
DISCUSSION
Most HCC cases are closely associated with NF-κB-related chronic inflammation, and a significant portion of liver cancer patients eventually develop extrahepatic metastasis (Uka et al., 2007; Singh et al., 2018). Treatment of unresectable HCC largely relies on systemic therapeutics, such as multi-kinase inhibitors; however, current systemic treatment options do not produce satisfactory results (Wang et al., 2003; Pikarsky et al., 2004; Li et al., 2009; Nakagawa and Maeda, 2012; Zhu et al., 2017). Thus, a new anti-cancer agent with effective anti-inflammatory and anti-metastasis activities is needed. In this study, some of the synthesized (1S)-(-)-verbenone derivatives with anti-inflammatory activity (Ju et al., 2013) were screened for growth inhibition of Huh-7 liver cancer cells. One of the derivatives, SP-8356, suppressed cancer cell proliferation most significantly and was therefore selected as a candidate for the treatment of liver cancer. The inhibitory effect of SP-8356 on liver cancer growth was mediated by its mild cytotoxicity through the induction of apoptosis. ERK1/2 signaling plays a central role in cancer proliferation because many mitogens and growth factors transmit their signals through ERK1/2 (Chambard et al., 2007; Min et al., 2011; Plotnikov et al., 2011). One of the critical steps in the transmission of ERK1/2 signals is the nuclear translocation of ERK1/2 to induce gene expression for cell growth (Flores and Seger, 2013; Plotnikov et al., 2015; Ranjan et al., 2018). SP-8356 inhibits the nuclear translocation of ERK1/2 and thereby suppresses SRE activity, a downstream target of ERK, which seems to contribute to SP-8356-mediated growth inhibition of liver cancer cells. Additionally, SP-8356 suppressed the transcriptional activity of NF-κB, a key molecular regulator of genes implicated in cell proliferation, survival, and motility in liver cancer cells (Wu et al., 2009). The nuclear translocation of p65 is necessary for the transmission of the NF-κB signal (Hoesel and Schmid, 2013), and the suppression of NF-κB activation through blockade of p65 nuclear translocation by SP-8356 was also confirmed in liver cancer cells. Therefore, the inhibitory action of SP-8356 on liver cancer cell proliferation may be related to ERK and NF-κB regulation via blockade of the nuclear translocation process.
Because NF-κB modulates the expression of genes implicated in the epithelial-mesenchymal transition and metastasis (Huber et al., 2004; Min et al., 2008; Naugler and Karin, 2008), the suppression of NF-κB by SP-8356 is strongly associated with inhibiting cancer progression. Cancer metastasis starts with the entry of cancer cells from a well-confined primary tumor into the surrounding tumor-associated stroma and then into the adjacent normal tissue parenchyma (Valastyan and Weinberg, 2011). To invade the surrounding tissue, cancer cells must degrade the extracellular matrix (ECM). Matrix metalloproteases (MMPs) are important enzymes that degrade the ECM and promote cancer cell mobility; in particular, MMP7 and MMP9 are overexpressed in liver cancer and play a central role in liver cancer metastasis (Arii et al., 1996; Yeh et al., 2012; Chen et al., 2013; Lin et al., 2017). uPA is a serine protease that converts plasminogen into plasmin, the active proteinase that cleaves ECM proteins, and uPA expression is upregulated in liver cancer (Chan et al., 2004).
Cancer cells in the stroma stimulate the formation of new blood vessels within their local microenvironment for wide dissemination into distant organs (Valastyan and Weinberg, 2011). Vascular endothelial growth factor (VEGF) plays a central role in promoting angiogenesis (Saharinen et al., 2011), and VEGF expression positively correlates with the metastasis and recurrence of HCC (Yao et al., 2005). The mRNA expression of these genes was significantly suppressed by SP-8356 in a dose-dependent manner. Interestingly, the mRNA level of PAI was dramatically increased by SP-8356. PAI binds to the uPA-urokinase receptor complex and induces its endocytosis, followed by complex degradation. Through this process, the functions of the uPA-urokinase receptor complex, such as the activation of latent growth factors and pro-MMPs, are suppressed (Ulisse et al., 2009). Therefore, the regulation of genes implicated in cancer metastasis by SP-8356 is a possible mechanism to prevent liver cancer metastasis. Unfortunately, we could not assess the anti-metastatic activity of SP-8356 in the orthotopic xenograft model with Huh-7 cells; however, its anti-proliferative activity was confirmed by tumor mass. Because Huh-7 is a relatively low-metastatic cell line (Tong et al., 2017), the cells did not spread, even within the intrahepatic area. An animal model using highly metastatic liver cancer cells may help to elucidate the anti-metastatic activity of SP-8356, as demonstrated in breast cancer cells (Mander et al., 2019).
In summary, this study demonstrated that SP-8356 exerts an inhibitory effect on liver cancer cell growth and motility by regulating apoptosis- and metastasis-associated gene expression. The mechanisms of SP-8356 related to cell growth and migration may result from its regulation of signaling pathways involving NF-κB and MAPK by inhibiting their nuclear translocation. In conclusion, SP-8356 may inhibit liver cancer progression by modulating multiple target molecules in cancer cell activation mechanisms.
Figure 7: After surgical implantation of Huh-7 cells into the livers of SCID mice, 30 mg/kg SP-8356 or saline was administered intraperitoneally every day; on day 40 after injection, anesthetized mice were injected intraperitoneally with luciferin and subjected to in vivo imaging. The signal intensity was used as a quantitative indicator. (C) Livers isolated from sacrificed mice after perfusion with PBS and 4% paraformaldehyde solution. | 2021-01-19T06:16:05.197Z | 2021-01-18T00:00:00.000 | {
"year": 2021,
"sha1": "4730c619c1ca0c187c05c63bd770918d202c6947",
"oa_license": "CCBYNC",
"oa_url": "https://www.biomolther.org/journal/download_pdf.php?doi=10.4062/biomolther.2020.200",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "82b50a43fe7bed687e9926a6477cce414a32f57e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259040621 | pes2o/s2orc | v3-fos-license | How Social Norms Influence Purchasing Intention of Domestic Products: The Mediating Effects of Consumer Ethnocentrism and Domestic Product Judgments
Buying domestic products has become increasingly important in many countries. As a form of social influence, social norms affect people’s domestic purchasing intentions and behavior. The current study aims to examine the mechanisms by which social norms influence domestic purchasing intentions through the lens of consumer ethnocentrism and domestic product judgments. The data were collected through an online survey in China, and a total of 346 valid responses were obtained. The results indicate that social norms influence domestic purchasing intention through four paths, namely, a direct path, a motivational path, a cognitive path, and a motivational–cognitive path. Consumer ethnocentrism and domestic product judgments, serving as the motivational and cognitive factors, respectively, play mediating and serial mediating roles in the relationship between social norms and domestic purchasing intention. In addition, consumer ethnocentrism has two dimensions, namely, pro-domestic and anti-foreign consumer ethnocentrism, and only the former plays a significant role in the model. The current study makes theoretical contributions to research on domestic purchasing intention and offers practical implications for interventions in domestic purchasing behavior. Future studies are encouraged to conduct experiments, distinguish between different types of social norms, measure purchasing behavior, and verify the relationships in other countries.
Introduction
With the rise of anti-globalization, regional protectionism, and nationalist discourse worldwide, there has been a growing trend of defending domestic products and companies. For instance, in 2017, former President Donald Trump signed an executive order called "Buy American, Hire American", aimed at supporting domestic products and companies [1]. Similarly, in 2021, the incumbent president Biden signed an executive order titled "Ensuring the Future is Made in All of America by All of America's Workers" [2]. Many other countries, including Australia, Indonesia, Vietnam, South Africa, and China, have also launched similar campaigns to promote domestic products and companies [3].
Given the great importance of promoting domestic products, many studies have investigated the influencing factors of domestic purchasing intention and behavior, including product characteristics [4], consumer demographics [5], consumer characteristics [6], and cultural factors [7]. However, there is still a research gap in understanding the role that social influence plays in this.
Social norms, which refer to the rules and standards that are shared by members of a group, are the most typical form of social influence [8]. In this research, social norms refer to the perceptions, attitudes, and behaviors about domestic purchasing that are approved of and expected by the majority of people in society. Previous studies have confirmed that social norms have an impact on domestic purchasing intention and behavior [9-11]. However, no study has investigated the mechanism of this effect. The current study aims to examine the mechanisms of social norms' effect on domestic purchasing intention. Based on previous research showing that social norms affect people's behavior in four ways, we propose four paths through which social norms influence domestic purchasing intention, i.e., the direct path, the motivational path, the cognitive path, and the motivational-cognitive path. Consumer ethnocentrism and domestic product judgments represent the motivational factor and cognitive factor, respectively.
This research employed a questionnaire survey to measure the variables and used path analysis and mediation tests to examine the data. The advantage of this method is that it not only provides information about the direct relationships between variables but also enables the examination of indirect effects. Additionally, simultaneously including the different paths in the model allows the variables to be mutually controlled. In the current study, the direct effect of social norms on domestic purchasing intention and the mediating and serial mediating effects of consumer ethnocentrism and domestic product judgments on the relationship between social norms and domestic purchasing intention are examined.
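As a rough illustration of what such a mediation test looks like, the serial indirect effect (social norms → consumer ethnocentrism → product judgments → purchasing intention) can be estimated with three regressions and a percentile bootstrap. This is a generic sketch with hypothetical arrays, not the study's actual analysis code:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def serial_indirect(x, m1, m2, y):
    """a * d * b from the three regressions of a serial mediation model."""
    a = sm.OLS(m1, sm.add_constant(x)).fit().params[-1]                          # x -> m1
    d = sm.OLS(m2, sm.add_constant(np.column_stack([x, m1]))).fit().params[-1]   # m1 -> m2 | x
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m1, m2]))).fit().params[-1]  # m2 -> y | x, m1
    return a * d * b

def bootstrap_ci(x, m1, m2, y, n_boot=5000, alpha=0.05):
    n = len(x)
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        estimates.append(serial_indirect(x[idx], m1[idx], m2[idx], y[idx]))
    lo, hi = np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi  # a CI excluding 0 indicates a significant serial indirect effect
```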
The current study is the first empirical investigation to explore the mechanisms of social norms' effect on domestic purchasing intention. Moreover, proposing the four paths through which social norms influence domestic purchasing intention and examining them simultaneously in the same model is a novelty. This research fills a gap in knowledge on how domestic consumption is affected by social influence and can help policy makers and marketers develop targeted strategies and interventions to promote domestic consumption.
This research takes China as the research object, because China has a large consumer market which plays an important role in the world economy; people in China accept the idea that domestic purchasing is the socially desirable way of consumption [12]; and social norms about purchasing domestic products have an impact on people's consumption behavior [13]. It is worth mentioning that although the specific objectives of the campaigns promoting domestic products may vary across countries, encompassing economic, social, and environmental dimensions, they can all play a role in domestic purchasing intention and behavior through social norms.
The Four Paths through Which Social Norms Affect Domestic Purchasing Intention
Social norms are important to the normal operation of society and the effective conduct of social cooperation. In the evolution and development of human society, social norms as an informal system without the force of laws constrain a wide variety of anti-social behavior, such as discrimination [14] and corruption [15,16], and promote various pro-social behavior, such as fair behavior [17] and altruistic behavior [18].
Based on previous studies, social norms affect people's behavior in four ways. First, by direct influence: people unconsciously conform to the majority under social influence [19]. Second, by changing people's motivation: Social norms convey the value orientation of most people in society. Individuals tend to internalize external social norms through social learning to guide their behavior [20]. Third, by changing people's cognition: To pursue the correct decision making, individuals consciously or unconsciously take other people's behaviors or opinions as reference [21]. If most people in society behave in a certain way, people will think that the behavior is more reasonable, and then produce the corresponding behavior. Fourth, by changing people's motivations and cognition successively: If urges, drives, and wants are activated, the cognitive process can be biased to meet the need. The internalized social norms will influence people's cognitions, which in turn influence behavior [22]. Domestic purchasing intention can be affected by social norms through the four paths.
The Direct Path: The Direct Effect of Social Norms on Domestic Purchasing Intention
According to the theory of planned behavior, behavioral intention is the most proximal determinant of people's behavior, and it is shaped by attitude, social norms (termed subjective norms in the theory), and perceived behavioral control [19]. Among the three constructs, social norms explain the social influence on human behavior.
Previous studies have confirmed the effect of social norms on the purchasing intention of a variety of products, including utilitarian products [23], luxury products [24], fair trade products [25], organic products [26], and so on. Specific to domestic products, Granzin and Painter find that social norms significantly predict consumers' domestic purchasing behavior in Portugal and the United States (the US) [9]. Maduku and Phadziri find that social norms are significantly correlated with consumers' domestic purchasing bias in South Africa [11]. Jia et al. find that social norms are positively related to consumers' willingness to buy domestic products in China [9].
The Motivational Path: The Mediating Effect of Consumer Ethnocentrism on the Relationship between Social Norms and Domestic Purchasing Intention
Consumer ethnocentrism is an important concept to explain cross-national product choice. It refers to the belief held by consumers about the appropriateness and morality of buying foreign products [27]. People with high consumer ethnocentrism believe that buying foreign products damages the domestic economy, and even leads to the unemployment of compatriots, which is inappropriate and immoral [3]. Social norms and consumer ethnocentrism are both normative beliefs about the appropriateness and morality of buying foreign or domestic products. However, consumer ethnocentrism refers to personal norms [27], which are different from social norms. Social norms reflect external rules, while personal norms reflect internal standards [28].
It has been argued that personal norms are internalized social norms [29]. According to social cognition theory, people acquire values of what is right and what is wrong through social learning [30]. Specifically, social norms convey a signal that purchasing a certain type of product is desirable, which will form people's personal norms about the correctness of purchasing this type of product. Abundant studies have shown that social norms about purchasing a certain type of product shape personal norms [25,26,31,32]. Research on domestic purchasing also shows that there is a significant positive relationship between social norms and personal norms (consumer ethnocentrism; [9,11]).
According to norm activation theory, personal norms are the driving force of altruistic behavior [33]. Domestic purchasing can be seen as altruistic, especially in less-developed countries, where foreign products are often of higher quality and people need to sacrifice their interests to favor domestic ones [34]. Meta-analysis research shows that consumer ethnocentrism is positively correlated with domestic purchasing intention [35]. This relationship exists in many counties, such as the US [36], Spain [37], India [6], and China [38][39][40].
The Cognitive Path: The Mediating Effect of Domestic Product Judgments on the Relationship between Social Norms and Domestic Purchasing Intention
Social norms can alter people's cognition. Domestic product judgments, which are also termed quality perception, quality judgment, and general beliefs toward domestic products, refer to evaluations of domestic products' quality, price, reliability, value for the money, etc. [41].
Attribution theory suggests that people make causal explanations for what happens to predict and control the environment [42]. Social norms send a message about the choices most people currently make, i.e., most people in society prefer domestic products. Previous research shows that social norms are negatively associated with the judgment of products from foreign countries [43]. However, to our knowledge, no research has examined the effect of social norms on domestic product judgments. Given the literature above, we assume that higher social norms may lead to more positive evaluations of domestic products because people might attribute the popularity of domestic products to their higher quality.
According to the hypothesis of the economic man, people are rational and selfinterested, and they act in a way that maximizes utility in economic activities [44]. Product judgments play important roles in consumer choice, i.e., if people give better evaluations on the products, they are more inclined to buy them [45]. Zebal and Jackson find that one of the incentives for Bangladeshi consumers to buy local clothing brands is positive product judgment [4]. Rahnama finds that Iranian consumers are willing to buy domestic rice because of its good quality and price [46]. More direct evidence shows that domestic product judgments are positively correlated with willingness to buy domestic products [41,47].
The Motivational-Cognitive Path: The Serial Mediating Effect of Consumer Ethnocentrism and Domestic Product Judgments on the Relationship between Social Norms and Domestic Purchasing Intention
Social identity theory claims that people are motivated to identify themselves as a member of groups and develop an in-group preference to maintain a positive selfidentity [48]. When people view a group as their in-group, they will not only be favorably biased toward the in-group members, but also the products of the in-group [22]. Shimp and Sharma find that people with high consumer ethnocentrism have a halo effect on domestic products; that is, compared with people with low consumer ethnocentrism, they have more positive evaluations of domestic products [27].
Many studies confirm Shimp and Sharma's conclusion [27]. For example, Brodowsky finds that consumers with higher consumer ethnocentrism in the US have more positive evaluations of cars designed or manufactured and assembled in the US [49]. Orth and Firbasova find that Czech consumers with higher consumer ethnocentrism rate domestic yogurt higher [50]. The positive relationship between consumer ethnocentrism and domestic product judgments is also found in Poland [51], China [52], Austria and Slovenia [47], the United Kingdom [53], and Slovakia [54].
The Two Dimensions of Consumer Ethnocentrism
Since Shimp and Sharma proposed the concept of consumer ethnocentrism and its measurement (the CETSCALE) [27], it has been examined in different countries. Although most of them agreed that consumer ethnocentrism is a single-dimensional construct, some studies confirmed a two-dimensional construct. For example, Akbarov uses an Azerbaijani sample and finds that consumer ethnocentrism has two dimensions, i.e., "hard consumer ethnocentrism", which contains a strong hostile attitude toward foreign products; and "soft consumer ethnocentrism", which does not prompt exclusions of foreign products but simply emphasizes the preference for domestic products [5]. This is consistent with studies conducted in Greece [55] and Malaysia [56]. In addition, studies on Chinese consumers obtain similar results. For instance, Wei et al. defines the two dimensions as "pro-China ethnocentrism" and "pro-foreign ethnocentrism" [57], Hsu and Nien define the two dimensions as "conservative patriotism" and "defensive patriotism" [58], and Bi et al. just define the two dimensions as "CE1" and "CE2" [59]. According to the connotation of these dimensions, we name the two dimensions "pro-domestic consumer ethnocentrism" and "anti-foreign consumer ethnocentrism".
Previous studies show that the two dimensions have distinct effects on purchasing intention and product judgments. For example, Hsu and Nien demonstrate that conservative patriotism (pro-domestic consumer ethnocentrism) has a great impact on domestic purchasing intention, while defensive patriotism (anti-foreign consumer ethnocentrism) does not among Chinese consumers [58]. Teo et al. find that the path from consumer ethnocentrism to perception towards domestic brands in Malaysia is significant for soft (pro-domestic) consumer ethnocentrism but not significant for hard (anti-foreign) consumer ethnocentrism [56].
The Hypotheses of the Research
The current study aims to examine the mechanisms of social norms' effect on domestic purchasing intention through the four paths. Given the literature above, we propose the following hypotheses.
H1: Social norms influence domestic purchasing intention through the direct path. Specifically, social norms have a direct effect on domestic purchasing intention.
H2: Social norms influence domestic purchasing intention through the motivational path. Specifically, pro-domestic consumer ethnocentrism mediates the relationship between social norms and domestic purchasing intention, while anti-foreign consumer ethnocentrism does not.
H3: Social norms influence domestic purchasing intention through the cognitive path. Specifically, domestic product judgments mediate the relationship between social norms and domestic purchasing intention.
H4: Social norms influence domestic purchasing intention through the motivational-cognitive path. Specifically, pro-domestic consumer ethnocentrism and domestic product judgments mediate the relationship between social norms and domestic purchasing intention sequentially, while anti-foreign consumer ethnocentrism and domestic product judgments do not.
See Figure 1 for the research hypotheses.
Sample and Data Collection
The data were from the same sample as Jia et al.'s study 2 [9], which was collected in 2022 through an online survey by sending questionnaire links on WeChat. A total of 512 Chinese consumers over 18 years old finished the questionnaire, and 346 valid responses were obtained. Among them, 54% were female. Respondents ranged in age from 18 to 79, with an average of 35 years old. See Table 1 for demographic information on the respondents. During data collection, all participants were informed about the purpose of the study and that the data would be used only for scientific research; their participation was completely voluntary, and they could withdraw at any time.
Instruments
The questionnaire includes three parts. The first is the single-item domestic purchasing intention measure, adapted from Tong and Li's research [52]. The second comprises the measures of the influencing factors: a 3-item social norms scale, the same as in Jia et al.'s research [9]; the 17-item consumer ethnocentrism scale from Shimp and Sharma's CETSCALE [27]; and a 3-item domestic product judgments scale adapted from Kervyn et al.'s research [60]. The third collects demographic information, including gender, age, education level, and average monthly income. All items except the demographics were measured on a seven-point Likert scale (1 = totally disagree; 7 = totally agree). See Table 2 for the items of the constructs.
Common Method Bias
We adopted confirmatory factor analysis (CFA) to test for common method bias [61]. We found that the fit of the three-factor model was significantly better than that of the single-factor model (Δχ² = 718.23, Δdf = 3, p < 0.01). Additionally, the fit indices of the three-factor model did not differ meaningfully from those of the measurement model with an unmeasured latent variable (ΔCFI = 0.07 < 0.10, ΔTLI = 0.07 < 0.10, ΔRMSEA = 0.02 < 0.05). Therefore, common method bias was not a serious problem in the current research. Table 3 shows the descriptive statistics and correlations of the variables.
Measurement Model
To test the dimensionality of the CETSCALE, we adopted an exploratory factor analysis (EFA) with SPSS 27.0. Kaiser-Meyer-Olkin (KMO) and Bartlett tests were conducted to examine the suitability of EFA for the data. The results showed that KMO = 0.96 > 0.50, and Bartlett's test of sphericity was significant (p < 0.01), which indicated that EFA was suitable for the data. Then, a principal components analysis (with varimax rotation) was conducted to extract the factors. The results suggested a two-factor solution, which explained 67.68% of the variance. Item 10 was deleted because its loadings on both factors were greater than 0.50. The first dimension included eight items (pro-domestic consumer ethnocentrism: CE1, CE2, CE3, CE4, CE7, CE8, CE9, and CE13), and the second dimension included eight items (anti-foreign consumer ethnocentrism: CE5, CE6, CE11, CE12, CE14, CE15, CE16, and CE17).
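As an illustration of this dimensionality check, the sketch below reproduces the KMO/Bartlett tests and the varimax-rotated principal components extraction in Python; the `factor_analyzer` package, the file name, and the column layout are assumptions for illustration, not the original workflow (which used SPSS 27.0).

```python
# A minimal sketch of the EFA dimensionality check described above;
# "cetscale_items.csv" (one column per CETSCALE item) is hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

items = pd.read_csv("cetscale_items.csv")

# Suitability tests: KMO should exceed 0.50 and Bartlett's test should be significant.
chi2, p = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"KMO = {kmo_total:.2f}, Bartlett chi2 = {chi2:.1f} (p = {p:.3g})")

# Principal components extraction with varimax rotation, two factors retained.
fa = FactorAnalyzer(n_factors=2, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["pro_domestic", "anti_foreign"])

# Flag cross-loading items (loading > 0.50 on both factors), as was done for item 10.
cross = loadings[(loadings.abs() > 0.50).all(axis=1)]
print(loadings.round(2), "\nCross-loading items:\n", cross)
```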
To test the reliability and validity of these constructs, we conducted a confirmatory factor analysis with Mplus 7.4. The results showed that the model had a good fit: χ² = 638.01, df = 203, χ²/df = 3.14, CFI = 0.92, TLI = 0.91, RMSEA = 0.08, SRMR = 0.05. The factor loadings of the items were higher than the recommended threshold of 0.60 [62], except for CE3, which had a factor loading of 0.49 and was deleted from the model. See Table 2 for details. The values of Cronbach's α and composite reliability were above 0.70, which indicated good reliability of the scales [63]. The average variance extracted (AVE) values were above 0.50 [64], which indicated adequate convergent validity. The square root of the AVE score of each construct was greater than its correlation coefficients with the other constructs (except that the square root of the AVE score of pro-domestic consumer ethnocentrism was slightly lower than the correlation between pro-domestic and anti-foreign consumer ethnocentrism), and the heterotrait-monotrait (HTMT) ratios were below the threshold value of 0.90, which indicated acceptable discriminant validity [64,65]. See Tables 3 and 4 for details. Overall, the reliability and validity of the constructs were satisfactory and suitable for examining the structural model.
Mediation Analysis
To test the mediating effects, we performed a bootstrapping procedure with 5000 samples and a 95% confidence interval (CI) with Mplus 7.4, and the results are presented in Table 6. If the CI contains 0, the effect is not significant; if the CI does not contain 0, the effect is significant. We found that the direct effect of social norms on domestic purchasing intention was significant (effect = 0.40, 95% CI = 0.22 to 0.56). The indirect effect of pro-domestic consumer ethnocentrism (effect = 0.20, 95% CI = 0.05 to 0.34), the indirect effect of domestic product judgments (effect = 0.07, 95% CI = 0.03 to 0.15), and the serial indirect effect of pro-domestic consumer ethnocentrism and domestic product judgments (effect = 0.04, 95% CI = 0.01 to 0.10) on the relationship between social norms and domestic purchasing intention were significant. The indirect effect of anti-foreign consumer ethnocentrism (effect = −0.03, 95% CI = −0.11 to 0.03) and the serial indirect effect of anti-foreign consumer ethnocentrism and domestic product judgments (effect = 0.01, 95% CI = 0.00 to 0.03) were not significant. In sum, H1, H2, H3, and H4 were all supported.
Note: CE = consumer ethnocentrism; LLCI = bootstrapping lower-level confidence interval; ULCI = bootstrapping upper-level confidence interval; "not significant" indicates that the corresponding CI contains 0.
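For readers who want to reproduce the logic of the bootstrap test outside Mplus, the following is a minimal sketch of a percentile bootstrap for one serial indirect effect; the data file and column names (`norms`, `ce_pro`, `judge`, `intent`) are hypothetical, and the simple OLS-based estimator is a simplification of the full structural model.

```python
# Percentile-bootstrap sketch for the serial indirect effect
# social norms -> pro-domestic CE -> product judgments -> intention.
import numpy as np
import pandas as pd

def ols_slopes(X, y):
    # slope coefficients from an OLS fit with intercept
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

def serial_indirect(d):
    a = ols_slopes(d[["norms"]].values, d["ce_pro"].values)[0]           # norms -> CE
    b = ols_slopes(d[["norms", "ce_pro"]].values, d["judge"].values)[1]  # CE -> judgments
    c = ols_slopes(d[["norms", "ce_pro", "judge"]].values, d["intent"].values)[2]
    return a * b * c

df = pd.read_csv("survey.csv")          # hypothetical data file
rng = np.random.default_rng(0)
boot = np.array([serial_indirect(df.sample(len(df), replace=True, random_state=rng))
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"serial indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")   # significant if 0 excluded
```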
Discussion
The current study reveals that social norms affect domestic purchasing intention through four paths, i.e., the direct path, the motivational path, the cognitive path, and the motivational-cognitive path. In addition, this research shows that consumer ethnocentrism has two dimensions, i.e., pro-domestic and anti-foreign consumer ethnocentrism, and that they function differently in the model.
Specifically, this research shows that social norms are positively related to domestic purchasing intention, and their direct effect on domestic purchasing intention is significant, which is consistent with H1 and previous studies [9][10][11]. The results support the theory of planned behavior, showing that social norms are important predictors of people's behavioral intentions [19].
Social norms are positively related to pro-domestic consumer ethnocentrism, and pro-domestic consumer ethnocentrism is positively related to domestic purchasing intention. The mediating effect of pro-domestic consumer ethnocentrism is significant, while that of anti-foreign consumer ethnocentrism is not, which is consistent with H2 and previous studies [38][39][40]. These results support social cognition theory and norm activation theory, i.e., people internalize social norms as their personal norms (pro-domestic consumer ethnocentrism) through social learning, and personal norms are the direct predictor of people's behavioral intention [30,33]. The results further validate norm activation theory, which indicates that social norms can impact behavioral intention indirectly through personal norms [66]. The "social norms-personal norms-consumer behavior" link has been tested in many fields, such as organic food consumption [26,31,32] and fair trade product consumption [25]. This research is the first to apply this theory to domestic product consumption.
Social norms are positively related to domestic product judgments, and domestic product judgments are positively related to domestic purchasing intention. Domestic product judgments play a significant mediating role. The results are consistent with H3 and previous studies [41,47], supporting attribution theory and the hypothesis of the economic man, i.e., people will attribute the fact that most people buy domestic products to their high quality to some extent, and better product judgments lead to higher purchasing intention [44].
Pro-domestic consumer ethnocentrism is positively related to domestic product judgments. The two factors have a sequential mediating effect on the relationship between social norms and domestic purchasing intention, while anti-foreign consumer ethnocentrism does not. These results are consistent with H4 and previous studies [52,56], supporting social identity theory, i.e., people who are identified with a group will have an in-group preference [22]. The results confirm that motivations influence cognitions and these factors function together in consumer behavior [47,67].
The current research shows that consumer ethnocentrism has two dimensions in China, namely, pro-domestic and anti-foreign consumer ethnocentrism. The mean score of the former is higher than midpoint 4, while the mean score of the latter is lower than 4. These results confirm previous studies conducted in China and are consistent with the situations in other countries, including Azerbaijan and Greece [5,55,59]. Moreover, this research finds that pro-domestic consumer ethnocentrism has a great impact on domestic product judgments and domestic purchasing intention, while anti-foreign consumer ethnocentrism does not. This is because anti-foreign consumer ethnocentrism solely represents a negative attitude towards purchasing foreign products, which does not necessarily imply a positive attitude towards buying domestic ones. Domestic purchasing intention can be influenced by numerous other factors. Hence, even if consumers hold negative attitudes towards foreign products, they will still take into account other factors to make their purchasing decisions. The different effects of the two dimensions of consumer ethnocentrism on domestic product judgments and domestic purchasing intention are consistent with previous studies [56,58].
Contributions and Implications
The current study makes theoretical contributions. First, it is the first to explore the mechanisms of social norms' effect on domestic purchasing intention, addressing the lack of empirical results and expanding the application of related theories, such as norm activation theory. Second, it is the first to investigate the effects of the two dimensions of consumer ethnocentrism on domestic product judgments and domestic purchasing intention at the same time, and the first to explore the effects of the two dimensions of consumer ethnocentrism on domestic product judgments in China, increasing the knowledge on Chinese consumers and enriching the literature on consumer ethnocentrism. This research also has practical implications. First, it shows a significant direct effect of social norms on domestic purchasing intention. This suggests that administrators can cultivate pro-domestic social norms, for example by issuing pro-domestic policies or using celebrity endorsement, to promote people's domestic purchasing intention. Second, it indicates a significant mediating effect of pro-domestic consumer ethnocentrism, rather than anti-foreign consumer ethnocentrism. This reminds us to pay attention to the internalization of social norms and to distinguish the different dimensions of consumer ethnocentrism. Appropriate policies should be set up to encourage people to support domestic products without excluding foreign ones, which will benefit both domestic and international economies. Third, it demonstrates a significant mediating effect of domestic product judgments, suggesting that more attention should be paid to improving the quality of domestic products and expanding the influence of domestic brands.
Limitations and Future Studies
The current research is not without limitations. First, it uses cross-sectional data, which can support only correlational claims. Future studies can conduct experiments to establish causal relationships. Second, it adopts a general construct of social norms, which cannot distinguish the effects of different types. Future studies can divide social norms into pro-domestic and anti-foreign social norms, or into descriptive and injunctive social norms, to obtain more specific results. Third, it measures purchasing intention only. Future studies can measure real purchasing behavior to obtain more concrete conclusions. Fourth, it uses a convenience sample in China, which limits the generalizability of the results to some extent. Future studies can verify the relationships in other samples or other cultures to improve the external validity of the current findings.
Conclusions
The current study reveals that social norms affect domestic purchasing intention through four paths, i.e., the direct path, the motivational path, the cognitive path, and the motivational-cognitive path. Consumer ethnocentrism and domestic product judgments, as the motivational factor and the cognitive factor, respectively, play mediating and serial mediating roles in the relationship between social norms and domestic purchasing intention. In addition, consumer ethnocentrism has two dimensions, i.e., pro-domestic and anti-foreign consumer ethnocentrism, and only the former plays a significant role in the model. This research makes theoretical contributions to research on domestic purchasing intention and has practical implications for interventions in domestic purchasing behavior.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2023,
"sha1": "26e2050493d5be2947770ffcd254021ad2e207f8",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/bs13060453",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f88cab268d1af7172e1bd927cb8b490a986fb93c",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
Dynamics of small grains in transitional discs
Transitional discs have central regions characterised by significant depletion of both dust and gas compared to younger, optically thick discs. However, gas and dust are not depleted by equal amounts: gas surface densities are typically reduced by factors of $\sim 100$, but small dust grains are sometimes depleted by far larger factors, to the point of being undetectable. While this extreme dust depletion is often attributed to planet formation, in this paper we show that another physical mechanism is possible: expulsion of grains from the disc by radiation pressure. We explore this mechanism using 2D simulations of dust dynamics, simultaneously solving the equation of radiative transfer with the evolution equations for dust diffusion and advection under the combined effects of stellar radiation and hydrodynamic interaction with a turbulent, accreting background gas disc. We show that, in transition discs that are depleted in both gas and dust fraction by factors of $\sim 100-1000$ compared to minimum mass Solar nebula values, radiative clearing of any remaining $\sim 0.5$ $\mu$m and larger grains is both rapid and inevitable. The process is size-dependent, with smaller grains removed fastest and larger ones persisting for longer times. Our proposed mechanism thus naturally explains the extreme depletion of small grains commonly found in transition discs. We further suggest that the dependence of this mechanism on grain size and optical properties may explain some of the unusual grain properties recently discovered in a number of transition discs. The simulation code we develop is freely available.
INTRODUCTION
Transitional discs are so named because they represent the stage in the evolution of a single star's protoplanetary disc between the optically thick Class II and optically thin Class III stages, within the framework of inside-out disc clearing (Alexander et al. 2014). Although sometimes confused with circumbinary discs (Espaillat et al. 2007; Ireland & Kraus 2008), these discs remain an important stage in single star evolution (e.g. Ruíz-Rodríguez et al. 2016) and may be a signpost for giant planet or multiple giant planet formation (Zhu et al. 2011; Dodson-Robinson & Salyk 2011). Even if they are not a sign of planet formation in all systems, the phase when the disc is transitioning from optically thick to thin in the giant planet formation region is generally when giant planets are at their most detectable in high contrast surveys, as the recent example of PDS 70 has shown (Keppler et al. 2018; Wagner et al. 2018). However, many disc features are ambiguous, and the planetary nature of such features remains highly debated in numerous discs including T Cha (Huélamo et al. 2011; Cheetham et al. 2015), the infrared emission in LkCa 15 (Kraus & Ireland 2012; Thalmann et al. 2016), HD 100546 (Quanz et al. 2013; Rameau et al. 2017) and HD 169142 (Biller et al. 2014; Reggiani et al. 2014; Ligi et al. 2018). Much of the confusion surrounding these features is attributable to the inadequacy of simple dust models to explain the observed scattered light emission. T Cha had very bright emission from forward scattering, LkCa 15 had both bright and red emission appearing in a forward scattering geometry (Ireland & Kraus 2014; Thalmann et al. 2015; Currie et al. 2019), and HD 169142 required emission from extremely small or quantum-heated grains in a disc where micron sized grains were largely absent (Birchall et al. 2019). These observed complex grain distributions motivate this paper, which considers grain segregation processes in transitional discs.
Most previous work on grain segregation in gaseous discs considers the effects of either settling or gas pressure gradients combined with variable dust stopping times. For example, Takeuchi & Lin (2002) considered the radial flow of dust particles in a disc where radiation pressure was neglected, and the disc evolved viscously with a constant $\alpha$ prescription. In transition discs, the lower gas densities mean that significantly smaller grains settle to the midplane, and as the disc becomes optically thin, radiation pressure on grains can become important. Takeuchi & Artymowicz (2001) considered the joint effects of a gas pressure gradient and radiation pressure in a disc that is optically thin throughout, neglecting the effects of gas accretion. They found that radiation could remove $<100$ $\mu$m dust grains from the inner tens of AU of a $10\,M_\oplus$ disc on $\sim$kyr timescales, and segregate grains according to their size. Takeuchi & Lin (2003) modeled the combined effects of radiation pressure and accretion, focusing on discs similar to a minimum mass solar nebula, where surface outward dust motion was almost negligible compared to the motion of the bulk of the disc inwards with the gas accretion. Tazaki & Nomura (2015) considered the motion of grains with a large radiative cross section in the surface layers of a minimum mass solar nebula ($10\,M_{\rm J}$ disc). They found that compact grains were not efficiently transported by radiation pressure, while smaller grains could be. Kenyon et al. (2016) considered a variety of mechanisms to remove dust from transition discs, and concluded either that planet formation must leave behind far fewer small grains than one might naively expect, or that some other mechanism, for example drag-induced accretion, must be available to clear dust from discs.
In this paper, we extend previous work by jointly considering the effects of gas pressure gradients and gas flows, stopping times that depend on grain radius, radiation, and turbulent diffusion, in order to study the temporal evolution of dust during the transitional disc period of a protoplanetary disc. In this period, gas masses within $\sim 30$ AU are between $\sim 10\,M_\oplus$ and $10\,M_{\rm J}$, and small grains have been depleted through settling and growth. Evidence for gas disc depletion comes primarily from CO observations (van der Marel et al. 2016), where typical gas depletions of order $10^{-3}$ are found for transition discs. We note that masses derived from HD (McClure et al. 2016) are significantly higher than CO-derived disc gas masses, but HD is more sensitive than CO to mass in the outer disc, so the HD results are less relevant to the region of interest for us. Even more significant depletions in both micron and mm sized grains are found, and can be attributed to grain growth beyond mm sizes in the inner disc (Birnstiel et al. 2012) and perhaps planetesimal formation. In this paper, we begin with initial conditions of a depleted gas disc that is partly depleted of dust, and show that the combined effects of radiation pressure and accretion are able to reduce the dust content much further, to the almost negligibly small levels seen in transition discs.
The code we use to carry out all of the simulations presented in this paper is available from https://bitbucket.org/krumholz/dustevol/ under an open source license. Full simulation outputs are available upon request from the authors, but are not included in the public repository due to their size.
DUST EVOLUTION MODEL
We are interested in modeling the evolution of a population of dust grains orbiting a central star of mass $M$ and luminosity $L$ in a background gas disc. We will treat grains as simple spheres of radius $a$ and density $\rho_s \approx 1$ g cm$^{-3}$, and we ignore coagulation and shattering, so that the total mass of dust grains of any given size is conserved. The grains move in response to radiation forces and to drag forces exerted by the gas; below we will assume that the stopping time of grains is small, so that these forces are always in balance and the grains are at their terminal velocity. We further separate grain motion into two types: systematic drift at a terminal velocity determined by force balance considering only the bulk velocity of the gas, and random motions as a result of local drag forces due to turbulent motions in the gas, which we approximate as a diffusion process. Under these assumptions, the density $\rho_{d,a}$ of dust grains of radius $a$ evolves following an advection-diffusion equation,
$$\frac{\partial \rho_{d,a}}{\partial t} = -\nabla \cdot \left( \rho_{d,a} \mathbf{v}_{d,a} + \mathbf{j}_{d,a} \right). \quad (1)$$
Here $\mathbf{v}_{d,a}$ is the bulk velocity and $\mathbf{j}_{d,a}$ is the flux due to turbulent diffusion. This formulation of the evolution equations is identical to that proposed by Takeuchi & Lin (2002). We now proceed to calculate the bulk velocity and diffusion coefficient. In the following discussion, we use $\varpi$ and $z$ to denote the radial and vertical position in a cylindrical coordinate system centred on the star, and $r = \sqrt{\varpi^2 + z^2}$ and $\theta = \cos^{-1}(z/r)$ to denote the corresponding radius and polar angle in a spherical coordinate system. We assume that the system is symmetric in the azimuthal angle. In the equations that follow, we aim to differentiate between radial and cylindrical components; a novel aspect of our model is the full treatment of two-dimensional drift in the presence of radiation pressure.
Gas disc model
As a first step, we specify our model for the gas disc through which the grains flow. We treat the gas disc as constant in time. The run of surface density and temperature through the disc are described by powerlaws,
$$\Sigma_g = \epsilon\,\Sigma_{\rm MMSN}\left(\frac{\varpi}{\rm AU}\right)^{p}, \qquad T = T_0\left(\frac{\varpi}{\rm AU}\right)^{q},$$
where $\epsilon$ is a dimensionless factor that scales the mass in the disc to that of the minimum mass Solar nebula (MMSN), $\Sigma_{\rm MMSN} \approx 2200$ g cm$^{-2}$, $T_0$ is the disc temperature at 1 AU ($T_0 \approx 120$ K for a Sun-like star), and $p$ and $q$ are constant. We are interested in transition discs, for which $\epsilon \ll 1$. Standard values for us are $p = -3/2$ and $q = -3/7$, as expected for a Chiang & Goldreich (1997) passive disc profile, but we will also consider the case of discs that have been depleted at small radii, and thus have $p = 0$. Our temperature profile neglects the effects of dust settling, and we further simplify the situation by assuming that the gas is vertically isothermal. Under this assumption the sound speed $c_s$ is constant with $z$, and the scale height $h_g$ follows the usual relation
$$h_g = \frac{c_s}{\Omega_{K,\rm mid}},$$
where $\Omega_{K,\rm mid} = \sqrt{GM/\varpi^3}$ is the Keplerian angular velocity at the midplane. Assuming the gas disc is in hydrostatic equilibrium, we have
$$\rho_g = \frac{\Sigma_g}{\sqrt{2\pi}\,h_g}\,e^{-z^2/2h_g^2} \approx 2.7\times10^{-9}\,\epsilon\left(\frac{\varpi}{\rm AU}\right)^{-39/14} e^{-z^2/2h_g^2}\ {\rm g\,cm^{-3}}, \quad (6)$$
where $P_g = \rho_g k_B T/\mu m_{\rm H}$ is the gas pressure under the reasonable approximation of an ideal gas, $\mu$ is the mean molecular weight in units of the hydrogen mass $m_{\rm H}$, $\rho_g$ is the gas volume density, and the final equation contains fiducial MMSN values and scalings (Chiang & Youdin 2009). Also assuming hydrostatic equilibrium in the radial direction, we have (Chiang & Youdin 2009)
$$\Omega_g^2 \varpi = \frac{GM\varpi}{r^3} + \frac{1}{\rho_g}\frac{\partial P_g}{\partial \varpi},$$
where $\Omega_g$ is the gas angular velocity. Expanding this equation in powers of $z/r$ and keeping terms up to order $(z/r)^2$ (Takeuchi & Lin 2002), we have
$$\Omega_g^2 = \Omega_K^2\left\{1 + \left(\frac{h_g}{\varpi}\right)^2\left[p + \frac{q-3}{2} + \frac{q+3}{2}\frac{z^2}{h_g^2}\right]\right\}.$$
This is conveniently expressed as:
$$\Omega_g = \Omega_K\sqrt{1-\eta}, \qquad \eta \equiv -\left(\frac{h_g}{\varpi}\right)^2\left[p + \frac{q-3}{2} + \frac{q+3}{2}\frac{z^2}{h_g^2}\right].$$
Here $\Omega_K = \sqrt{GM/r^3}$ is the Keplerian speed at height $z$ above the midplane and $\Omega_g$ is the gas angular velocity, which is smaller than the Keplerian velocity by a factor of $\sqrt{1-\eta}$ due to the effects of pressure support.
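The background disc model above is straightforward to evaluate numerically. The following is a minimal sketch in Python (cgs units); the mean molecular weight $\mu = 2.34$ is an assumption, as it is not stated explicitly in the text.

```python
# Minimal sketch of the background gas disc model; cgs units throughout.
import numpy as np

G, MSUN, AU = 6.674e-8, 1.989e33, 1.496e13
KB, MH = 1.381e-16, 1.673e-24
SIGMA_MMSN, T0, MU = 2200.0, 120.0, 2.34   # MU = 2.34 is an assumed value

def gas_disc(varpi, z, eps=1e-2, p=-1.5, q=-3.0/7.0, mstar=MSUN):
    """Return (rho_g, c_s, h_g, Omega_K_mid) at cylindrical position (varpi, z)."""
    sigma = eps * SIGMA_MMSN * (varpi / AU)**p           # surface density
    cs = np.sqrt(KB * T0 * (varpi / AU)**q / (MU * MH))  # isothermal sound speed
    omega_mid = np.sqrt(G * mstar / varpi**3)            # midplane Keplerian frequency
    h_g = cs / omega_mid                                 # scale height
    rho_g = sigma / (np.sqrt(2 * np.pi) * h_g) * np.exp(-z**2 / (2 * h_g**2))
    return rho_g, cs, h_g, omega_mid
```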
In order to solve for the dust velocity we will require an expression for the gas radial velocity. Specifically we require a height-dependent velocity to be self-consistent. For simplicity we choose an $\alpha$-disc model for this purpose, though since the gas radial velocity $v_{g,\varpi}$ is generally small, the details of the model will not be tremendously important. We begin with the azimuthal component of the momentum equation:
$$\rho_g\left[\frac{v_{g,\varpi}}{\varpi}\frac{\partial}{\partial\varpi}\left(\varpi^2\Omega_g\right) + \frac{v_{g,z}}{\varpi}\frac{\partial}{\partial z}\left(\varpi^2\Omega_g\right)\right] = \frac{1}{\varpi^2}\frac{\partial}{\partial\varpi}\left(\rho_g\nu\varpi^3\frac{\partial\Omega_g}{\partial\varpi}\right),$$
where $\nu$ is the kinematic viscosity. The term $v_{g,z}(\partial/\partial z)(\varpi^2\Omega_g)$ is smaller than the $v_{g,\varpi}$ term by a factor of $h_g/\varpi$, and so can be neglected (Takeuchi & Lin 2002). Solving for the radial gas velocity, and retaining terms up to order $(h_g/r)^2$, we find
$$v_{g,\varpi} = -3\,\alpha\,c_s\,\frac{h_g}{\varpi}\left[p + \frac{q+1}{2} + \frac{q+3}{2}\frac{z^2}{h_g^2}\right], \quad (13)$$
where $\alpha = \nu/(c_s h_g)$ is the usual dimensionless viscosity (see also Keller & Gail 2004). Equation 13 gives the gas radial velocity as a function of cylindrical radius (implicitly, through $r$ and $z$) and height $z$ above the midplane.
Dust velocity
Now consider a dust grain, working in a reference frame corotating with the grain at azimuthal velocity $v_{d,\phi}$. In this frame, the equation of motion in the $(\varpi, z)$ plane, considering only the bulk velocity of the gas and not its small-scale turbulent motion, and neglecting Poynting-Robertson drag, is
$$\frac{d\mathbf{v}_d}{dt} = \frac{v_{d,\phi}^2}{\varpi}\,\hat{\varpi} - (1-\beta)\,\frac{GM}{r^2}\,\hat{r} + \frac{\mathbf{F}_{\rm drag}}{m_d},$$
where $\beta$ is the ratio of outward radiation pressure force to inward gravitational force, $\mathbf{F}_{\rm drag}$ is the force exerted by gas drag, and $m_d = (4/3)\pi\rho_s a^3$ is the grain mass. The first term here represents the centrifugal force, the second is the radiative minus gravitational force, and the third is the gas drag force. We omit the Coriolis force because it exerts forces only in the azimuthal direction.
Radiation force
In general the radiation pressure force must be determined by integrating in frequency. Following Wolfire & Cassinelli (1987), if the central star has specific luminosity $L_\nu$ and we neglect the scattered and dust-reprocessed component of the radiation field compared to the direct stellar field, then the radiation pressure force on a grain is
$$\mathbf{F}_{\rm rad} = \frac{\pi a^2}{c}\,\hat{r}\int_0^\infty \frac{L_\nu e^{-\tau_\nu}}{4\pi r^2}\left[Q^A_{a,\nu} + \left(1-g_{a,\nu}\right)Q^S_{a,\nu}\right]d\nu, \quad (15)$$
where $Q^A_{a,\nu}$ and $Q^S_{a,\nu}$ are the absorption and scattering efficiencies for grains of size $a$ at frequency $\nu$, and $g_{a,\nu}$ is the cosine of the mean scattering angle (with $g_{a,\nu} = 1$ indicating complete forward scattering). The optical depth from the stellar surface at radius $r_*$ to the radial distance $r$ of the grain is
$$\tau_\nu = \int_{r_*}^{r}\sum_a \frac{3\rho_{d,a}}{4\rho_s a}\left[Q^A_{a,\nu} + \left(1-g_{a,\nu}\right)Q^S_{a,\nu}\right]dr',$$
where $\rho_s$ is the density of the grains. Evaluation of Equation 15 in general must be done numerically, and is numerically expensive if one requires high frequency resolution. However, the simplest application and one with broad applicability is for grains of size $a$ much larger than the wavelength of photons at the peak of the stellar spectral energy distribution. Specifically, if the stellar effective temperature is $T_*$, yielding a wavelength of peak emission per unit wavelength $\lambda_* \approx hc/(4.965 k_B T_*)$, grains will be in the limit of geometric optics, $Q^A_{a,\nu} + (1-g_{a,\nu})Q^S_{a,\nu} = 1$, if their size obeys
$$a \gtrsim \frac{\lambda_*}{2\pi} \approx 0.08\left(\frac{T_*}{5780\ {\rm K}}\right)^{-1}\mu{\rm m}. \quad (17)$$
For such grains, the radiation force and optical depth reduce to
$$\mathbf{F}_{\rm rad} = \frac{L e^{-\tau}}{4\pi r^2 c}\,\pi a^2\,\hat{r}, \qquad \tau = \int_{r_*}^{r}\sum_a \frac{3\rho_{d,a}}{4\rho_s a}\,dr'.$$
With this simplification, it is convenient to express the ratio of radiation pressure force to gravitational force as simply
$$\beta = \beta_0\,e^{-\tau},$$
where
$$\beta_0 = \frac{3L}{16\pi G M c\,\rho_s a}$$
is the ratio of radiative to gravitational force for a grain of size $a$ exposed to the full, unshielded luminosity of the star.
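A sketch of how $\beta$ can be evaluated in this geometric-optics limit follows; the solar fiducial values are illustrative assumptions.

```python
# Sketch of the radiation-to-gravity ratio beta = beta_0 exp(-tau) in the
# geometric-optics limit; solar values are illustrative assumptions.
import numpy as np

G, C = 6.674e-8, 2.998e10
MSUN, LSUN = 1.989e33, 3.828e33

def beta0(a, lstar=LSUN, mstar=MSUN, rho_s=1.0):
    """Unshielded force ratio for a grain of radius a (cm)."""
    return 3.0 * lstar / (16.0 * np.pi * G * mstar * C * rho_s * a)

def beta(a, tau, **kwargs):
    """Force ratio attenuated by the optical depth tau to the star."""
    return beta0(a, **kwargs) * np.exp(-tau)
```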
Drag force
We compute the drag force under the assumption that the grains are small enough to obey the Epstein drag law,
$$\mathbf{F}_{\rm drag} = -\frac{4\pi}{3}\,\rho_g a^2 v_{\rm th}\,\Delta\mathbf{v}, \qquad v_{\rm th} = \sqrt{\frac{8}{\pi}}\,c_s,$$
where $\rho_g$ is the gas density, $c_s$ is the gas sound speed, and $\Delta\mathbf{v}$ is the relative velocity of the gas and dust. Combining the radiative and drag terms, the total equation of motion for a single grain of size $a$ is
$$\frac{d\mathbf{v}_d}{dt} = \frac{v_{d,\phi}^2}{\varpi}\,\hat{\varpi} - \left(1-\beta_0 e^{-\tau}\right)\frac{GM}{r^2}\,\hat{r} - \frac{\mathbf{v}_d - \mathbf{v}_g}{t_s}, \quad (23)$$
where
$$t_s = \frac{\rho_s a}{\rho_g v_{\rm th}}$$
is the usual stopping time. Below it will be more convenient to work with the dimensionless stopping time
$$T_s = \Omega_K t_s. \quad (25)$$
Note that, although we will not write this out explicitly for reasons of compactness, it is important to recall that $T_s$ is a function of the grain size $a$.
If we limit ourselves to considering grains of size $a$ such that $T_s \ll 1$ near the midplane, then over timescales longer than an orbit the left hand side of Equation 23 approaches zero in the radial and vertical direction as the grains reach terminal velocity. The condition for this to hold is that
$$a \ll \sqrt{\frac{8}{\pi}}\,\frac{\rho_{g,\rm mid}\,c_s}{\rho_s\,\Omega_K} \approx 1.4\times10^{3}\,\epsilon\left(\frac{\varpi}{\rm AU}\right)^{-3/2}{\rm cm},$$
where $\rho_{g,\rm mid}$ is the midplane gas density, and in the second step we have taken $M = M_\odot$ and inserted our fiducial value for $\rho_g$ (Equation 6). Thus our approximations that grains can be treated in the geometric optics limit and that they reach terminal velocity quickly are valid over a wide range of grain sizes -- from $\approx 0.1$ $\mu$m up to cm to m, depending on the value of $\epsilon$.
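The Epstein stopping time and its dimensionless form are then one-liners; the sketch below assumes cgs inputs.

```python
# Sketch of the Epstein stopping time t_s = rho_s a / (rho_g v_th) and the
# dimensionless T_s = Omega_K t_s; cgs inputs assumed.
import numpy as np

def stopping_time(a, rho_g, cs, rho_s=1.0):
    v_th = np.sqrt(8.0 / np.pi) * cs     # mean thermal speed
    return rho_s * a / (rho_g * v_th)

def dimensionless_stopping_time(a, rho_g, cs, omega_k, rho_s=1.0):
    return omega_k * stopping_time(a, rho_g, cs, rho_s)
```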
In the vertical direction we find the dust terminal velocity to first order is
$$v_{d,z} = v_{g,z} - (1-\beta)\,\Omega_K^2 z\,t_s = -(1-\beta)\,T_s\,\Omega_K z,$$
where we have taken $v_{g,z} = 0$ to arrive at the second equation. To obtain an expression for the radial drift, first consider the azimuthal component of the dust momentum equation. Because the dominant source of angular momentum is simply Keplerian motion, we can relate the rate of angular momentum change to the drift rate of solids (Pinilla & Youdin 2017):
$$\frac{d}{dt}\left(\varpi v_{d,\phi}\right) \approx v_{d,\varpi}\frac{d}{d\varpi}\left(\varpi v_K\right) = \frac{1}{2}\,v_{d,\varpi}\,v_K.$$
We can use this relation to replace the L.H.S. of the $\phi$ component of Equation 23, which simplifies to
$$v_{d,\phi} - v_{g,\phi} = -\frac{T_s}{2}\,v_{d,\varpi}.$$
We can use this relative velocity in the azimuthal direction to solve for the dust terminal velocity in the (cylindrical) radial direction. From Equation 23 we have:
$$0 = \frac{v_{d,\phi}^2}{\varpi} - (1-\beta)\frac{GM}{r^2}\frac{\varpi}{r} - \frac{v_{d,\varpi} - v_{g,\varpi}}{t_s}. \quad (31)$$
We require a linearized expression for the dust azimuthal velocity $v_{d,\phi}$. Following Takeuchi & Lin (2002) and Pinilla & Youdin (2017), we can remove higher order terms by relating
$$v_{d,\phi}^2 \approx v_{g,\phi}^2 + 2\,v_{g,\phi}\left(v_{d,\phi} - v_{g,\phi}\right).$$
Solving for $v_{d,\phi}^2$ and inserting back into Equation 31, we finally arrive at the cylindrical terminal velocity:
$$v_{d,\varpi} = \frac{T_s^{-1}\,v_{g,\varpi} + (\beta - \eta)\,v_K}{T_s + T_s^{-1}}, \quad (33)$$
where $v_K = \Omega_K\varpi$ is the Keplerian velocity at the position of the grain.
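The terminal-velocity expressions above translate directly into code; a minimal sketch, with all arguments being local values:

```python
# Sketch of the terminal-velocity formulae for a single grain; inputs are the
# local dimensionless stopping time Ts, force ratio beta, pressure-support
# parameter eta, gas radial velocity, and Keplerian speed/frequency.
def dust_velocity_z(Ts, beta, omega_k, z):
    # vertical settling toward the midplane, with v_g,z = 0
    return -(1.0 - beta) * Ts * omega_k * z

def dust_velocity_varpi(Ts, beta, eta, v_g_varpi, v_k):
    # outward for beta > eta; reduces to the usual drag-drift result for beta = 0
    return (v_g_varpi / Ts + (beta - eta) * v_k) / (Ts + 1.0 / Ts)
```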
Turbulent diffusion
Having solved for the bulk velocity of the gas, we next calculate the rate of turbulent mixing. Following Takeuchi & Lin (2002), we model the diffusive flux of dust as
$$\mathbf{j}_{d,a} = -\rho_g D_{d,a}\,\nabla f_{d,a},$$
where $\alpha$ is the dimensionless viscosity, and we have defined $f_{d,a} = \rho_{d,a}/\rho_g$ as the dust mass fraction for grains of size $a$, and
$$D_{d,a} = \frac{\alpha c_s h_g}{1 + T_s^2} \quad (37)$$
as the diffusion coefficient for dust grains of size $a$. Note, however, that $D_{d,a}$ is the diffusion coefficient for grain concentration, rather than grain density.
Derivation
Before proceeding to full numerical solution of Equation 1, it is helpful to gain insight by considering a simplified system that we can solve semi-analytically. We do so by making the following approximations. First, we neglect the vertical structure of the disc, and focus on a small radial section so that we can neglect curvature (i.e., we treat the coordinate system as Cartesian, with the $x$ direction aligned with the radial direction), and can treat the initial dust density distribution, background gas disc, and Keplerian speed as uniform (i.e., $\rho_g$, $v_K$, and $c_s$ are all constant). Second, we consider only a single size of dust grain $a$, with constant stopping time $t_s$. Third, we neglect both the radial inflow of the gas and the slow inward drift of dust compared to gas as a result of drag, i.e., we set $v_{g,\varpi} = 0$ and $\eta = 0$. While these assumptions are obviously oversimplifications, they retain the essence of the problem: the radial evolution of the dust will be determined by the competition between radiation pressure forces, which attempt to sweep the dust up into an outward-moving shell, and diffusion, which attempts to force the dust distribution back towards uniform.
Under the approximations we have described, Equation 1 reduces to the one-dimensional PDE
$$\frac{\partial\rho_d}{\partial t} = -v_0\frac{\partial}{\partial x}\left(e^{-\tau}\rho_d\right) + D\frac{\partial^2\rho_d}{\partial x^2}, \qquad v_0 = \beta_0\,T_s\,v_K, \quad (38)$$
where we have dropped the subscript $a$'s since we are considering only a single grain size, and we orient our coordinate system so the star lies at $x \ll 0$. As an initial condition we take $\rho_d = 0$ for $x < 0$ and $\rho_d = \rho_{d,0}$ for $x > 0$, i.e., the dust initially occupies the positive half-plane. With this initial condition, we can write the optical depth to position $x$ as
$$\tau(x) = \frac{3}{4\rho_s a}\int_{-\infty}^{x}\rho_d\,dx'.$$
The first step in solving Equation 38 is to nondimensionalise it. We normalise the density to the initial density, measure length in units of the optical depth at the initial density, and measure time in units such that the diffusion coefficient is unity. Mathematically, this amounts to making a change of variables
$$x' = \frac{x}{x_d}, \qquad t' = \frac{t}{t_d}, \qquad \rho'_d = \frac{\rho_d}{\rho_{d,0}}, \qquad x_d = \frac{4\rho_s a}{3\rho_{d,0}}, \qquad t_d = \frac{x_d^2}{D}.$$
Here $x_d$ and $t_d$ are the characteristic length and time scales for the problem. This allows us to rewrite Equation 38 as
$$\frac{\partial\rho'_d}{\partial t'} = -\chi\frac{\partial}{\partial x'}\left(e^{-\tau}\rho'_d\right) + \frac{\partial^2\rho'_d}{\partial x'^2}, \qquad \tau = \int_{-\infty}^{x'}\rho'_d\,dx'', \quad (41)$$
where
$$\chi = \frac{v_0 x_d}{D} = \frac{4}{3}\sqrt{\frac{8}{\pi}}\;\frac{\beta_0\,T_s^2\left(1+T_s^2\right)}{\alpha\,f_{d,0}}\;\frac{v_K}{c_s}. \quad (43)$$
Here we have used Equation 37 and Equation 25 for $D_d$ and $T_s$, respectively, $r = v_K/\Omega_K$ is the radial location of our region of interest, and $f_{d,0} = \rho_{d,0}/\rho_g$ is the initial dust fraction. The interstellar dust abundance is $f_{d,0} \approx 0.01$, but the transition discs in which we are interested have undergone considerable grain agglomeration into larger bodies, and have observed dust abundances that lie more in the range $\sim 10^{-5} - 10^{-4}$ (e.g., van der Marel et al. 2016).
Thus we see that our simplified 1D system represents a single-parameter family of PDEs. The parameter χ characterises the relative importance of advection by radiation forces (the second term in Equation 38) and diffusion by the gas (the third term). Intuitively, we expect that radiation forces on the exposed face of the dust at x = 0 will begin to sweep dust into an advancing wave, which will be spread out to a characteristic width determined by diffusion. The parameter χ controls the characteristic speed with which the wave moves, sweeping up dust as it goes. The value of χ in a real disc obviously varies significantly depending on the local properties, as we discuss in further detail in Section 3.4, but for the cases of greatest interest to us we will have χ in the range tens to thousands.
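For reference, a minimal sketch evaluating $\chi$ from local disc and grain properties, using the form of Equation 43 above; the numerical prefactor belongs to that expression and should be treated as part of the sketch rather than a definitive published coefficient.

```python
# Sketch evaluating the advection-to-diffusion ratio chi (Equation 43);
# all inputs are local, dimensionless or cgs quantities.
import numpy as np

def chi(beta0, Ts, f_d0, alpha, v_k, cs):
    return (4.0 / 3.0) * np.sqrt(8.0 / np.pi) \
        * beta0 * Ts**2 * (1.0 + Ts**2) * v_k / (alpha * f_d0 * cs)
```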
Semi-analytic Solution
We cannot obtain an exact analytic solution even to Equation 38, but we can derive some analytic constraints on the asymptotic behaviour of the solution, which we can use to derive a semi-analytic model. First note that for material at high optical depth, i.e., any dust that begins at $x' \gg 1$, the advection term is negligible because it is proportional to $e^{-\tau}$. Thus the equation reduces to
$$\frac{\partial\rho'_d}{\partial t'} = \frac{\partial^2\rho'_d}{\partial x'^2},$$
which we can solve via the usual similarity transformation for diffusion problems, $\zeta = x'/2\sqrt{t'}$. This reduces the problem to an ODE, which has the solution
$$\rho'_d = c_1 + c_2\,{\rm erf}(\zeta).$$
The two constants of integration $c_1$ and $c_2$ are determined by the boundary conditions. One of them can be fixed by requiring that $\rho'_d \to 1$ as $\zeta = x'/2\sqrt{t'} \to \infty$, i.e., that the density approach the initial density far upstream of the advancing dust wave. Applying this condition, the analytic solution at large $\tau$ must approach
$$\rho'_d = 1 + k\,{\rm erfc}(\zeta), \quad (46)$$
where $k$ is a constant that depends on $\chi$, and ${\rm erfc}(x) = 1 - {\rm erf}(x)$ is the complementary error function.
Thus the solution will consist of a low-optical depth, low-density downstream region over which the dust wave has already passed, a transition zone where $\tau \approx 1$ located at position $s(t')$, and an upstream region where the solution approaches Equation 46. At late times, when $s(t') \gg 1$, the great majority of the dust mass that was initially at $x' < s(t')$ must be in the upstream region, since by definition the downstream and transition regions contain a mass per unit area of order unity in our dimensionless variables. Thus conservation of mass requires that for $s(t') \gg 1$ we have
$$\int_{s(t')}^{\infty}\left[\rho'_d - 1\right]dx' \approx s(t'). \quad (47)$$
This equation can be satisfied for arbitrary $t'$ only if we have
$$k = \frac{\lambda}{{\rm ierfc}(\lambda)}, \qquad {\rm ierfc}(\lambda) \equiv \frac{e^{-\lambda^2}}{\sqrt{\pi}} - \lambda\,{\rm erfc}(\lambda),$$
$$s(t') = 2\lambda\sqrt{t'}. \quad (49)$$
Thus we learn that the position of the dust wave at late times must be proportional to $\sqrt{t'}$, with a constant of proportionality that depends on $\chi$.¹

¹ Although Equation 46 and Equation 49 are exact, they are inconvenient for practical computation when $\lambda \gtrsim 5$, because $k$ becomes very large and ${\rm erfc}(x'/2\sqrt{t'})$ very small. In this case it is preferable to evaluate $\rho'_d = 1 + k\,{\rm erfc}(\zeta)$ via the asymptotic series ${\rm erfc}(\zeta) \approx (e^{-\zeta^2}/\zeta\sqrt{\pi})\,(1 - 1/2\zeta^2 + 3/4\zeta^4 - \ldots)$, grouping the exponential factors in $k$ and ${\rm erfc}(\zeta)$ before evaluating them.

Table 1. Parameters and results for simulations of the simplified 1D system. Here $\chi$ is the dimensionless advection to diffusion ratio, $t'_{\rm max}$ is the dimensionless time $t'$ for which we run the simulation, $L$ is the dimensionless size of the simulation domain, $\Delta x'_{\rm min}$ is the spatial resolution of the best-resolved run, $k$ is the estimated order of convergence (error $\propto \Delta x'^{\,k}$), $\lambda$ is our best estimate for $\lambda$ based on Richardson extrapolation, and ${\rm Err}(\lambda)$ is the estimated fractional error on $\lambda$. See main text for details of how $\Delta x'_{\rm min}$, $k$, $\lambda$, and ${\rm Err}(\lambda)$ are computed.
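In practice, an alternative to the series expansion in the footnote is to use the scaled complementary error function; the sketch below (not taken from the paper's code) is one way to evaluate the upstream profile stably for large $\lambda$, given the normalisation $k = \lambda/{\rm ierfc}(\lambda)$ above.

```python
# A sketch of a numerically stable evaluation of the upstream profile
# rho' = 1 + k erfc(zeta) for large lambda, using the scaled complementary
# error function erfcx(x) = exp(x^2) erfc(x) from SciPy.
import numpy as np
from scipy.special import erfcx

def upstream_density(zeta, lam):
    # scaled ierfc: exp(lam^2) * ierfc(lam) = 1/sqrt(pi) - lam * erfcx(lam)
    ierfcx = 1.0 / np.sqrt(np.pi) - lam * erfcx(lam)
    # k erfc(zeta) = lam * erfcx(zeta) * exp(lam^2 - zeta^2) / ierfcx; in the
    # upstream region zeta >= lam, so the exponent is <= 0 and nothing overflows
    return 1.0 + lam * erfcx(zeta) * np.exp(lam**2 - zeta**2) / ierfcx
```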
Numerical Solution
We can verify this analytic calculation, and calibrate the dependence of $\lambda$ on $\chi$, using numerical solutions to Equation 41. We solve the system using a 1D finite volume method that is second order accurate in both space and time; we give a full description of the method in Appendix A. In Figure 1 we show an example solution to Equation 41 for $\log\chi = 1.5$. The parameters for this run are included in Table 1. As predicted, the position of the dust front (defined here by the location of maximum dust density) as a function of time is fit extremely well by $s(t') \propto t'^{1/2}$; a simple least-squares fit to the solution shown in Figure 1 gives $\lambda = 3.81$.

Figure 1. Top: dimensionless density $\rho'_d$ as a function of position $x'$ at five different times, for the case $\log\chi = 1.5$ at a resolution $\Delta x' = 1/128$. Note that the region plotted is larger than the domain size $L$ due to our sliding grid; see Appendix A for details. Bottom: position of the dust front $s(t')$ as a function of dimensionless time $t'$. The blue points show the simulation results, where we define the front location as the position of maximum dust density (every 5th time plotted, to avoid clutter), while the black dashed line shows the best-fitting semi-analytic solution.

In order to determine the dependence of $\lambda$ on $\chi$, we solve the system numerically at a range of $\chi$ values. We list all the simulations we carry out in Table 1; the domain sizes $L$ and simulation times $t'_{\rm max}$ are chosen to ensure that the width of the dust wave is $\ll L$ at all times, and that the dust wave advances to $x' \approx 20$, by which point the wave position as a function of time has always converged very well to the asymptotic $s \propto \sqrt{t'}$ behaviour we predict analytically. We ensure that our results are converged in resolution by carrying out a convergence study: for every case, we first run the simulation at a resolution of 64 cells and then 128 cells, compute $\lambda$ via a least-squares fit to the front position as a function of time at both resolutions, and compare the results. If they differ by more than 1%, we double the resolution again, to 256 cells, and repeat the process. We continue doubling the resolution until either (1) we reach a resolution of 4096 cells or (2) for the highest two resolutions, the two values of $\lambda$ that emerge from our fit differ by < 1%. We then use a Richardson extrapolation of the resolution-dependent results to generate our final estimate for $\lambda$ for that value of $\chi$; we do this in two steps. First, we estimate the order of convergence by fitting the difference between the outcome at a given resolution and the outcome at the maximum resolution, as a function of resolution.² Second, we use this estimate for the extrapolation. We list the final, extrapolated value of $\lambda$, the order of convergence, and the maximum resolution we use in Table 1. We also give our estimated fractional error in $\lambda$, which we take to be the difference between the final two Richardson extrapolates, normalised by our final estimate of $\lambda$. Figure 2 shows $\lambda$ as a function of $\chi$ from our study. Clearly over the range we have studied, the data are consistent with a simple powerlaw relationship between $\lambda$ and $\chi$. A least squares fit to the data in Table 1, including our estimated uncertainties returned by the Richardson extrapolation procedure, is a powerlaw of the form
$$\lambda = \lambda_0\,\chi^{s}, \quad (50)$$
with fitted coefficients $\lambda_0$ and $s$.

² Formally our method is second-order accurate for smooth flows. However, the flow is not smooth in the vicinity of the maximum dust density, where $\tau \approx 1$. Since the behaviour in this region is critical to determining the solution, the actual accuracy will be worse than second order. We find typical convergence orders of 1-2 depending on $\chi$.
Astrophysical Implications
We are now in a position to consider the astrophysical implications of this finding. For any specified dust and gas distribution in a disc, we can compute $\chi$ from Equation 43 at any point in the disc, and then from our semi-analytic solution $s(t') = 2\lambda\sqrt{t'}$, we can compute the characteristic time $t'$ that would be required for radiation pressure to move the dust at that position a distance comparable to its current distance from the star. To be precise, at any point a radial distance $r$ from the star, we define the dimensionless radiative dust clearing time $t'$ by the condition that $r/x_d = 2\lambda\sqrt{t'}$. The corresponding dimensional time is
$$t_{\rm clr} = \left(\frac{r}{2\lambda x_d}\right)^2 t_d = \frac{r^2}{4\lambda^2 D_d},$$
which is the (dimensional) time that would be required for a dust wave following the semi-analytic solution derived in the previous section to move a distance $r$. For small grains ($T_s \ll 1$) in a thin, moderately-accreting disc with $v_K/c_s \sim 10^2$ and $\alpha \sim 10^{-2}$, significant dust sweeping within $10^6$ orbits is expected for $\lambda \sim 1-10$, corresponding to $\chi$ of tens to hundreds.
We can also write the clearing timescale in terms of the classical viscous accretion timescale $t_{\rm acc} = r^2/\nu$, where $\nu = \alpha c_s^2/\Omega_K$ is the kinematic viscosity. Then we simply have
$$\frac{t_{\rm clr}}{t_{\rm acc}} = \frac{1 + T_s^2}{4\lambda^2}, \quad (53)$$
and we again see that we expect grains to be cleared faster than they accrete only for $\lambda \gtrsim 1$, meaning $\chi \gtrsim 10$. Finally, it is instructive to insert our best-fit scaling for $\lambda$ as a function of $\chi$, Equation 50, into Equation 53, and then to substitute for $\chi$ using Equation 43. Doing so gives
$$t_{\rm clr} \approx \frac{1 + T_s^2}{4\lambda_0^2\,\chi^{2s}}\,t_{\rm acc},$$
with $\chi$ given by Equation 43. Thus we see that the dust clearing timescale is very sensitive to both the grain size (roughly $t_{\rm clr} \propto a^{-2}$) and the background gas density ($t_{\rm clr} \propto \rho_g^2$). Thus smaller grains in denser gas are much more resistant to clearing, while larger grains are easier to clear.
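A sketch of the resulting timescale comparison follows; the fit coefficients `lam0` and `s` must be taken from Equation 50 and are left as explicit inputs rather than hard-coded values.

```python
# Sketch comparing the clearing time t_clr = r^2 / (4 lambda^2 D) with the
# viscous accretion time t_acc = r^2 / nu; lam0 and s come from Equation 50.
def t_clear(r, chi, D, lam0, s):
    lam = lam0 * chi**s
    return r**2 / (4.0 * lam**2 * D)

def t_accrete(r, nu):
    return r**2 / nu
```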
To give a sense of the numerical values implied by Equation 53, let us consider a disc in which the dust and gas are initially in equilibrium in the absence of radiative forces or radial transport (i.e., with $v_{d,a,\varpi} = 0$, $j_{d,a,\varpi} = 0$, and $\beta = 0$ in Equation 1). Takeuchi & Lin (2002) show that the vertical distribution of grains of size $a$ in such a disc has a steady-state solution
$$\rho_{d,a}(\varpi, z) = \rho_{d,a}(0)\exp\left[-\frac{z^2}{2 h_g^2} - \frac{{\rm Sc}\,T_{s,\rm mid}}{\alpha}\left(e^{z^2/2h_g^2} - 1\right)\right], \quad (54)$$
where $T_{s,\rm mid}$ is the stopping time evaluated at the midplane gas density, ${\rm Sc} = 1 + T_{s,\rm mid}^2$ is the Schmidt number for a particular grain size $a$, and the midplane density $\rho_{d,a}(0)$ is set by requiring that
$$\int_{-\infty}^{\infty}\rho_{d,a}\,dz = f_{\rm dust}\,\Sigma_g.$$
Here $\Sigma_g$ is the gas column density at cylindrical radius $\varpi$, and $f_{\rm dust}$ is the dust to gas ratio. For any given choices of parameters describing the star ($M$, $L$), the disc ($\epsilon$, $f_{\rm dust}$, $T_0$, $\alpha$, $p$, $q$) and the dust ($a$, $\rho_s$), we can use this expression to compute the dust and gas densities at every point, and then use these to compute $\chi$ and $t_{\rm clr}$. In Figure 3 we show an example map of $\chi$ and $t_{\rm clr}$ for a transition disc with $\epsilon = 10^{-2}$, $f_{\rm dust} = 10^{-4}$. In the example shown, the midplane is quite resistant to dust clearing ($t_{\rm clr} \gtrsim 1$ Myr), but above the midplane significant clearing is possible on timescales well under a Myr. Recall, however, that we have the near-proportionality $t_{\rm clr} \propto f_{d,0}\,\rho_g^2$. Thus we expect clearing to become much more rapid as we move to discs that are more dust- or gas-depleted than the example shown in Figure 3. Conversely, for richer discs clearing will be much slower.

Figure 3. Example map of $\chi$ and $t_{\rm clr}$ for a transition disc with $\epsilon = 10^{-2}$, $f_{\rm dust} = 10^{-4}$, $T_0 = 120$ K, $\alpha = 10^{-3}$, $p = -3/2$, $q = -3/7$, $a = 0.1$ $\mu$m, $\rho_s = 1$ g cm$^{-3}$. The top row shows these quantities in true position ($\varpi$, $z$), with the axes sized to reflect the true disc aspect ratio. In the bottom row we show the same data, but in coordinates ($\log\varpi$, $\mu = z/r$), so that the inner disc is more clearly visible, and radial rays from the central star correspond to horizontal lines. White lines are contours of dust density $\rho_d$, starting with $\rho_d = 10^{-24}$ g cm$^{-3}$ for the lowest contour and increasing by factors of 100 for each successive contour. We do not show $\chi$ and $t_{\rm clr}$ for $\rho_d < 10^{-30}$ g cm$^{-3}$.
2D SIMULATIONS
Armed with the general understanding provided by the simplified 1D system solved in Section 3, we now proceed with full numerical solutions to Equation 1 in 2D, including the full spatial dependence of the background gas disc. We summarise the properties of the runs we carry out in Table 2. Motivated by Figure 3, we take $\epsilon = 10^{-2}$, $f_{\rm dust} = 10^{-4}$ as our most gas- and dust-rich case, and explore from those values downward.
Numerical Method
We solve Equation 1 using a conservative finite volume method that we fully describe in Appendix B. Our method is second-order accurate in time, second-order accurate in space for the diffusion terms, and third-order accurate for the advection terms. The calculation operates on a 2D spherical polar grid defined by coordinates $(r, \theta)$, which we divide into $N_r \times N_\theta$ cells. For convenience, since we will go back and forth between polar and cylindrical coordinates, we will use $\mu = \cos\theta = z/r$ as our coordinate rather than $\theta$; $z$ and $\mu$ both increase in the same direction, and $\mu = 0$ corresponds to $z = 0$. The inner and outer radial edges of the grid lie at $r = r_{\rm min}$ and $r_{\rm max}$, respectively, and in the polar direction the edges of the outermost cells are at $\mu = 0$ and $\mu = \mu_{\rm max}$. We assume symmetry about the midplane at $\mu = 0$. All the simulations we present here use $N_r = 512$, $N_\theta = 256$, $r_{\rm min} = 0.1$ AU, $r_{\rm max} = 50$ AU, and $\mu_{\rm max} = 0.1$.
In each computational cell we track the density of $N_a$ logarithmically-spaced grain size bins, each with mean grain size $a_k$. That is, the density of grains in size bin $k$ represents the total density of grains with sizes from $\sqrt{a_{k-1} a_k}$ to $\sqrt{a_k a_{k+1}}$, where $k = 1 \ldots N_a$, and $a_0$ and $a_{N_a+1}$ are set so that bins 1 and $N_a$ contain the same logarithmic range of grain sizes as all other bins. For all simulations we present here, we adopt $N_a = 4$ with $a_k = 10^{(k-1)/2}$ $\mu$m, so grain sizes go from $10^{-0.5} - 10$ $\mu$m in steps of 0.5 dex. We adopt zero flux boundary conditions across both the midplane at $\mu = 0$ and the top of the disc at $\mu = \mu_{\rm max}$. At the inner and outer radial boundaries of our computational domain we adopt diode boundary conditions. At the inner edge we set the diffusive mass flux to zero, and we set the mass flux across the boundary to zero in any location where the velocity is into the computational domain, but we allow mass to flow inward radially if the velocity at the domain edge is inward. For the outer boundary, we similarly set the diffusive mass flux to zero, and anywhere the radial velocity is out of the domain we allow mass to flow out freely, but no mass to enter.
Initial Conditions
We initialise all simulations using the analytic solution derived by Takeuchi & Lin (2002), given by Equation 54. However, unlike in Section 3, we now consider multiple grain size bins, and thus the constraint on the initial midplane density $\rho_{d,a}(0)$ becomes
$$\int_{-\infty}^{\infty}\rho_{d,a}\,dz = f_a\,f_{\rm dust}\,\Sigma_g,$$
where $\Sigma_g$ is the gas column density at cylindrical radius $\varpi$, $f_{\rm dust}$ is the initial total dust to gas ratio summed over all grain sizes, and $f_a$ is the fraction of the total grain mass found in grains in the size bin whose mean size is $a$. We set the initial fractional masses $f_a$ in each grain size bin by assuming that grains follow a size distribution consistent with a collisional cascade, $dn/dm \propto m^{q_d}$, where $m \propto a^3$ is the mass of an individual grain and $q_d \approx -11/6$, as expected for a collisional cascade after larger bodies have started to form (Dohnanyi 1969). From this choice, we have
$$f_{a,k} \propto \left(a_{k+1} a_k\right)^{3(2+q_d)/2} - \left(a_k a_{k-1}\right)^{3(2+q_d)/2}$$
for size bin $k$, which together with the constraint $\sum_k f_{a,k} = 1$ fully specifies the initial mass in each bin.

Table 2. Summary of 2D simulation parameters.
Parameter                                   Case 1   Case 2   Case 3   Case 4    Case 5    Case 6
p (gas density index)                       -1.5     -1.5     -1.5     0         0         0
ϵ (mass relative to MMSN at 1 AU)           10^-2    10^-2    10^-3    10^-2.5   10^-2.5   10^-2.5
f_dust (initial dust-to-gas ratio)          10^-4    10^-5    10^-4    10^-3     10^-4     10^-5
Cases with a radial gas gradient
We show snapshots of our simulation results for cases 1, 2, and 3 in Figure 4, Figure 5, and Figure 6, respectively. Comparing the runs, we see some common features and some differences. As expected, our initial distribution of grains places the largest grains at the smallest scale height. Because of their small scale height, the largest grains at the outer edge of the disc are completely shielded from radiation by the dense inner disc. As a result the grains drift inward from the outer edge of the disc at 50 AU, leaving a void behind them; this is the usual result of gas drag, and is fastest for the largest grains because they are closest to being critically damped, $T_s \approx 1$. At smaller radii, where grains are exposed to radiation pressure, the situation is very different. In case 1, which is our most gas- and dust-rich, the inner disc is close to static over the duration of our simulation. This is simply a consequence of the small values of $\chi$ for the inner part of the disc shown in Figure 3: due to the strong drag forces imposed by high gas densities, the net rate of grain drift is small. The smallest grains that are lofted well above the disc plane and that are exposed to radiation can drift at appreciable speeds, but the mass of grains in regions that are subject to drift is negligible compared to the much larger mass in regions where drift is negligible. Consequently any grains that are pushed outward by radiation are immediately replaced as turbulence causes the much larger reservoir of low-altitude grains to diffuse upward.
In cases 2 and 3, on the other hand, the outcome is very different. Case 2 has lower shielding against radiation due to its lower dust mass, while case 3 has both lower shielding and reduced drag. Both lead to a substantially higher value of $\chi$, and a shorter dust clearing time, such that dust is driven back from the disc inner edge on timescales of $\sim 100$ kyr. The smallest grains are swept up most rapidly, because their greater height within the disc leaves them both more exposed to stellar radiation, and less slowed by gas drag.

Figure 4. Snapshots of the dust distribution in our simulation with $\epsilon = 10^{-2}$, $f_{\rm dust} = 10^{-4}$, and $p = -1.5$ (case 1 in Table 2). Each column shows the distribution at a different time, as indicated at the top of the column. The top four rows show the density $\rho_{d,a}$ of grains of radius $a = 10^{-0.5}, 1, 10^{0.5}, 10$ $\mu$m, as indicated by the white labels on each row, as a function of position; we use $\log\varpi$ and $\mu = \cos\theta$ as our position coordinates, so in this projection radial rays from the star correspond to horizontal lines. The bottom row shows the vertically-integrated column density $\Sigma_d$ for each grain size bin, and summed over all grain sizes (see legend), as a function of log radius. An animated version of this figure is included in the Supplementary material (online).
Larger grains that sit lower in the disc move outward more slowly, and form a sharper ring of dust due to the stronger drag forces in the regions where they reside. Consequently, radiation sorts the grains by size; this sorting is especially apparent for case 3 (Figure 6). However, the entire structure of sorted grains moves outward over time, eventually colliding with the outer edge of shielded grains drifting inward due to gas drag. At this point all the grains are collected into rings that radiation pushes outward. If we allow the simulation to run long enough, eventually all the dust leaves the computational domain.

Figure 5. Same as Figure 4, but for the run with $\epsilon = 10^{-2}$, $f_{\rm dust} = 10^{-5}$, $p = -1.5$ (case 2 in Table 2); note that the colour scales in the two figures are not the same. Due to the higher $\chi$ value, the dust is efficiently pushed outwards on timescales of 100 kyr. An animated version of this figure is included in the Supplementary material (online).
Cases without a radial gas gradient
In Figure 7, Figure 8, and Figure 9 we show the results for cases 4, 5, and 6, respectively, our three cases without an initial radial gradient in the gas or dust surface density. These runs show a qualitatively different evolution from the previous cases, in that there is no inward migration of dust caused by drag. Instead, there is only outward flow of the dust caused by radiation pressure, which sweeps up an outward-moving front. Grains sort by size, but by a smaller amount than in the cases with p = −1.5.
Moreover, the radius of the front versus time is quite different than in cases 1-3. In the cases with $p = -1.5$, the most difficult part of the disc to evacuate is the centre. Once the central regions are clear, however, the process tends to run away: the declining density with radius, and thus the decrease in both mass to be swept up and strength of diffusive mixing, makes it relatively easy to clear the entire disc inside a few hundred kyr. For the cases with $p = 0$, neither the amount of material nor the strength of diffusive mixing decrease with radius, and thus the inner part of the disc becomes easier to clear than the outer part. In the cases with $f_{\rm dust} = 10^{-4}$ and $10^{-5}$ (Figure 8 and Figure 9), the inner 1 AU of the disc is evacuated in only $\sim 50$ kyr, but the front does not reach 10 AU until hundreds of kyr. Thus in this configuration we expect to find a long-lived dust hole in discs. By contrast in the case with $f_{\rm dust} = 10^{-3}$, the smallest grains are only evacuated to $\sim 1$ AU after 200 kyr, while the largest grains stall: after moving outward for $\approx 150$ kyr, they cease to be pushed back from the star, and after some time even begin to re-occupy the central region from which they were at first expelled.

Figure 6. Same as Figure 4, but for the run with $\epsilon = 10^{-3}$, $f_{\rm dust} = 10^{-4}$, $p = -1.5$ (case 3 in Table 2); note that the colour scales in the two figures are not the same. The order of magnitude reduction in gas density compared to case 2 facilitates the collection of dust into rings, whose location and shape are grain-size dependent. An animated version of this figure is included in the Supplementary material (online).

Figure 7. Same as Figure 4, but for the run with $\epsilon = 10^{-2.5}$, $f_{\rm dust} = 10^{-3}$, $p = 0$ (case 4 in Table 2); note that the colour scales in the two figures are not the same. The absence of radial density (and pressure) gradients inhibits the formation of rings and an inner hole when the gas mass is non-negligible. An animated version of this figure is included in the Supplementary material (online).
Astrophysical Implications
In our simplified one-dimensional case, we found that grains clear faster than they accrete for a dimensionless parameter χ ≳ 10 (Equation 43). Considering now the 2D models of Section 4, it is helpful to keep in mind that the accretion timescale t_acc in physical units was ∼ 3 Myr for our relatively low viscosity parameter α = 10^{-4}, so a more relevant criterion for whether radiative dust clearing is significant is arguably whether the clearing timescale t_clr is on the same order of magnitude as the ∼ 0.1 Myr timescale for the transitional disc phase as constrained by population studies (Alexander et al. 2014). The timescale t_clr has no dependence on α for high χ, and is proportional to the square of the gas depletion factor times f_d (Equation 53, noting that ρ_g scales linearly with the depletion factor). For our cases with gas surface density power-law index p = -1.5, in our most gas- and dust-rich case, case 1, this combination of depletion factor squared times (f_d/0.01) is 10^{-4}, while the values are 10^{-5} for case 2 and 10^{-6} for case 3. Figure 8. Same as Figure 4, but for the run with gas depletion factor 10^{-2.5}, f_dust = 10^{-4}, p = 0 (case 5 in Table 2); note that the colour scales in the two figures are not the same. The timescale for clearing the outer disc increases markedly when compared to similar systems (cases 1 and 3) with a radial pressure gradient. An animated version of this figure is included in the Supplementary material (online).
The simulations show that a transition to rapid dust clearing occurs around a value of this combination of ∼ 10^{-5}. For our p = 0 cases, models 4-6, we also find a transition to efficient clearing around ∼ 10^{-5}, corresponding to case 5 (cf. Table 2). Thus our simulations, together with our analytical calculation of timescales and their dependence on disc dust and gas properties, support the general hypothesis that radiative dust clearing is a significant process in any disc where this combination is ≲ 10^{-5}.
In the critical inner ∼ 10 AU, gas density power laws have not been carefully measured for Class II objects, although it appears possible to do so in the coming years with ALMA (Miotello et al. 2018). For transitional discs, gas density models that combine spectra with partially resolved observations in CO isotopologues result in only moderately satisfactory fits (van der Marel et al. 2016), but clearly indicate very significant depletion of CO in the inner ∼ 10 AU. This CO depletion is further supported by simultaneous Figure 9. Same as Figure 4, but for the run with gas depletion factor 10^{-2.5}, f_dust = 10^{-5}, p = 0 (case 6 in Table 2); note that the colour scales in the two figures are not the same. The absence of a radial pressure gradient inhibits ring formation because grains do not experience inward drift. Instead a wide and long-lived inner hole forms. An animated version of this figure is included in the Supplementary material (online).
modelling of spectra and spectro-astrometry (Pontoppidan et al. 2008), where the complete lack of CO gas at high velocities is strong evidence of cleared CO within ∼ 5 AU of SR 21 in particular. Typical gas depletion factors of around 10^{-2.5} and power-law indices between 0 and -1.5, i.e. precisely the range spanned by our simulations, are consistent with those papers. Small dust grains are also severely depleted in these discs, by factors between 10^2 and 10^6 in the very inner disc; indeed, spectral energy distribution modelling is consistent with there being no dust at all at moderate radii (∼ 10 AU; van der Marel et al. 2016). The mechanisms we have explored in this paper provide a natural explanation for these results, since we show that the combination of radiative acceleration, gas drag, and turbulent viscosity could start from the small-grain dust distribution of a Class II T Tauri star and produce an inner hole largely cleared of small grains, so long as some grain growth (f_d ∼ 10^{-4}) and accretion-based gas clearing (depletion ∼ 10^{-2.5}) occur during the Class II phase.
Model Limitations
We end this discussion by pointing out some of the limitations of the models we have explored thus far, which point to the directions required in future work. First, our calculation is limited to grains whose interaction with radiation can be approximated by geometric optics. Although this is an excellent starting point, since it applies to almost all grains larger than a few tenths of a micron, it does not represent the situation in the most evolved discs, where for example there is evidence for very small grains (Oph IRS 48 and HD 169142; Birchall et al. 2019) or unusually bright scattering indicative of unusual optical properties (LkCa 15; Thalmann et al. 2016). Indeed, the mechanism proposed in this paper provides a natural explanation for clearing out grains with "normal" optical properties, and thus a high ratio of radiative to gravitational acceleration β, leaving unusually low-β grains behind. A second limitation of our models is that we considered a static background disc, rather than one whose structure is self-consistently generated as a result of viscous accretion and similar processes that shape the gas distribution. This makes it difficult to directly relate our model parameters to the stellar accretion rate, which is a key measurable parameter of real star-disc systems. We chose not to evolve a model where the gas was in a viscous steady state because, although turbulent diffusion is certainly a key driver of the evolution of the dust density distribution, turbulent viscosity (Shakura & Sunyaev 1973) is not the leading candidate for driving gas accretion in cool, evolved discs approaching or in the transitional phase (Bai & Stone 2013; Turner et al. 2014). Ideally, the work in this paper could be coupled with a plausible gas accretion mechanism. We note that gas clearing may be coupled with dust clearing, as the dust density distribution directly feeds back on the true gas scale height and dynamics; modelling this is another physical mechanism beyond the scope of this paper.
CONCLUSIONS
We have shown in this paper that as grain growth and accretion processes naturally clear a protoplanetary disc, there is a transition point beyond which the combination of radiation pressure and gas drag can rapidly remove small (∼ micron-sized) grains from the inner disc, leaving a transitional disc structure behind. The physical mechanism for this clearing is a reduction in the effective gravity felt by small grains, which lowers their orbital velocity so that gas drag can move them outwards. Using 2D simulations in which we simultaneously include radiative forces, turbulent diffusion of dust by gas, and inward flow of dust due to gas accretion and radial pressure gradients, we show that the disc clearing is not simply a surface effect, and can affect the entire small-grain dust disc structure. This process had not been studied in depth before because the mechanism alone is not effective for a minimum-mass solar nebula, and requires the disc to already have evolved significantly. Conversely, however, once these evolutionary processes drive the gas and dust density low enough, radiative clearing becomes both unavoidable and rapid.
Our proposed physical clearing process has a number of appealing features. It does not invoke planet formation directly, and can take place even in a disc that does not form planets. It leaves behind a structure that is consistent with current transitional disc observations. Notably, this process clears dust and not gas, so it is consistent with transitional discs retaining moderately massive gas discs while having an inner cavity that is almost completely devoid of dust. The primary limitation of our work thus far is that, while we have considered a range of dust grain sizes, we have limited our calculations to grains whose interaction with starlight can be described by geometric optics. Future work will involve relaxing this assumption, allowing us to consider not just grains smaller than ∼ 0.1 µm, which are too small for geometric optics, but also grains with differing radiative properties, for example high degrees of scattering asymmetry. This will also enable radiative transfer modelling to test how well the dust structures produced by this mechanism can reproduce real observations.
APPENDIX A: NUMERICAL METHOD FOR 1D SYSTEMS
Here we describe the numerical method we use to solve the simplified 1D system, Equation 41. To avoid clutter in this appendix we drop the primes on all the terms in this equation, but all the quantities listed are the non-dimensionalised ones.
A1 Spatial discretisation, initial conditions, and boundary conditions
We use a uniform grid with constant cell size ∆x, with the left edge of cell 0 at x = 0 and the right edge of cell N − 1 at x = x_max; we use x_{i−1/2} and x_{i+1/2} to denote the positions of the left and right edges of cell i. We use a finite volume discretisation on this grid; integrating Equation 41 over cell i, we have

∂ρ_{d,i}/∂t = −∆x^{−1} [ (ρ_d e^{−τ} − ∂ρ_d/∂x)_{i+1/2} − (ρ_d e^{−τ} − ∂ρ_d/∂x)_{i−1/2} ],   (A1)

where ρ_{d,i} = ∆x^{−1} ∫_{x_{i−1/2}}^{x_{i+1/2}} ρ_d dx is the average of ρ_d over cell i, and the subscripts i+1/2 and i−1/2 indicate that a particular quantity is to be evaluated at the corresponding cell edge.
We initialise the simulation with ρ d,i = 1 in every cell, and adopt boundary conditions whereby both the advective and diffusive fluxes out of the domain are set to zero. To ensure that these choices do not affect the result, we always choose the size of our domain large enough so that the right edge of the domain is well beyond the dust wave, and thus the flux through cells near it is negligibly small in any event.
As the simulation evolves, and the dust front moves to larger x, an increasingly large fraction of the computational domain becomes filled with cells for which ρ_{d,i} ≈ 0. To avoid expending CPU cycles needlessly updating these nearly-empty cells, at the end of each time step (see next section) we shift our grid to the left, removing the leftmost cells within which ρ_{d,i} < 10^{-6}. We keep the number of cells constant by adding an equal number of new cells on the right hand side of the domain, all initialised to ρ_{d,i} = 1; since our domains extend well past the edge of the dust wave at all times, the existing cells adjacent to those being added also have ρ_{d,i} ≈ 1, and thus the newly-added cells blend smoothly with the existing ones.
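The window-shifting step described above is simple to implement in practice; the following Python sketch illustrates it (the function name and interface are our own, and the 10^{-6} threshold follows the text):

import numpy as np

def shift_grid(rho, x_left, dx, threshold=1e-6):
    """Drop depleted cells on the left, add fresh cells on the right.

    rho    : array of cell-averaged dust densities
    x_left : coordinate of the left edge of cell 0
    dx     : uniform cell width
    Returns the shifted density array and the new left-edge coordinate.
    """
    # Count leading cells that are effectively empty.
    n_drop = 0
    while n_drop < len(rho) and rho[n_drop] < threshold:
        n_drop += 1
    if n_drop == 0:
        return rho, x_left
    # Remove them and append the same number of undisturbed cells
    # (rho_d = 1) on the right, keeping the cell count constant.
    rho = np.concatenate([rho[n_drop:], np.ones(n_drop)])
    return rho, x_left + n_drop * dx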
A2 Time discretisation and time-stepping strategy
We advance the calculation in time using an implicit-explicit update with Strang (1968) splitting between the advective terms, which we handle explicitly, and the diffusion terms, which we handle implicitly. To advance the calculation from time t_n to time t_{n+1} = t_n + ∆t, starting from the dust densities ρ^{(n)}_{d,i} in every cell at time n, we carry out the following steps: (i) Advance the diffusion subsystem (the ∂ρ_d/∂x terms in Equation A1) for a time ∆t/2 using an implicit method that is second-order accurate in space and time (Section A3.1). We denote the state after this step as ρ^{(*)}_{d,i}. (ii) Advance the advection subsystem (the ρ_d e^{−τ} terms in Equation A1), starting from state ρ^{(*)}_{d,i}, for a time ∆t using an explicit method that is second-order accurate in time and third-order accurate in space (Section A3.2). We denote the resulting state ρ^{(**)}_{d,i}. (iii) Advance the diffusion subsystem for a further time ∆t/2, starting from ρ^{(**)}_{d,i}; the result is the state at the new time, ρ^{(n+1)}_{d,i}. The overall scheme is second-order accurate in time. We set the time step based on a Courant-Friedrichs-Lewy (CFL) condition.

APPENDIX B: NUMERICAL METHOD FOR 2D SYSTEMS

B1 Spatial discretisation

We discretise the 2D problem on a grid in a radial coordinate x, related to the spherical radius r by a mapping x(r), and an angular coordinate µ = cos θ; r_min is the inner edge of the computational domain. The centre of cell ij is located at coordinates x_i and µ_j, and its upper right corner is located at x_{i+1/2} and µ_{j+1/2}. In the radial direction the grid starts at r_{−1/2} = r_min and ends at r_{N_r−1/2} = r_max, and in the azimuthal direction it extends from µ_{−1/2} = 0 to µ_{N_θ−1/2} = µ_max; the grid is N_r × N_θ cells in size, and is uniformly spaced so that the cell sizes are ∆x = (x_{N_r−1/2} − x_{−1/2})/N_r and ∆µ = µ_{N_θ−1/2}/N_θ. The cell volumes V_{ij} and the radial and angular cell face areas A_{i±1/2,j} and A_{i,j±1/2} follow from this coordinate mapping. We also discretise in bins of grain size, by defining a logarithmically spaced set of grain size bins. Specifically, we use N_a grain size bins, with a_{−1/2} = a_min representing the smallest grains in the smallest bin and a_{N_a−1/2} the largest grains in the largest bin, with ∆log a = log(a_{k+1}/a_k) = log(a_{N_a−1/2}/a_{−1/2})/N_a constant, and a_k = √(a_{k−1/2} a_{k+1/2}) representing the mean size of grains in the kth size bin. We let ρ_{d,k} represent the mean density of grains in the size range from a_{k−1/2} to a_{k+1/2}.

We adopt a finite-volume spatial discretisation strategy. Integrating Equation 1 over the volume of cell ij, and making use of the divergence theorem, we have

∂ρ_{d,ijk}/∂t = −V_{ij}^{−1} [ F_{adv,i+1/2,j,k} A_{i+1/2,j} − F_{adv,i−1/2,j,k} A_{i−1/2,j} + F_{adv,i,j+1/2,k} A_{i,j+1/2} − F_{adv,i,j−1/2,k} A_{i,j−1/2} + F_{diff,i+1/2,j,k} A_{i+1/2,j} − F_{diff,i−1/2,j,k} A_{i−1/2,j} + F_{diff,i,j+1/2,k} A_{i,j+1/2} − F_{diff,i,j−1/2,k} A_{i,j−1/2} ],   (B4)

where F_adv and F_diff are the advective and diffusive fluxes at the cell faces. Here, for each size bin k, ρ_{d,ijk} is the mean dust density in cell ij, v_{d,r} and v_{d,θ} are the r and θ components of the dust velocity, and D_{d,k} is the diffusion coefficient evaluated for grains of size a_k. Note that Equation B4 is exact. We defer discussion of how we evaluate the fluxes to Section B2. We discretise the equations in time and advance using the same approach as in the 1D case, as described in Appendix A2. Specifically, we break the problem into advective and diffusive subsystems, and use Strang (1968) splitting to advance them alternately while retaining second-order accuracy in time.
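As an illustration of the Strang-split update used in both the 1D and 2D schemes, consider the following Python sketch; advect and diffuse are hypothetical stand-ins for the explicit TVD advection update and the implicit diffusion solve described in this appendix:

def strang_step(rho, dt, advect, diffuse):
    # Strang splitting: half diffusion, full advection, half diffusion.
    # The symmetric ordering retains second-order accuracy in time,
    # provided each sub-solver is itself at least second-order accurate.
    rho = diffuse(rho, 0.5 * dt)   # implicit half step
    rho = advect(rho, dt)          # explicit full step
    rho = diffuse(rho, 0.5 * dt)   # implicit half step
    return rho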
B2 Subsystems
Here we describe our procedures for advancing the advection and diffusion subsystems.
B2.1 Diffusion
The diffusion subsystem of Equation B4 is

∂ρ_{d,ijk}/∂t = −V_{ij}^{−1} [ F_{diff,i+1/2,j,k} A_{i+1/2,j} − F_{diff,i−1/2,j,k} A_{i−1/2,j} + F_{diff,i,j+1/2,k} A_{i,j+1/2} − F_{diff,i,j−1/2,k} A_{i,j−1/2} ].   (B9)

We evaluate the diffusive fluxes using second-order accurate centred finite differences:

F_{diff,i±1/2,j,k} = −D_{d,i±1/2,j,k} ρ_{g,i±1/2,j} x'_{i±1/2,j} (df_{d,k}/dx)_{i±1/2,j},   (B10)

F_{diff,i,j±1/2,k} = −D_{d,i,j±1/2,k} ρ_{g,i,j±1/2} µ'_{i,j±1/2} (df_{d,k}/dµ)_{i,j±1/2},   (B11)

where x' = dx/dr is the derivative of the mapping between radius and our radial variable x, and similarly for the µ derivatives. The subscripts indicate the cell face or centre at which all quantities are to be evaluated, and we note that ρ_g and D are known analytically at all positions. We discretise the diffusion subsystem in time using a second-order accurate Crank-Nicolson scheme. Defining Θ = 1/2 as the time-centring parameter, for a time step ∆t we have

ρ^{(n+1)}_{d,ijk} = ρ^{(n)}_{d,ijk} + ∆t [ Θ ρ̇^{(n+1)}_{d,ijk,diff} + (1 − Θ) ρ̇^{(n)}_{d,ijk,diff} ],   (B13)

where ρ̇_{d,ijk,diff} denotes the right-hand side of Equation B9, and ρ^{(n)}_{d,ijk} and ρ^{(n+1)}_{d,ijk} denote quantities evaluated at the previous and new times, respectively.
We can rearrange Equation B13 to obtain a sparse linear system for each grain species k,

(I − ∆t Θ L_k) ρ^{(n+1)}_{d,k} = ρ^{(n)}_{d,k} + ∆t (1 − Θ) ρ̇^{(n)}_{k,diff},   (B14)

where L_k is the matrix representing the diffusion operator appearing on the right-hand side of Equation B9. Here ρ^{(n+1)}_{d,k} is a vector with N_r N_θ elements, ordered so that element ℓ contains the dust density in cell (i, j) = (ℓ mod N_r, ⌊ℓ/N_r⌋), i.e.

(ρ^{(n+1)}_{d,k})_ℓ = ρ^{(n+1)}_{d,ijk},   (B15)

and similarly for ρ^{(n)}_k. The term ρ̇^{(n)}_{k,diff} represents the rate of change in density due to diffusion evaluated at the old time, and is given by the right-hand side of Equation B9 evaluated at time t_n (Equation B16). We solve Equation B14 using a biconjugate gradient stabilised (BiCGSTAB) solver as implemented in the Eigen software package (Guennebaud & Benoît 2010).
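To make the time-centred implicit solve concrete, the following Python sketch advances a 1D analogue of the diffusion subsystem with a Crank-Nicolson step and solves the resulting sparse system with SciPy's BiCGSTAB; the text uses Eigen's implementation, and the geometric factors of the 2D scheme are omitted here:

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab

def cn_diffusion_step(rho, D, dx, dt):
    # Crank-Nicolson (Theta = 1/2) step for d rho/dt = D d^2 rho/dx^2
    # with zero-flux boundaries on a uniform grid.
    n = len(rho)
    r = 0.5 * D * dt / dx**2
    main = np.full(n, -2.0)
    main[0] = main[-1] = -1.0            # zero-flux boundary stencil
    lap = sp.diags([np.ones(n - 1), main, np.ones(n - 1)],
                   offsets=[-1, 0, 1], format='csr')
    I = sp.identity(n, format='csr')
    A = I - r * lap                      # new-time (implicit) side
    b = (I + r * lap) @ rho              # old-time (explicit) side
    rho_new, info = bicgstab(A, b)
    if info != 0:
        raise RuntimeError('BiCGSTAB failed to converge')
    return rho_new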
B2.2 Advection
The advection subsystem of Equation B4 is

∂ρ_{d,ijk}/∂t = −V_{ij}^{−1} [ F_{adv,i+1/2,j,k} A_{i+1/2,j} − F_{adv,i−1/2,j,k} A_{i−1/2,j} + F_{adv,i,j+1/2,k} A_{i,j+1/2} − F_{adv,i,j−1/2,k} A_{i,j−1/2} ].   (B17)

We solve this subsystem using the same TVD approach described in Appendix A3.2. Given a starting state ρ^{(n)}_{d,ijk}, we advance the calculation from t_n to t_{n+1} = t_n + ∆t by evaluating the fluxes F^{(n)}_{adv} at the old time and applying the update implied by Equation B17, subject to the following boundary conditions. At the inner boundary, we solve for the velocity across the innermost cell face v_{d,−1/2,j,k,r} as described in the previous section, but if v_{d,−1/2,j,k,r} > 0 (i.e., if the velocity is radially outward, and thus into the domain), we set v_{d,−1/2,j,k,r} = 0 so that no mass enters the domain from smaller radii. We treat the outer boundary in the same way: we compute v_{d,N_r−1/2,j,k,r}, but if the resulting value is negative, indicating flow into the computational domain, we reset it to zero. This paper has been typeset from a TeX/LaTeX file prepared by the author. | 2019-12-14T05:34:36.000Z | 2019-12-14T00:00:00.000 | {
"year": 2020,
"sha1": "339fef8908d27baf9987791c7b4f31c7649b2b13",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1912.06788",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "339fef8908d27baf9987791c7b4f31c7649b2b13",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
259361084 | pes2o/s2orc | v3-fos-license | Fast Estimation of Physical Error Contributions of Quantum Gates
Large-scale quantum computation requires a fast assessment of the main sources of error in the implemented quantum gates. To this aim, we provide a learning-based framework that allows the contribution of each physical noise source to the infidelity of a series of gates to be extracted with a small number of experimental measurements. To illustrate this method, we consider the case of superconducting transmon architectures, where we focus on the diabatic implementation of the CZ gate with tunable couplers. In this context, we account for all relevant noise sources, including non-Markovian noise, electronics imperfections, and the effect of tunable couplers on the error of the computation.
INTRODUCTION
In recent years, there has been a steady advance in the scale and quality of quantum computing architectures, although the presence of noise and imperfections in the system remains a vital issue holding back the achievement of quantum advantage. An accurate diagnosis of the physical origin of the errors is key to tailoring hardware modifications, fabrication, or calibration procedures to tackle the problem. In addition, noise-aware error mitigation techniques [1] require a detailed understanding of the noise processes in the system, while error-correcting codes, which currently assume the hardware noise to be completely uncorrelated, will require tackling more complex error sources such as leakage [2] to advance beyond state-of-the-art experiments [3,4].
The noise present in quantum circuits is typically modelled by appending specific noise channels after the application of each gate in the algorithm [5-7]. These noise channels can be reconstructed experimentally, or constructed based on a predefined model together with some experimentally measured parameters. However, this approach contains a number of implicit assumptions about the underlying error processes, namely that the noise contains no temporal or spatial correlations, is trace-preserving, and is small in the operator-norm sense.
While the above modelling is a good first approach to predicting the approximate performance of an algorithm, many of its assumptions are not justified in a realistic circuit execution. Considering superconducting qubits as an example, many experiments report a longer spin-echo dephasing time compared to the Ramsey decay time [3,8-10], directly demonstrating the presence of time-correlated or non-Markovian noise. Moreover, the second excited level of a transmon plays a major role in the implementation of single-qubit rotations, and is currently one of the major infidelity contributors to such operations, even though such a noise channel is not trace-preserving in the truncated qubit subspace [11]. Furthermore, many two-qubit gate implementations rely on the transfer of population outside of the computational subspace, meaning that an imperfect calibration will often result in a part of the population remaining outside of the computational subspace. Since many state-of-the-art quantum computing architectures employ non-computational elements on the chip, such as tunable couplers, the effect of leakage into tunable coupler states is even more convoluted [9,10,12-17].
This evidence has driven the community to develop benchmarking and characterization methods beyond the standard assumptions. While typical noise characterization protocols such as, e.g., Gate Set Tomography (GST) [18] or Randomized Benchmarking (RB) [19] still operate under the same assumptions, some of them have recently been extended to also include non-Markovian effects [20-22], including upgraded functionalities to differentiate between different kinds of non-Markovianity [23]. Other proposals have focused on the use of phenomenological non-Markovian master equations to describe and predict the dynamics of superconducting quantum computing processors [24-26].
The above are very important advances in our understanding of the presence and impact of complex errors in the system, but assessing their physical origin is key to developing accurate error suppression, mitigation, and correction techniques. In this regard, GST has been extended to connect the reconstructed errors to specific physical noise sources within the Markovian approximation [27,28]. However, an accurate characterization of the main error sources, including more complex ones such as correlated noise and leakage errors, presents a major drawback: accessing every single error parameter necessitates dedicated experiments, or may even be intractable in some cases. Moreover, even if a gate is well characterized at one point in time, due to the drifts that may occur in both the qubits and their environment, the error contributions will evolve over time [29]. These temporal drifts mean that the operations we implement must be re-calibrated on a regular basis. In the very near future, when chips are scaled to large numbers of qubits, we therefore need a fast procedure to characterize the errors in the quantum computers, in order to minimize the time needed for the re-calibration.
In this paper we present a new error budgeting approach that enables efficient access to the contribution of the main noise sources to the system infidelity. The method is based on comparing experimental measurements with precomputed simulations obtained for a broad range of parameters, in order to avoid having to characterize all of the noise parameters individually. Thus, our method substantially differs from previous ones based on an adaptive approach, where the computational and experimental efforts are concentrated on finding the simulation with a complete set of noise parameters that best describes the system dynamics. Interestingly, this new diagnosis scheme makes it possible to gain knowledge about errors that are hard to parameterize with more general benchmarking techniques, such as those that are non-Markovian or non-trace-preserving, and it is general to any quantum computing platform whose dynamics can be modelled sufficiently accurately. This technique is sketched in Fig. 1(c).
We illustrate this method by considering transmon-based superconducting qubit devices. To this aim, we have developed what is perhaps the most complete error-modelling framework existing for such devices. The proposed framework remains simple enough to be efficiently simulated, as required by our budgeting scheme. Finally, we discuss the results obtained for the contribution of the different error sources to the infidelity of different gate sequences.
The paper is organized as follows: we first describe the relevant physical error sources of both single- and two-qubit quantum gates in Sec. III. In Sec. IV we show how such physical noise processes can be efficiently simulated, even when including non-Markovian dynamics, before demonstrating their effects on the gate performance in Sec. V C. In Sec. V D we describe our learning-based approach to diagnosing gate errors.
II. RECONSTRUCTING ERROR BUDGETS BASED ON NOISE MODELLING
In this section we describe our proposed scheme to extract information about the infidelity contributions from realistic devices. The scheme uses supervised learning techniques to interpolate between simulations, obtained with different noise parameter values, and an experimental result. This allows us to make comparisons between experiments and the most similar simulations and extract the infidelity contributions. We use Gaussian Process Regression (GPR), a widely successful machine learning technique with demonstrated uses in fields ranging from geostatistics [30] and materials science [31] to the modelling of classical integrated circuits [32,33]. The main benefits of using GPR are the inherent uncertainty predictions based on the similarity of the experimental values to the simulated ones, the representational flexibility, as well as the typically smaller training samples [34].
While standard characterization is based on performing a number of different experiments to obtain the parameters of each noise model, our proposal is experimentally less costly, as it outputs the desired error budget by comparing a smaller number of experiments with low shot numbers to a large number of simulation results. Unlike in typical gate calibration procedures [9], we also assume that we only have access to circuit-level results.
FIG. 2. An illustration of a simplified scheme to measure the T_1-decay-induced infidelity using a single input experiment and GPR. The input experiment used in this case consists of preparing a qubit in the excited state and waiting for 1 µs before measuring the excited-state population (x-axis). The data points correspond to a number of simulated qubit evolutions with different T_1 decay times and different measurement errors, indicated by the horizontal error bars. An interpolation of these points using GPR is plotted with the solid blue line, with the shaded area representing the uncertainty of the GPR prediction due to state preparation or measurement errors. As a reference, the exact relationship between the measured excited-state probability and the single-qubit gate infidelity (y-axis) is plotted with the dotted blue line. In general, this relationship is typically not known, due to the presence of different error sources, and numerical simulations are the only available probe. According to the proposed method, when an experimental result is obtained (vertical red solid line), the GPR prediction curve would allow the corresponding infidelity to be determined (horizontal red solid line). Furthermore, an estimate of the accuracy of this prediction due to the GPR model is depicted with the shaded red area. The correct value is plotted with the horizontal dotted red line.
To be more specific, if one currently wishes to extract information about the error budgets of quantum gates, the general workflow of such a procedure is described by the following steps:

1. Construct a model of the noise in the system.

2. Perform a number of experiments needed to characterize the noise model.

3. Extract model parameters from experimental measurements.

4. Simulate the model with the measured parameters and extract the relevant information about the error budget.

5. Validate the model on new experiments.
However, extracting all the relevant parameters of the noise is costly, or even infeasible, for instance in the presence of non-Markovian effects, which are generally produced by complex environments that would need to be fully characterized, or when having to infer the properties of non-computational and non-accessible elements such as couplers in the case of superconducting qubits. We therefore propose an alternative approach to extracting similar information, which is sketched in Fig. 1(c):

1. Construct a physical noise model of the system.

2. Simulate different experiments with a large number of different parameters.

3. Perform the experiments on real hardware.

4. Systematically compare the similarity between the different simulations and the experiment.
What is crucial is the last point in the list. If the experimental measurements coincide with a simulation for a specific set of noise parameters, we can be fairly certain that our model is sufficiently accurate and that the noise parameters in the experiment are the ones we used for the simulation; however, this is rarely the case. Even if our model were perfect, we would likely never guess the correct parameters exactly, and this must be reflected in the certainty of our predictions. We therefore need to base the error bars of our reconstructed error budgets on the difference between the experimental measurement and the most similar simulations.
The predictions from the experimental input are then given by interpolating the values from the simulations, while the uncertainties of the predictions come from two sources. The first is the similarity of the simulated and experimental inputs, and the second is the inherent uncertainty of the measured results, due either to finite sampling noise or to state preparation and measurement errors. More formally, we do this by using Gaussian Process Regression (GPR) [34].
In other words, we use the trained GPR as a model to describe the more complex relationship between the outcomes of circuit executions and the infidelity contributions, similarly to the scheme proposed in Ref. [35]. A naive, simplified example of how this procedure can be employed is illustrated in Fig. 2. The example in Fig. 2 only covers one error source and one input experiment, but realistically more experiments can be used as input, and each error source warrants its own GPR model (since the error sources are independent of each other). We stress again that, in a practical setting, due to the presence of several error sources, it is difficult to disentangle the individual contributions from circuit execution data.
GPR is a Bayesian regression method based on finding the optimal Gaussian process that best fits the data. A Gaussian process is a stochastic process characterized by a multivariate Gaussian probability distribution. More details about the implementation of GPR can be found in Appendix A. The main benefit of using Gaussian processes to interpolate the data is their ability to predict the uncertainty of the outputs, which is not the case for most other regression or machine learning models. Additionally, this method does not require unrealistically large amounts of data, with training sets on the order of 500 simulations typically being enough; however, this depends on the uncertainty of the parameters we are considering.
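As a minimal sketch of how such a model can be trained in practice, assuming scikit-learn and an entirely synthetic training set (all numbers below are illustrative placeholders, not values from this work):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Training set: simulated circuit outcomes (features) and the known
# infidelity contribution of one error source in each simulation.
rng = np.random.default_rng(0)
X_sim = rng.uniform(0.8, 1.0, size=(500, 1))             # e.g. P(excited) after idling
y_sim = 1.0 - X_sim[:, 0] + 0.01 * rng.normal(size=500)  # toy infidelity proxy

# RBF kernel for the smooth response, plus a white-noise term for the
# finite-shot/SPAM scatter; restarts guard against bad local optima.
kernel = 1.0 * RBF(length_scale=0.1) + WhiteKernel(noise_level=1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=5)
gpr.fit(X_sim, y_sim)

# Experimental input: predict the infidelity contribution, with an
# uncertainty that grows when the input lies far from the training data.
X_exp = np.array([[0.92]])
mean, std = gpr.predict(X_exp, return_std=True)
print(f"estimated contribution: {mean[0]:.4f} +/- {std[0]:.4f}")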
It can be shown that GPR is a universal function approximator, meaning that it can be used to model complex non-linear relationships [36]. It is also important to note that, since the optimization of the GPR parameters can be costly and non-convex, we typically repeat the training step several times in order to obtain the best solution [34]. The complexity of the model can easily be increased or decreased by modifying the kernel function (see Appendix A), and it is typically evident from the optimized parameters when the model complexity is sufficient.
The uncertainties of the predictions of a properly optimized GPR model are typically low in the vicinity of the training inputs, and larger further away. Practically, this means that if the experimental input is significantly different from the simulated sample, the GPR model will return large uncertainties, giving us a clear signal that our theoretical model predictions, within the parameter ranges we have simulated, do not adequately explain the experiment; this is crucial knowledge, since no model will ever be perfect. Another source of uncertainty in the predictions is the inherent noisiness of the input-output pairs, since we are, e.g., fundamentally limited in precision by the finite number of shots used to probe the system.
In both the standard approach and our alternative one, a noise model has to be specified at one point or another, so we are not restricting ourselves significantly more than in the standard approach. If we are not able to model the noise accurately, then there is no way to estimate the effect of a noise source on the gate with any type of procedure.
One drawback of the alternative approach compared to the standard methods concerns not the physical modelling of the noise in the gates we are interested in, but state preparation and measurement (SPAM) errors, which must therefore be taken into account in the simulation sample. We describe how we simulate these in Secs. IV A and IV C. If this modelling of the SPAM errors is not correct, we will see this as model violation, i.e. large uncertainties in the predictions.
Alternatively, one could define a similar characterization scheme without a large precomputed sample of simulations, based on an optimization loop that finds the best model parameters. However, noisy simulations of quantum systems, especially with non-Markovian effects, tend to be very demanding, so by precomputing a larger sample we can efficiently parallelize our simulations, thus drastically reducing the computational time needed.
III. PHYSICAL ERROR SOURCES OF QUANTUM GATES
In this section we present the error sources currently limiting the gate fidelities of transmon quantum computers. Here we also make a distinction between an error source and an error type. As an example, consider leakage to the higher levels of a transmon during a single-qubit gate operation: we consider this to be an error type, which can have several error sources, such as non-unitary heating transitions or unitary leakage dynamics due to the small anharmonicity. In detail:

• We improve the modelling of environmental effects in the two-qubit gate scheme, namely the T_1 decay, by considering the many-body effects in the non-unitary dynamics derived from the Hamiltonian of a physically realistic environment of two-level systems (TLSs), as well as considering non-Markovian flux noise as a more realistic description of the pure dephasing of flux-tunable transmons.

• Additionally, we have compiled realistic estimates of the accuracy of the control electronics and the calibration procedure. We have focused on the most common microwave-pulse-driven single-qubit gates with the DRAG pulse scheme [11] and the non-adiabatic implementation of the controlled-Z (CZ) gate in a tunable coupler architecture [4,9,12,13].

The main error contributions for single- and two-qubit gates are sketched in Figs. 1(a) and 1(b), respectively. A similarly detailed Hamiltonian modelling of noise in ion trap systems, with good experimental agreement, can be found in Ref. [37].

For two-qubit gates we have chosen an implementation that has shown very high fidelities, does not suffer from residual ZZ interaction, and has been implemented as the native entangling operation in some of the most significant experiments in the field [4,38,39] by companies such as Google, with IBM also having recently announced a tunable-coupler-based chip [17,40]. Additionally, it has been shown that the tunable-coupler-based CZ gate can also be extended to co-design processors, such as the one presented in Ref. [41].
While the models presented in this section could always be further improved, they capture the most relevant dynamics and remain tractable in simulation.
A. Single-Qubit Gates
We model the transmon as a driven anharmonic oscillator, with a Hamiltonian of the form (setting ħ = 1)

Ĥ = ω â†â + (α/2) â†â†ââ + Ω(t)(â + â†),   (1)

where â is the bosonic annihilation operator, ω is the frequency of the qubit, and α is the anharmonicity. The last term above represents the action of the control waveform parametrized by the function Ω(t), which is additionally parameterized by the two quadrature controls I(t) and Q(t), so that

Ω(t) = I(t) cos(ω_d t) + Q(t) sin(ω_d t).   (2)

We emphasize here that the frequency of the drive ω_d and the transmon transition frequency ω are not necessarily equal [42].
The above parametrization is useful when employing the pulse shape known as Derivative Removal by Adiabatic Gate (DRAG) [11]. This pulse shape is commonly used due to its effectiveness in reducing leakage to the f-state of the transmon, as well as its simplicity and therefore ease of calibration. The DRAG scheme is very straightforward, and consists of applying a finite pulse shape s_0(t) on one quadrature, and its scaled derivative s_1(t) ∝ ṡ_0(t) on the other [11]. More specifically, we use truncated Gaussian pulses of the form

s_0(t) = A [e^{−(t−µ)²/(2σ²)} − B],  0 ≤ t ≤ T,   s_1(t) = β ṡ_0(t).   (3)

In other words, s_0(t) is a Gaussian envelope with a finite time cut-off T, and s_1(t) is the re-scaled derivative of s_0(t). By assigning s_0(t) and s_1(t) to the two quadratures I(t) and Q(t) appropriately, we can choose between implementing qubit rotations around the X or Y axes of the Bloch sphere. The parameters σ, µ, and T are considered to be fixed, while A and β are tuned in the calibration procedure. The offset B is set so that the truncated Gaussian curve does not have any discontinuities, i.e. s_0(t = 0) = s_0(t = T) = 0. This definition assumes the pulse is performed at time t = 0; if this is not the case, as we are also interested in performing simulations of multiple gates, the formulas are translated as s_{0,τ}(t) = s_0(t − τ), so that the pulse begins at time τ.
All of our single-qubit gate simulations also include electronics imperfections such as the finite sampling rate, which we fix to 1/δt = 2.4 GHz. This means that the real output of the electronics is a piecewise-constant waveform taking the discretized values

Ĩ(t) = I(n δt)  for n δt ≤ t < (n + 1) δt,   (5)

and equivalently for the second quadrature Q(t). While we always assume our pulse is continuous when programming the pulse shapes, the actual output is discretized.
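A minimal Python sketch of generating the sampled DRAG quadratures, under our reading of Eq. 3 (the offset convention, the choice µ = T/2, and the function interface are assumptions):

import numpy as np

def drag_waveforms(A, beta, sigma, T, dt=1 / 2.4e9):
    """Sampled DRAG quadratures for a truncated-Gaussian pulse.

    A, beta : calibrated amplitude and DRAG coefficient
    sigma   : Gaussian width; the mean is fixed here at mu = T/2
    T       : total pulse duration; dt = 1/2.4 GHz sampling period
    Returns (t, s0, s1), with B chosen so that s0(0) = s0(T) = 0.
    """
    t = np.arange(0.0, T, dt)                        # finite sampling grid
    mu = T / 2
    gauss = np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    B = np.exp(-0.5 * (mu / sigma) ** 2)             # removes the edge discontinuity
    s0 = A * (gauss - B)                             # truncated Gaussian envelope
    s1 = beta * A * (-(t - mu) / sigma ** 2) * gauss # scaled derivative quadrature
    return t, s0, s1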
Amplitude damping errors
One of the best-known error sources is amplitude damping, characterized by the so-called T_1 decay time, which can easily be measured by preparing the qubit in the excited state and measuring the population at different times.
It is believed that for current hardware the observed T_1 times are limited by the coupling of the transmon to a bath of two-level systems (TLSs) [43-46]. The interaction originates from the coupling of the TLS electric dipole to the transmon's charge degree of freedom [47]. There are also other physical contributions to T_1, such as interactions with non-equilibrium Bogoliubov quasiparticles; however, their effect was found to be smaller in some studies on transmon qubits [48,49], in the absence of burst events [50].
Except for rare occurrences of resonant qubit-TLS interactions, the observed T_1 decays are well described by an exponential curve [29], and are therefore modelled within the Markovian approximation. Motivated by the results of Ref. [51], where the qubit effective temperature was shown to be considerably higher than the cryostat temperature, which can result in excited-state populations on the order of 1% [52-54], we also consider that the bath producing the T_1 decay is at a finite temperature, which means that there is a finite heating rate in the model.
We use the Lindblad equation to simulate the dynamics of such finite-temperature T_1 decay,

ρ̇ = −i[Ĥ, ρ] + Σ_± Γ_1^± ( Ĉ_± ρ Ĉ_±† − ½ {Ĉ_±† Ĉ_±, ρ} ),   (6)

with Ĉ_− = â and Ĉ_+ = â†; the T_1 decay rate observed in a standard excited-state population decay experiment is then Γ_1 = Γ_1^− + Γ_1^+. The ratio of the two rates is determined by the detailed balance condition Γ_1^+/Γ_1^− = e^{−ω/k_B T_eff}, which is valid if we assume the qubit is embedded in a thermalized environment [55]. The exponential suppression of the heating rate Γ_1^+ implies that it has a small effect on the dynamics itself. Nevertheless, it has a significant effect on the state preparation, as it means that the qubit initially has some residual excited-state population.
In this model, the decay rate from the second to the first excited state is exactly twice the rate of the excited-to-ground-state transition, which is in reasonable agreement with the transmon qutrit decay rates reported in Refs. [56,57].
Finally, the above equation is completely independent of the origin of the decay process, as all the characteristics of the environment are reduced to two rates.
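As a sketch of how such finite-temperature decay can be simulated with QuTiP; the T_1, effective temperature, and frequency values below are illustrative only:

import numpy as np
from qutip import destroy, basis, mesolve

# Three-level transmon with finite-temperature T1 decay (cf. Eq. 6).
N = 3
a = destroy(N)
T1, T_eff, omega = 50e-6, 50e-3, 2 * np.pi * 5e9     # 50 us, 50 mK, 5 GHz
kB_over_hbar = 1.3807e-23 / 1.0546e-34
ratio = np.exp(-omega / (kB_over_hbar * T_eff))      # Gamma+ / Gamma- (detailed balance)
gamma_minus = 1.0 / (T1 * (1 + ratio))               # so Gamma1 = G- + G+ = 1/T1
gamma_plus = gamma_minus * ratio

c_ops = [np.sqrt(gamma_minus) * a, np.sqrt(gamma_plus) * a.dag()]
tlist = np.linspace(0, 200e-6, 201)
# Zero Hamiltonian: we work in the rotating frame and track only decay.
result = mesolve(0 * a.dag() * a, basis(N, 1), tlist, c_ops,
                 e_ops=[a.dag() * a])
print(result.expect[0][-1])                          # residual excited population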
Markovian Pure Dephasing
In the majority of cases, it is observed that the T_2 decay time of a qubit measured in a Ramsey experiment is not limited by T_1, or more specifically that T_2 < 2T_1 [3,17].
Much like the amplitude damping contribution characterized by the decay time T_1, we also consider a Markovian contribution to the pure dephasing dynamics of the system. To this aim, we add an additional jump operator to the Lindblad equation in Eq. 6, of the form Ĉ_φ = â†â, with the associated decay rate Γ_φ. The observed decoherence rate Γ_2 = 1/T_2 under this Markovian model is given by Γ_2 = Γ_1/2 + Γ_φ. The pure dephasing dynamics can be probed by measuring the pure dephasing decay time of an idling qubit with a Ramsey experiment, or by additionally applying a dynamical decoupling sequence such as a spin echo. However, the above Lindblad equation cannot describe the effects of the low-frequency noise that is the largest contributor to the pure dephasing, which will be introduced in the next section [58].
If the Lindblad equation accurately described the dynamics of our system, the T_2 time would remain unchanged even if dynamical decoupling sequences were used in the evolution. However, in practice this is often not the case, meaning that a different mechanism is responsible for most of the pure dephasing observed in typical flux-tunable transmons [59-62].
1/f -type flux noise
Comparing the results of Ramsey and spin-echo experiments in flux-tunable transmons, we often observe a discrepancy between these two decay times. This discrepancy immediately implies that the pure dephasing in our system has a sizeable contribution from low-frequency or non-Markovian noise. This low-frequency noise in flux-tunable transmons originates from the coupling to magnetic dipole moments of TLSs in the vicinity of the SQUID loop, or possibly also from the flux line [8], and typically exhibits a 1/f noise power spectrum [46,59,62].
For a flux-tunable transmon with two asymmetric Josephson junctions, the relationship between the qubit frequency and the external flux Φ threading the SQUID loop is given by [61]

ω(Φ) = (ω_max + E_C) [cos²(πΦ/Φ_0) + d² sin²(πΦ/Φ_0)]^{1/4} − E_C,   (7)

where Φ_0 is the flux quantum, and d is the junction asymmetry defined by the two Josephson energies as

d = (E_{J2} − E_{J1}) / (E_{J1} + E_{J2}).

The maximal frequency ω_max can be expressed in terms of the transmon charging energy E_C and the total Josephson energy E_{JΣ} = E_{J1} + E_{J2} as ω_max = √(8 E_C E_{JΣ}) − E_C. Flux noise can then be treated as a perturbation of the external flux threading the SQUID loop, Φ → Φ + δΦ.
In order to capture these time-correlated non-Markovian effects of the flux noise, we model the noise via a classical stochastic process composed of a large number of random telegraph signals. We have chosen this approach over a completely Gaussian Ornstein-Uhlenbeck process because a random telegraph signal can be interpreted as a classical version of the switching of a single magnetic TLS [45,63].
As mentioned, the noise couples to the qubit via the flux, meaning that the interaction term of the Hamiltonian has the following form [61]:

Ĥ_1(t) = (∂ω/∂Φ) δΦ(t) â†â,   (8)

where the above Hamiltonian describes one realization of the stochastic process δΦ(t), which is generated with a 1/f noise power spectral density. The flux dispersion of the transmon, ∂ω/∂Φ, can simply be computed from Eq. 7. Since in every quantum experiment we must perform a large number of shots, the stochastic process δΦ(t) will have a different realization for each shot. Therefore, when we average over a number of simulations, each with a different realization of δΦ(t) (with the same statistics), the unitary dynamics of the individual realizations result in a pure dephasing decay.
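A simple way to generate such realizations is to sum random telegraph signals whose switching rates are drawn log-uniformly over several decades, which yields an approximately 1/f spectrum. The following Python sketch illustrates this; all parameter values are placeholders:

import numpy as np

def one_over_f_flux_noise(tmax, dt, n_tls=40, gamma_min=1e2, gamma_max=1e7,
                          amplitude=1e-6, rng=None):
    """One realization of delta-Phi(t) as a sum of random telegraph signals.

    A log-uniform distribution of switching rates over [gamma_min,
    gamma_max] produces an approximately 1/f power spectrum in that
    band. Amplitude is in units of Phi_0 (placeholder value); the
    per-step flip probability assumes gamma * dt << 1.
    """
    rng = rng or np.random.default_rng()
    t = np.arange(0.0, tmax, dt)
    noise = np.zeros_like(t)
    rates = np.exp(rng.uniform(np.log(gamma_min), np.log(gamma_max), n_tls))
    for gamma in rates:
        state = rng.choice([-1.0, 1.0])           # random initial TLS state
        flips = rng.random(len(t)) < gamma * dt   # Poisson switching per step
        signs = state * np.cumprod(np.where(flips, -1.0, 1.0))
        noise += signs
    return t, amplitude * noise / np.sqrt(n_tls)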
Calibration errors
We have already mentioned that the pulse parameters in Eq. 3 must be calibrated carefully to ensure the best possible gate performance. Each of the parameters A and β, as well as the drive frequency ω_d, is tunable and must be optimized for each qubit separately.
a. Amplitude. The amplitude of the pulse A must be optimized so that the angle of rotation is set to the correct value. Small deviations will manifest themselves as over- or under-rotated gates. In the calibration procedure, it is often assumed that the amplitude of a π/2 rotation is simply half the amplitude of a π rotation, i.e. that the relation between the angle of rotation and the pulse amplitude is linear [64], which might not be accurate due to non-linearity in the control electronics [65,66]. This assumption results in a mismatched amplitude A, as often we wish to calibrate both π/2 and π rotations. We characterize the magnitude of this error by introducing the relative amplitude offset (A − A_φ)/A_φ, which characterizes the angle of over- or under-rotation relative to the ideal amplitude A_φ. This effect was observed to manifest itself in an under-rotation angle of up to 3 degrees in Ref. [66].
An additional source of discrepancy has also been reported in Ref. [67], where the amplitude of the pulse generated by the electronics was observed to fluctuate on a timescale of hours, by up to 0.3%. This mismatch would correspond to an angle error of approximately 0.5° or 1° for a π/2 or π rotation, respectively.
b. DRAG parameter. The parameter β determines the amplitude of the derivative quadrature, and a mismatch will result in increased leakage to the second excited state of the transmon. In the truncated qubit subspace, such an error is non-trace-preserving. We characterize the error in this parameter by considering the relative offset of the value compared to the ideal value β_φ.

c. Frequency detuning. Perhaps the most complex effect of miscalibrated parameters results from the frequency mismatch of the drive, i.e. when ω_d − ω ≠ 0. This mismatch results in a phase difference between the qubit and the drive which grows with time. This can be easily understood if we simplify our system with the following assumptions. We consider only the first two levels of the transmon, driven with a simplified pulse on only one quadrature, i.e. I(t) = s_0(t) and Q(t) = 0, with the pulse applied at time τ, and perform the rotating wave approximation to obtain the driving Hamiltonian in the rotating frame of the qubit, in the basis {|0⟩, |1⟩}:

Ĥ_d(t) = (s_0(t − τ)/2) [cos(δω t) σ̂_x + sin(δω t) σ̂_y].

This driving Hamiltonian can be interpreted as a rotation around an axis in the x-y plane of the Bloch sphere which slowly drifts in time with an angular frequency δω = ω − ω_d. While detunings large enough to have an effect on the timescale of a single gate might not be realistic, if we are performing a longer algorithm, the axis of a single-qubit rotation at time τ will be shifted by an angle of δω τ. While this error does not have any history dependence, its magnitude therefore depends on the time of the execution of the gate. In the above formula we have also implicitly defined the qubit Bloch sphere such that the drive and qubit are in phase at time t = 0. Realistically, since measuring the qubit frequency via a Ramsey experiment is very accurate, such effects might be a consequence of qubit frequency drifts due to the stochastic nature of the TLS environment of the transmon [68]. These drifts were observed to be in the range of a couple of kHz, with infrequent jumps of up to 20 kHz [29].
B. Two-Qubit Gates

This gate is implemented by introducing another non-computational element into the circuit, known as a coupler (C). The two computational transmons (Q_{1,2}) are then capacitively coupled to the coupler and to each other. The coupler is also a flux-tunable transmon; however, in the idling configuration the frequency of this element must be significantly detuned from the frequency of both computational transmons in order to suppress the interaction between them.
Such a system is modelled with the following Hamiltonian:

Ĥ = Σ_{i ∈ {Q1,C,Q2}} [ ω_i â_i† â_i + (α_i/2) â_i† â_i† â_i â_i ] + Σ_{i<j} g_{ij} (â_i† â_j + â_i â_j†).   (12)

The CZ gate is implemented by tuning the frequency of the coupler via a flux pulse closer to the frequency of the computational transmons, so that the coupler frequency is a time-dependent function ω_C(t). The couplings between the transmons, g_{ij} = β_{ij} √(ω_i ω_j), also depend on the frequencies, meaning that while g_{Q1Q2} is constant, g_{Q1C} and g_{Q2C} are also time dependent. The prefactors β_{ij} depend on the coupling capacitances, as well as the self-capacitances of the transmons in the lumped-element circuit model. We use a flattop-Gaussian pulse in the realization of our gates. This pulse shape is the result of a convolution of a rectangular pulse and a Gaussian function, with the equation

f(t) = ½ [ erf((t − τ_b)/(√2 σ)) − erf((t − τ_b − τ_c)/(√2 σ)) ],   (13)

where τ_c is the duration of the rectangular pulse, σ is the standard deviation of the Gaussian, τ_b is the rise time of the pulse, and erf(·) is the error function. The above formula still represents a pulse with infinite duration and fixed amplitude, so the actual coupler frequency has a time dependence of

ω_C(t) = ω_C(0) + A [f(t) − B],  0 ≤ t ≤ τ_c + 2τ_b.   (14)

The constant A represents the amplitude of the pulse, and the offset B avoids any discontinuities in the coupler frequency due to the finite duration of the pulse. In Eq. 14, we have set the gate duration to τ_c + 2τ_b, which makes the pulse symmetric and fixes B = f(0). The pulse in the above parametrization starts at time t = 0 and must be shifted accordingly if we choose to perform a pulse at a different time.
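A Python sketch of the flattop-Gaussian envelope and the resulting coupler-frequency trajectory, under the normalisation conventions we adopted above (which are an assumption):

import numpy as np
from scipy.special import erf

def flattop_gaussian(t, tau_c, tau_b, sigma):
    # Convolution of a rectangle of length tau_c with a Gaussian of
    # width sigma; tau_b is the rise time, and the full pulse occupies
    # 0 <= t <= tau_c + 2*tau_b.
    return 0.5 * (erf((t - tau_b) / (np.sqrt(2) * sigma))
                  - erf((t - tau_b - tau_c) / (np.sqrt(2) * sigma)))

def coupler_frequency(t, omega_idle, A, tau_c, tau_b, sigma):
    # Amplitude A on top of the idling frequency, with the offset
    # B = f(0) removing the discontinuity at the pulse edges.
    B = flattop_gaussian(0.0, tau_c, tau_b, sigma)
    return omega_idle + A * (flattop_gaussian(t, tau_c, tau_b, sigma) - B)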
Since the Fourier transform of a convolution is the product of the individual Fourier transforms, the Gaussian function suppresses the slowly decaying high-frequency tail of the rectangular function, thus minimizing the probability of exciting higher states.
We have also considered the effects of the finite sampling of the electronics with a rate of 2.4 GHz, as in Eq. 5; however, after the flux pulse is passed through a low-pass filter with a cut-off of 1 GHz, as in Ref. [10], almost no difference compared to the analytical pulse shape in Eq. 13 was observed.
As the coupler flux pulse is being applied, the level repulsion involving the levels of the double-excitation manifold results in an avoided crossing between the states |1_{Q1} 0_C 1_{Q2}⟩ and |0_{Q1} 0_C 2_{Q2}⟩, where the subscripts refer to the qubits (Q) or the coupler (C). In this formulation, the frequency of Q_2 is larger than the frequency of Q_1. The population of the computational |1_{Q1} 0_C 1_{Q2}⟩ state then undergoes a full Rabi oscillation, which implements a conditional phase [14].
The benefits of implementing the gate in this specific manner are two-fold. First of all, unlike directly coupled computational transmons, where the interaction strength only asymptotically decays to zero with the qubit-qubit detuning, a well-designed system with a tunable coupler has one or two coupler frequencies at which the interaction between the qubits can, in theory, be tuned to exactly zero [13], with an effective coupling strength of approximately 1 kHz reported in Ref. [9]. Furthermore, since the gate is based on a non-adiabatic interaction, the gates are relatively fast, typically on a timescale of 30 to 100 ns [9,10,13,14,17].
Obviously, the introduction of a non-computational element such as the coupler is a large issue for the characterization of the system. The coupler, which has its own error sources, cannot be read out, and neither are we able to apply microwave pulses to it, meaning that we can only perform z-axis rotations via flux tuning. We mention here that adding additional control or readout lines to the chip should be avoided due to the increased risk of cross-talk, as well as scalability issues arising from the increased heat load of additional control lines in the cryostat. Moreover, the population of the computational |11⟩ state is transferred outside of the computational subspace during the gate; and while measurements of the second-excited-state population are doable, they are typically not implemented, as they are not needed for computation. All in all, this means that we need to characterize a system without being able to measure or control a significant part of the Hilbert space.
Additionally, we would like to stress that the computational basis is now given by the eigenstates of the full Hamiltonian in Eq. 12, and is therefore slightly delocalized due to the coupling between the transmons [14]. We identify the computational-state eigenbasis via maximum overlap with the local basis, so that the computational |ij⟩ state, where i, j ∈ {0, 1}, is the eigenstate |ψ⟩ of Eq. 12 which maximizes the overlap |⟨i_{Q1} 0_C j_{Q2}|ψ⟩|². The basis states are therefore defined with the coupler in the ground state. This definition of the computational basis is valid only when the coupler is significantly detuned from the qubit frequencies and the basis can be uniquely defined, as the overlaps in this regime are close to one.
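In simulation, this maximum-overlap identification reduces to a few lines; the following QuTiP sketch (the function name and the three-level truncation are our own choices) illustrates it:

import numpy as np
from qutip import tensor, basis

def computational_basis(H, n_levels=3):
    """Identify the dressed computational states |i 0_C j> of a
    Q1-C-Q2 Hamiltonian H by maximum overlap with the bare states."""
    _, evecs = H.eigenstates()
    comp = {}
    for i in (0, 1):
        for j in (0, 1):
            bare = tensor(basis(n_levels, i), basis(n_levels, 0),
                          basis(n_levels, j))        # coupler in ground state
            overlaps = [abs(bare.overlap(v)) ** 2 for v in evecs]
            comp[(i, j)] = evecs[int(np.argmax(overlaps))]
    return comp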
Amplitude damping errors
Much like in the single-qubit case, errors related to Markovian T_1 decay are still a major source of infidelity for the two-qubit gate system.
The simplest approach to modelling this type of incoherent dynamics would be to consider a Lindblad equation with the same jump operators as in Eq. 6, but localized to each transmon. Neglecting any heating transitions, it is obvious that the steady state of such a system is simply the state |0_{Q1} 0_C 0_{Q2}⟩. It can easily be checked that this is not the ground state of the Hamiltonian with non-zero coupling coefficients g_{ij}, and therefore such a model is flawed when trying to describe a system in thermodynamic equilibrium with its environment.
While this local approximation to the Lindblad dynamics tends to be accurate in the dispersively coupled regime, the diabatic two-qubit gate is operated in the strongly coupled or near-resonant regime, where the eigenstates of the system are significantly delocalized from the physical transmon states [14].In this regime, the local approximations will not accurately describe the dynamics.
We obtain a more accurate description by following a microscopic derivation of the Lindblad equation [55] under the assumption that each transmon is coupled to its own bath. This is supported by the idea that the main sources of T_1 decay are TLSs with electric dipoles, and the electric field of each transmon is strongest in its immediate vicinity. Significant electromagnetic interactions between transmons (beyond the implemented capacitive couplings) would result in large crosstalk and therefore dysfunctional qubits. We therefore derive a Lindblad equation where each transmon is coupled to a number of TLSs, similarly to the standard tunnelling model analyzed in Refs. [45,69].
The resulting global Lindblad equation that describes the non-unitary decay of a coupled multi-body system obviously preserves the Lindblad form, but with jump operators that couple the eigenstates of the system. This can be written down as

ρ̇ = −i[Ĥ + Ĥ_LS, ρ] + Σ_{ab} Γ_{ab} ( Ĉ_{ab} ρ Ĉ_{ab}† − ½ {Ĉ_{ab}† Ĉ_{ab}, ρ} ),

where the density matrix ρ refers to the whole system of three transmons. The jump operators represent transitions between eigenstates (denoted with indices a and b), and are defined as Ĉ_{ab} = |a⟩⟨b|. The Lamb-shift Hamiltonian is denoted by Ĥ_LS. A global model is needed in order to explain the T_1 times observed during the operation of an adiabatic CZ gate in Ref. [70], where the hybridization of the computational states with the coupler results in a reduced decay time during the operation of the gate, thus further discouraging the use of local jump operators together with the T_1 times measured at the idling point of the system when modelling the incoherent dynamics.
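Constructing the eigenbasis jump operators for such a global Lindblad equation is straightforward once the rates Γ_ab are known; a QuTiP sketch, with the rates taken as given (their computation from the TLS-bath model is deferred to Appendix B), is:

import numpy as np

def global_jump_operators(H, rates):
    """Build eigenbasis jump operators C_ab = |a><b| for a coupled
    transmon system; rates[(a, b)] is the transition rate b -> a,
    assumed precomputed from the microscopic bath model.  H is a
    QuTiP Qobj."""
    _, evecs = H.eigenstates()
    c_ops = []
    for (a, b), gamma in rates.items():
        if gamma > 0:
            # |a><b| with the appropriate rate; heating partners would
            # follow from the detailed balance condition.
            c_ops.append(np.sqrt(gamma) * evecs[a] * evecs[b].dag())
    return c_ops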
The details of this derivation, describing the calculation of the rates Γ_{ab} based on the coupling to a TLS bath, can be found in Appendix B. As in the single-qubit case, we also include heating transitions via the detailed balance condition, similarly to the model used in Ref. [4]. We mention here that, as an approximation, we assume that T_1 is independent of the qubit frequency, since the most commonly observed frequency dependence actually exhibits seemingly random fluctuations due to resonant couplings to specific TLSs in the environment [44,69].
It is also worth noting that, while the Lindblad equation considered here obviously describes a Markovian evolution of the full system, the global approach will induce incoherent leakage transitions outside of the computational subspace. Thus, the evolution of the computational subspace in this case can actually be non-Markovian. This can be seen if we consider the transition from the qubit to the coupler excited state, |0_{Q1} 0_C 1_{Q2}⟩ → |0_{Q1} 1_C 0_{Q2}⟩, as an example. If the coupler excited state does not instantaneously decay into the ground state of the system, the presence of coupler excitations will distort the energy levels of the computational basis and therefore negatively affect subsequent gate operations. Since the population of the coupler excited state depends on the coherence times as well as on the number of gates performed in the past, the latter history dependence means that this error process is non-Markovian.
1/f -type flux noise
In a similar way to the single-qubit case, slowly varying magnetic flux noise due to spin defects or classical electronics is also present in two-qubit gates.
This noise is exacerbated in the tunable coupler, where the pure dephasing times are observed to be up to an order of magnitude shorter than those of the computational transmons [10]. This is a natural consequence of the different design of the coupler, which must be easily tunable over a large frequency range, so that the typical trade-off between noise sensitivity and control must be made.
By making the adiabatic approximation, i.e. assuming that the noise varies slowly enough not to induce any transitions between the eigenstates, which is justified by the large magnitude of the low-frequency part of the spectrum, the susceptibility of the coupled system to the slowly varying flux can be characterized by generalizing the single-qubit formula from Eq. 8 to

$$\delta\varepsilon_a(t) = \sum_{i} \frac{\partial \varepsilon_a}{\partial \omega_i}\, \frac{\partial \omega_i}{\partial \Phi_i}\, \delta\Phi_i(t), \tag{16}$$

where we have written the multi-body Hamiltonian from Eq. 12 in its eigenbasis with eigenstates |a⟩ and corresponding eigenenergies ε_a. Besides the regular flux susceptibility of each transmon frequency ∂ω_i/∂Φ_i, the coupling of the system (as well as the dependence of the coupling coefficients g_ij on the frequencies) is reflected in the first coefficient, of the form ∂ε_a/∂ω_i. In the uncoupled case with g_ij = 0, this coefficient can only be integer or zero, and the above equation reduces to the single-qubit case of Eq. 8. During the gate operation, in the highly hybridized regime, this coefficient accounts for the effect of the local flux noise on the hybridized system.
Since the coupler frequency is tuned during the operation of the gate, this also affects the flux dispersion of the coupler, ∂ω_C/∂Φ_C, as seen from Eq. 7.
The noise is again assumed to be localized to each transmon without any correlation between them. For the noise from the classical electronics this is obvious, since each transmon is coupled to its own flux line with very little crosstalk between them, and the magnetic TLSs are assumed to produce local noise by the same arguments as for the charge noise. It is worth mentioning that since the computational basis consists of hybridized eigenstates, the coupler flux noise in particular will result in correlated pure dephasing of the computational states.
As with a single qubit, many realizations of the classical stochastic process δΦ(t) must be simulated and uniformly averaged over.
The same model was used to explain the pure dephasing in a tunable-coupler setup in Ref. [14] and was previously tested in Ref. [10], with good agreement observed.
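As an illustration of how such trajectories can be produced, the sketch below generates approximately 1/f-distributed flux-noise realizations by spectrally filtering white noise. This is one common synthesis method rather than necessarily the one used here, and the infrared cutoff and normalization conventions are our assumptions.

```python
import numpy as np

def one_over_f_trajectories(n_traj, n_steps, dt, rms_amplitude, f_min=None, seed=0):
    """Generate classical flux-noise trajectories with an approximate
    1/f power spectrum by filtering white noise in frequency space."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n_steps, d=dt)
    # Infrared cutoff regularizing the 1/f divergence; by default the
    # inverse duration of the simulated experiment (an assumption).
    if f_min is None:
        f_min = 1.0 / (n_steps * dt)
    filt = np.zeros_like(freqs)
    filt[1:] = 1.0 / np.sqrt(np.maximum(freqs[1:], f_min))  # |S(f)|^(1/2) ~ f^(-1/2)
    white = rng.normal(size=(n_traj, n_steps))
    traj = np.fft.irfft(np.fft.rfft(white, axis=1) * filt, n=n_steps, axis=1)
    # Normalize each realization to the requested rms flux amplitude.
    traj *= rms_amplitude / traj.std(axis=1, keepdims=True)
    return traj
```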
Calibration errors
While the flattop-Gaussian pulse from Eqs. 13 and 14 has a number of free parameters, only two of them need to be tuned in order to calibrate the gate.
Typically, the rise time τ_b and standard deviation σ are fixed such that τ_b > σ, and the amplitude A and rectangular pulse duration τ_c are tuned.
As mentioned previously, the gate is based on a Rabi oscillation between the levels $|1_{Q1}\, 0_{C}\, 1_{Q2}\rangle$ and $|0_{Q1}\, 0_{C}\, 2_{Q2}\rangle$, so the most obvious consequences of imperfect calibration are either the population not completing the full Rabi cycle, resulting in leakage to the $|0_{Q1}\, 0_{C}\, 2_{Q2}\rangle$ state, or the population returning with the wrong conditional phase.
Additionally, errors can occur in the initial and final phases of ramping the pulse up and down. The errors here are less predictable; typically we observe Landau-Zener transitions to other non-computational states. This error occurs even if the pulse is perfectly calibrated and can only be eliminated by choosing a different, more optimal pulse shape [14].
Pulse distortion
Since no waveform generator is perfect and the pulse passes through a number of filters before reaching the transmons, certain pulse distortions have been observed.
First of all, the frequency of the coupler is tuned via a flux pulse. The relationship between the external flux threading the SQUID loop and the coupler frequency was described in Eq. 7. Typically, the asymmetry parameter d of the coupler transmon is designed to be either zero or very small in order to ensure a larger tunability; however, fabrication inaccuracies may cause deviations from design values. Assuming the coupler is designed with d = 0, current fabrication inaccuracies might result in d ∼ 10^{-2}, or even d ∼ 10^{-3} with the use of laser annealing [71]. Since this translates into d² ∼ 10^{-4}-10^{-6} and gate operation is done in the regime where the tangent term does not diverge, we neglect the asymmetry term from now on.
We model flux distortion errors with the formula

$$\tilde{\Phi}(t) = \int_{-\infty}^{t} dt'\, K(t - t')\, \Phi(t'), \tag{17}$$

where Φ̃(t) is the distorted flux and Φ(t′) is the desired flux pulse train, meaning that the functions span the time period of the whole algorithm. The desired flux pulse is convolved with a kernel function K(t − t′) in a time-local way, so that the current distorted flux depends only on the past and not the future.
Based on the measurements of the step response of the electronics in Refs. [9, 72], we parametrize the kernel function as

$$K(t) = \delta(t) + \sum_{n} \frac{A_n}{\tau_n}\, e^{-t/\tau_n}, \qquad t \geq 0, \tag{18}$$

where the delta function in the first term corresponds to a perfect pulse, i.e. Φ̃(t) = Φ(t) if A_n = 0 ∀n, and we use a handful of exponential tail distortions with a wide range of typical parameters. Each distortion is parametrized by its amplitude A_n and timescale τ_n. While the amplitudes must be relatively small, A_n ≲ 0.01, in order to achieve reasonably high fidelities, the timescales τ_n can cover a wide range from 10 ns up to 1 µs and possibly longer [9]. Distortions with timescales significantly exceeding one gate time will not distort a single pulse shape, but will instead offset the coupler frequency during the idling period away from the frequency where the residual ZZ-interaction is zero. The magnitude of this offset depends on the exact value of the correlation timescale τ_n and the number of pulses performed within the time frame specified by this parameter. This makes the error completely history dependent and therefore non-Markovian.
This offset manifests itself as an unwanted conditional phase during idling, as well as a miscalibrated pulse. The miscalibration originates from the fact that we still perform a pulse of the same amplitude, even though the initial coupler frequency has been offset. If the correlation timescale of the distortion is shorter than or comparable to the gate duration, it results in a deformation of the pulse shape, which again leads to miscalibration errors, as well as more unwanted leakage transitions during the ramping up and down of the pulse. However, such an error also has more Markovian behaviour, since the memory of the flux distortion is shorter.
If τ_n is comparable to a gate duration, a single pulse will rise and fall more slowly than expected, and the non-Markovianity of such an evolution is smaller.
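A minimal discrete implementation of the distortion model of Eqs. 17 and 18 could look as follows; the A_n/τ_n normalization of the exponential tails matches the kernel written above, and the function name is hypothetical.

```python
import numpy as np

def distort_flux(phi, dt, amplitudes, taus):
    """Apply the kernel K(t) = delta(t) + sum_n (A_n/tau_n) exp(-t/tau_n)
    (t >= 0) to a desired flux pulse train phi by causal convolution.

    phi        : desired flux samples Phi(t) on a grid with spacing dt
    amplitudes : tail amplitudes A_n (dimensionless, typically <= 0.01)
    taus       : tail timescales tau_n in the same units as dt
    """
    t = np.arange(len(phi)) * dt
    tail = np.zeros_like(t, dtype=float)
    for A_n, tau_n in zip(amplitudes, taus):
        tail += (A_n / tau_n) * np.exp(-t / tau_n)
    # The delta term reproduces the ideal pulse; the tail is a causal
    # (past-only) convolution, discretized with step dt.
    distorted = phi + np.convolve(phi, tail, mode="full")[: len(phi)] * dt
    return distorted
```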
IV. EFFICIENT SIMULATIONS
In the previous section, we described a variety of error sources, each with its own characteristics, ranging from leakage and coherent errors to non-unitary and non-Markovian errors. Since we would like to simulate a gate with all of these errors, possibly add more in the future, or even perform parameter sweeps, the problem must be approached with careful consideration.
Since solving noisy pulse-level dynamics always reduces to solving some sort of master equation, the addition of 1/f noise introduces another degree of complexity due to the required averaging over many, typically at least hundreds of, trajectories. Moreover, solving each trajectory involves solving a stochastic Schrödinger equation together with non-unitary dynamics.
We therefore describe below how we simulate the time dynamics of the quantum system of interest, as well as the state preparation and measurement.
A. State preparation
We have already mentioned in Sec. III A 1 that transmon residual excited-state populations are routinely observed to significantly exceed what is predicted by the Maxwell-Boltzmann distribution at the cryostat temperature, which is typically around 15 mK.
Besides including the heating transitions in the Lindblad equations 6 and 15, if we wish to simulate a realistic experiment, the residual excited-state population should be taken into account in the initial state preparation. Therefore, for realistic experiments, we assume that the initial state is a thermal state,

$$\hat{\rho}_{\mathrm{th}} = \frac{e^{-\beta \hat{H}}}{Z}, \qquad Z = \mathrm{tr}\!\left\{e^{-\beta \hat{H}}\right\},$$

with an effective temperature, specified by β, typically in the range of 50 mK [52-54], where Ĥ is either the single- or two-qubit Hamiltonian from Eqs. 1 or 12. The partition function of the system, enforcing normalization, is denoted by Z. This state is also the steady-state solution of the Lindblad equations 6 and 15.
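For concreteness, a thermal initial state at a given effective temperature can be constructed as in the sketch below; units with ħ = 1 and the Hamiltonian in angular-frequency units are our assumptions.

```python
import numpy as np

def thermal_state(H, T_eff_mK):
    """Gibbs state rho = exp(-beta H) / Z at effective temperature T_eff.

    Assumes hbar = 1 and H given in angular-frequency units (rad/s),
    so beta = hbar / (k_B T) carries units of s/rad.
    """
    kB_over_hbar = 1.380649e-23 / 1.054571817e-34   # (rad/s) per kelvin
    beta = 1.0 / (kB_over_hbar * T_eff_mK * 1e-3)
    eps, V = np.linalg.eigh(H)
    weights = np.exp(-beta * (eps - eps.min()))      # shifted for stability
    weights /= weights.sum()                          # divide by Z
    return (V * weights) @ V.conj().T                 # sum_j w_j |v_j><v_j|
```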
B. Time evolution
In order to simplify the evolution, we first vectorize the density matrix of the full system, so that the operator ρ is transformed into a vector, ρ → |ρ⟩, and the more general master equation Eq. 15 in Lindblad form is reduced to

$$\frac{d}{dt}|\rho\rangle = \mathcal{L}(t)\,|\rho\rangle. \tag{20}$$

The form of the superoperator L then depends on the exact vectorization employed. Perhaps the simplest method is to stack the rows of the density matrix, which results in the Liouvillian superoperator L with the matrix form

$$\mathcal{L} = -i\left(\hat{H} \otimes \mathbb{1} - \mathbb{1} \otimes \hat{H}^{T}\right) + \sum_{ab} \Gamma_{ab}\left[\hat{C}_{ab} \otimes \hat{C}_{ab}^{*} - \frac{1}{2}\left(\hat{C}_{ab}^{\dagger}\hat{C}_{ab} \otimes \mathbb{1} + \mathbb{1} \otimes \hat{C}_{ab}^{T}\hat{C}_{ab}^{*}\right)\right]. \tag{21}$$

Once we add the stochastic contribution due to 1/f noise from Eqs. 8 and 16 to the evolution, we are forced to solve a master equation a large number of times with different realizations of the classical stochastic process. However, we can make use of the fact that, for all practical purposes, the noise in the system is weak, and the evolution can be separated into disjoint parts [73].
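With row-stacking, vec(AρB) = (A ⊗ Bᵀ) vec(ρ), so the Liouvillian of Eq. 21 can be assembled directly with Kronecker products, as in the following sketch (names are ours):

```python
import numpy as np

def liouvillian(H, jump_ops, rates):
    """Row-stacking vectorization: rho -> rho.reshape(-1), for which
    vec(A rho B) = kron(A, B.T) vec(rho). Returns the (D^2, D^2)
    Liouvillian matrix L with d|rho>/dt = L |rho> (Eq. 21)."""
    D = H.shape[0]
    I = np.eye(D)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))       # commutator part
    for G, C in zip(rates, jump_ops):
        CdC = C.conj().T @ C
        L += G * (np.kron(C, C.conj())                 # C rho C^dagger
                  - 0.5 * (np.kron(CdC, I)             # anticommutator
                           + np.kron(I, CdC.T)))
    return L
```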
We take this into account by discretizing the evolution into smaller time steps ∆t and approximating each time step with a first-order truncated propagator. More formally, the general solution to Eq. 20 with a time-dependent L(t) can be written in terms of a generalized propagator,

$$\mathcal{U}(t_2, t_1) = \mathcal{T} \exp\left(\int_{t_1}^{t_2} \mathcal{L}(t)\, dt\right),$$

where T denotes time ordering. We can further decompose the superoperator L(t) = −iG(t) − iN(t) + D(t) into three separate parts:

• G(t), representing the closed-system dynamics, corresponding to the superoperator form of the Hamiltonians in Eq. 1 or 12.

• N(t), corresponding to a single realization of the classical 1/f noise process from Eqs. 8 and 16.

• D(t), corresponding to the non-unitary dynamics of any jump operators in the Lindblad equations 6 and 15.
We can proceed to write down the formal solution of Eq. 20 over a single time step with a Dyson series expansion,

$$\mathcal{U}(t+\Delta t, t) \approx \mathbb{1} + \int_{t}^{t+\Delta t} dt' \left[-i\mathcal{G}(t') - i\mathcal{N}(t') + \mathcal{D}(t')\right] \approx \mathcal{U}_{\mathcal{G}}\left[\mathbb{1} - i\!\int_{t}^{t+\Delta t}\! dt'\, \mathcal{N}(t') + \int_{t}^{t+\Delta t}\! dt'\, \mathcal{D}(t')\right] \approx \mathcal{U}_{\mathcal{G}}\, \mathcal{U}_{\mathcal{N}}\, \mathcal{U}_{\mathcal{D}}. \tag{26}$$

In the second-to-last step we have replaced the linearized propagator with the full propagator of the unitary generator G(t). This is done since it significantly lowers the error of the expansion, as the magnitude of the elements of the matrix form of G(t) is much larger compared to the noise generators N(t) and D(t). The leading error terms of a single step with duration ∆t are of order O(Γω∆t²), resulting in a global evolution error of O(ΓωT∆t), where Γ is either the pure dephasing or amplitude decay rate, ω is the largest frequency of the system, T is the duration of the simulation and ∆t is the duration of a single time step. While the magnitude of the effect of the noise is of the order of ε ∼ Γ₁T for amplitude decay and ε ∼ (Γ_φ T)² for the 1/f noise, this means that for accurate results we require ω∆t ≪ ε.
The formula in Eq. 26 splits the evolution first into a large number of small time steps, and then each time step into a separate contribution from each environmental noise source. In the last line we have also used the full propagator for the noise, instead of the approximate linearized form, in order to avoid unphysical properties of the density matrix.
We also mention that splitting the noisy and unitary evolutions is a very common approximation when simulating [74] and benchmarking [27, 75] current noisy devices, even though the error scaling is not very favourable. Here we have gone one step further by also including the effect of the noise during the gate operation.
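A stripped-down version of the resulting split-step propagation might look as follows; in the actual scheme described next, U_G and U_D are precomputed once and reused across all trajectories, which this sketch omits for brevity.

```python
import numpy as np
from scipy.linalg import expm

def split_step_evolution(G_t, N_t, D_t, rho0_vec, dt, n_steps):
    """First-order split propagation of the vectorized master equation,
    Eq. 26: per time step, U ~ U_G U_N U_D.

    G_t, N_t, D_t : callables returning the (D^2, D^2) superoperator
                    matrices of the Hamiltonian, 1/f-noise and
                    dissipative generators at time t
    rho0_vec      : vectorized initial density matrix |rho>
    """
    rho = rho0_vec.copy()
    for n in range(n_steps):
        t = n * dt
        U_G = expm(-1j * G_t(t) * dt)   # full unitary propagator
        U_N = expm(-1j * N_t(t) * dt)   # one realization of the 1/f noise
        U_D = expm(D_t(t) * dt)         # dissipative propagator
        rho = U_G @ (U_N @ (U_D @ rho))
    return rho
```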
The main computational advantage of this is that the propagators of the unitary and Lindblad dynamics, U_G and U_D, are always identical, while the propagators U_N depend on the particular trajectory of the 1/f noise we are considering. This means that instead of solving the master equation for each trajectory, we just need to generate the propagators U_G and U_D by propagating D² linearly independent states, where D is the dimension of the system. Since the 1/f noise acts in the eigenbasis of the system, we need to diagonalize the Hamiltonian once per time step, after which the exponentiation of the matrix is trivial. Typically we want to average over approximately N_traj ∼ 1000 trajectories, with a system of dimension D. Using N_∆t = T/∆t time steps, the complexity of the evolution is as follows:

1. Propagate D linearly independent pure states to obtain the propagators U_G, corresponding to the white dotted area in Fig. 3.

2. Propagate D² linearly independent states to obtain the propagators U_D, corresponding to the blue diagonal patterned area in Fig. 3.

3. Multiply the propagators U_G and U_D at each time step to obtain the combined propagator U_{G,D}, requiring N_∆t multiplications of D² × D² matrices.

4. Diagonalize the Hamiltonian at each time step, i.e. N_∆t diagonalizations of D × D matrices.

5. Generate the propagators U_N, corresponding to the green area in Fig. 3, by simple numerical integration in the eigenbasis, and then transform the generator into the correct frame, resulting in 2 matrix multiplications of size D × D per time step, i.e. 2N_∆t matrix multiplications per trajectory.
6. Once all the propagators are generated, multiply the state at each time step first with the precomputed double propagator U_{G,D} and then with the corresponding propagator U_N, meaning that a D² × D² matrix is multiplied with a D²-dimensional vector twice per time step for each trajectory.
The most numerically demanding steps, depending on the exact size of the system, are the matrix multiplications in step 3, with a complexity of O(N_∆t D⁶), and the actual propagation in step 6, with complexity O(N_traj N_∆t D⁴).
C. Measurement
We assume standard dispersive measurements of the transmons in the computational subspace. Even though the same setup could also accommodate the readout of the higher excited states, this readout is typically not calibrated, and we assume we have no access to it. Additionally, as mentioned previously, the coupler is not connected to any readout resonator. In the more general two-qubit case, the probability of measuring the state k is given by

$$P(k|\rho) = \frac{\mathrm{tr}\{\hat{P}_k\, \hat{\rho}_q\}}{\mathrm{tr}\{\hat{\rho}_q\}}, \tag{27}$$

where the density matrix ρ̂_q is defined in the computational subspace, meaning that it is extracted from the density matrix of the full system ρ. This procedure is described in more detail in the following Sec. V A. Equivalently, the projector P̂_k is also defined in the computational subspace.
The normalization in Eq. 27 is needed due to the presence of leakage to the excited states of the transmons, so that the probability distribution P(k|ρ) is normalized. Once the probability distribution is obtained, we sample N_shots results from it in order to describe finite-sampling effects. After this procedure we obtain a vector of probabilities of measuring each state (or bit string), P(ρ) = (P(00|ρ), P(01|ρ), P(10|ρ), P(11|ρ))^T.
As readout errors in current hardware are large and unavoidable, we model them by transforming the obtained vector of probabilities with a misclassification matrix of the form

$$\tilde{P}(k|\rho) = \sum_{i \in \{00,01,10,11\}} P(k|i)\, P(i|\rho), \tag{28}$$

together with the preservation of probability, $\sum_k P(k|i) = 1$.
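The measurement model of Eqs. 27 and 28, including leakage renormalization, misclassification and finite sampling, reduces to a few lines; the confusion-matrix convention (columns summing to one) is our choice.

```python
import numpy as np

def sample_readout(rho_q, confusion, n_shots, rng=None):
    """Dispersive readout model for the computational subspace.

    rho_q     : (4, 4) computational-subspace density matrix, whose
                trace may be < 1 due to leakage (hence Eq. 27 normalizes)
    confusion : (4, 4) misclassification matrix with entries P(k|i),
                each column summing to 1 (Eq. 28)
    n_shots   : number of measurement shots to draw
    """
    rng = rng if rng is not None else np.random.default_rng()
    p_ideal = np.real(np.diag(rho_q))
    p_ideal = p_ideal / p_ideal.sum()      # Eq. 27 normalization
    p_meas = confusion @ p_ideal           # Eq. 28 misclassification
    counts = rng.multinomial(n_shots, p_meas)
    return counts / n_shots                # estimated bit-string frequencies
```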
V. ERROR BUDGETS OF QUANTUM GATES
A. Gate performance measure

The first step in benchmarking a quantum gate is to define a performance measure, which is typically a distance measure between the desired and observed outputs of the gate. The most common measures are state or process fidelities, the trace distance or diamond norms [76].
Since the simulations are based on state propagation, we choose to examine the gate performance in terms of the averaged state fidelity, defined as

$$\bar{F} = \underset{|\psi\rangle}{\mathrm{avg}}\; F\!\left(U_{\mathrm{id}}\,|\psi\rangle\langle\psi|\,U_{\mathrm{id}}^{\dagger},\; \mathcal{E}\!\left[\,|\psi\rangle\langle\psi|\,\right]\right), \tag{29}$$

where U_id represents the unitary of the gate we are trying to implement and E[•] is the quantum dynamical map of the noisy evolution corresponding to the chosen gates. The average should be taken over a set of states {|ψ⟩} which is close to the Haar random measure. The state fidelity is defined as

$$F(\rho, \sigma) = \left(\mathrm{tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\right)^{2}, \tag{30}$$

so that if one of the density matrices ρ or σ corresponds to a pure state, the state fidelity is the modulus squared of the overlap of the density matrix with the pure state. However, the full Hilbert space of the hardware is much larger than the computational basis, meaning that it is not straightforward to reduce the whole system to its computational basis. Moreover, this is also relevant for the description of the measurement procedure in Eq. 27.
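A Monte-Carlo estimate of the averaged state fidelity of Eq. 29 for pure-state inputs, where Eq. 30 reduces to an overlap, can be sketched as follows; the sample size and seed are arbitrary choices.

```python
import numpy as np

def average_state_fidelity(U_id, noisy_map, n_states=200, seed=1):
    """Estimate Eq. 29 by averaging over Haar-random pure states.

    U_id      : ideal gate unitary, shape (D, D)
    noisy_map : callable rho -> evolved density matrix (the map E[.])
    """
    rng = np.random.default_rng(seed)
    D = U_id.shape[0]
    fids = []
    for _ in range(n_states):
        # Normalized complex Gaussian vectors are Haar-distributed.
        psi = rng.normal(size=D) + 1j * rng.normal(size=D)
        psi /= np.linalg.norm(psi)
        target = U_id @ psi
        rho_out = noisy_map(np.outer(psi, psi.conj()))
        # For a pure target, Eq. 30 reduces to <target|rho_out|target>.
        fids.append(np.real(target.conj() @ rho_out @ target))
    return float(np.mean(fids))
```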
In the case of a single qubit, it is straightforward to define a qubitized density matrix in the computational subspace by simply considering

$$\left[\hat{\rho}_{\mathrm{comp}}\right]_{ij} = \langle \phi_i |\, \rho\, | \phi_j \rangle, \tag{31}$$

where the set of computational basis states in the single-qubit case is |φ_i⟩ ∈ {|0⟩, |1⟩}. We note that such a definition of the qubitized density matrix ρ̂_comp has trace less than or equal to one, i.e. tr{ρ̂_comp} ≤ 1, due to leakage outside of the computational subspace. We do not enforce normalization when assessing the performance, meaning that the value of the fidelity is limited by the trace of the extracted computational-subspace density matrix. Since the original matrix ρ is positive semi-definite, it follows that ρ̂_comp is also positive semi-definite, as it is defined as a principal submatrix of ρ [77].
By this definition, the qubitized density matrix is also obviously Hermitian. If the normalization is taken into account as well, such a qubitized density matrix is completely physical.
The above definition can easily be extended to the multi-qubit case with couplers by considering the computational states where the coupler is not excited, namely $|\tilde{\phi}_i\rangle \in \{|\widetilde{000}\rangle, |\widetilde{001}\rangle, |\widetilde{100}\rangle, |\widetilde{101}\rangle\}$, with the ordering qubit 1, coupler, qubit 2. We stress here again that the computational states are given by the eigenstates of the multi-qubit Hamiltonian and are identified via the maximum-overlap rule, which is why we have denoted these Hamiltonian eigenstates with a tilde.
However, there is also a different definition of the qubitized density matrix in this case, which is closer to the actual measured state, if one had access to it. Since we cannot probe the coupler states in any way, it would make sense to trace out the coupler degrees of freedom and then restrict the Hilbert space as in Eq. 31, which results in a slightly different definition. By this definition, for the two-qubit gate with one coupler, the qubitized density matrix is computed as

$$\left[\hat{\rho}_{\mathrm{comp}}\right]_{ij} = \langle \phi_i |\, \mathrm{tr}_C\{\tilde{\rho}\}\, | \phi_j \rangle, \tag{32}$$

where the full density matrix ρ̃ is first transformed into the eigenbasis of the full Hamiltonian, after which the trace over the coupler states is performed. The same arguments as in the previous definition still hold to show that this density matrix is physical. The states |φ_i⟩ now no longer contain the coupler degrees of freedom, but are still defined in the Hamiltonian eigenbasis, and are therefore given by $|\phi_i\rangle \in \{|\widetilde{00}\rangle, |\widetilde{01}\rangle, |\widetilde{10}\rangle, |\widetilde{11}\rangle\}$.
While the latter definition from Eq. 32 might be closer to the experimental measurement, we define the fidelity of an operation with the first definition from Eq. 31, which takes into account the fact that any leakage into the coupler's excited states is undesired, since it may corrupt subsequent gate operations.
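Given the computational eigenstates identified by the maximum-overlap rule, the extraction of Eq. 31 is a principal submatrix, as in the sketch below; the Eq. 32 variant would additionally trace out the coupler index after rotating into the eigenbasis, which we omit here.

```python
import numpy as np

def qubitize(rho, comp_states):
    """Eq. 31-style extraction of the computational-subspace density
    matrix as a principal submatrix of rho.

    rho         : (D, D) full-system density matrix
    comp_states : (D, 4) matrix whose columns are the computational
                  eigenstates identified via the maximum-overlap rule
    The trace of the result is <= 1 in the presence of leakage.
    """
    return comp_states.conj().T @ rho @ comp_states
```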
When considering contributions of individual error sources, we limit ourselves to the effects of a single error source being present. This means that we do not independently simulate the performance of the gate for arbitrary combinations of errors, which would also capture any potential interplay between them.
B. Individual error source contributions
We have shown in the previous section that many of the error sources affecting quantum gates may be non-Markovian or time dependent, meaning that the contribution of such an error source is also time or history dependent and cannot be condensed into a single number.
A gate performance measure that would take into account such effects has therefore not yet been defined and is also out of the scope of this work.
Instead, we choose to monitor the evolution of the averaged state fidelity after applying a series of gates. For non-unitary Markovian noise sources, such a contribution increases linearly, and for coherent errors quadratically, provided the error is small. More complex errors might display more complex behaviour, with non-monotonic decays being associated with non-Markovianity [21].
C. Examples with typical parameters
Now that we are able to simulate quantum gate operations with more realistic noise models, we can examine the effect of individual error sources on a series of quantum gates.
Single-Qubit Gates
Here we analyse the average effect of the error sources of single-qubit gates. The error sources we have considered are listed in Sec. III A.
We define the infidelity contribution of an error source by simulating its effect on an otherwise ideal system with only that single error source present, and evaluating the state-averaged fidelity from Eq. 29.
FIG. 4. The distributions of relative infidelity contributions of different error sources with typical experimental parameters. The relative error contribution of an error source is obtained by simulating the dynamics of a Gaussian DRAG pulse with σ = 4 ns and t_g = 16 ns with each error source individually, and then normalizing with respect to all of the other error contributions. Top row: Relative infidelity contributions of microwave π/2 rotations around the x- or y-axes, calculated by repeating the gate one, three or five times (columns). Bottom row: The error contributions of microwave π rotations. We consider the same parameters for both π and π/2 rotations, which are listed in Table I. The error sources which are not listed were found to have a negligible effect on the infidelity. The horizontal bars represent the 5th and 95th percentiles of each distribution.
Since the parameters of the qubit as well as the noise differ between chips, and also between qubits on the same chip, we must consider a large range of possible parameters. In order to do so, we sample the noise parameters from independent Gaussian distributions with realistic standard deviations and mean values obtained from the current literature. The range of parameters as well as the references considered are summarized in Table I, which is divided into three sets:

• The first set of parameters in Table I refers to the single-transmon Hamiltonian in Eq. 1; mainly the value of the anharmonicity α is responsible for the contribution of the leakage of the pulse to the infidelity. We mention that there are other errors associated with the DRAG pulse, such as the breakdown of the RWA as well as phase errors; however, in our simulations these are always observed to be much smaller than the leakage into the second excited state.
• The second set of parameters concerns the most important errors of the system. We note that while a finite temperature T_eff with both heating and decay processes is taken into account in the amplitude-damping simulations, the heating processes have a very small contribution to the infidelity of the gate, due to the relatively low temperature. We do not consider state preparation errors as part of the gate infidelity, in order to make a clear distinction between these effects. When evaluating the effect of T_1, T_φ and the amplitude error A, we consider an idealized two-level system where the rotating wave approximation (RWA) holds, while when considering the effect of the DRAG pulse or a miscalibration of the second quadrature β, a three-level system without the RWA is used. In the case of β, the obtained infidelity is subtracted from the infidelity of the full DRAG pulse with ideal parameters in order to isolate the effect of this miscalibration.
The pure dephasing due to Markovian white noise is characterized by the decay time T_φ. This timescale is much longer than the T_1 decay time, since most of the noise contributing to pure dephasing has a low-frequency 1/f-like spectrum and is simulated as a non-Markovian process.
The errors in the miscalibration of the pulse parameters from Eq. 3 are characterized by the values A and β in Eqs. 9 and 10. We mention that when using the calibration procedures presented in Ref. [64], the amplitude of the pulse is set for either a π or, equivalently, a π/2 rotation, and typically only one of the two rotations will exhibit a larger error in the amplitude. In order to cover both cases, we consider a smaller error than reported in Ref. [66], but comparable to the values from Ref. [67], for both gates.
We can see from Table I and Fig. 4 that even a relatively large error in the DRAG amplitude β does not result in a drastically decreased performance. This is of crucial importance when considering the effects of the drive non-linearity on a single gate. What is meant by drive non-linearity in this context is the fact that the shape of the pulse is slightly compressed (smaller than expected) at higher amplitudes. This effect means that the quadrature with the larger amplitude is flattened, and therefore, strictly speaking, the smaller quadrature is no longer a perfect derivative. However, since we are not especially sensitive to errors in the amplitude of the derivative, we are also not especially sensitive to the non-linearity of the drive.
• The last set of parameters was included in the simulation; however, their infidelity contributions were found to be much smaller than the other errors. The frequency detuning δω = ω − ω_d due to the drift of the qubit parameters is simply too small, unless we consider a qubit which has not been calibrated for a very long time. The magnitude of the effect on N gates of duration t_g can be estimated by the value (N δω t_g)².
What is more surprising is that the long-time-correlated 1/f noise has a negligible contribution, even though the associated decay times were assumed to be much shorter than the Markovian pure dephasing time T_φ. The reason for this is two-fold: firstly, the shape of the decay due to such noise is not exponential but rather closer to a Gaussian curve, and therefore it decays more slowly on shorter timescales [78]; secondly, the microwave driving of the transmon acts as dynamical decoupling, as was previously described in Ref. [79]. Fig. 4 shows the typical behaviour where a single-gate infidelity is usually T_1-limited; however, after a number of repetitions of the gate this might no longer be the case, as the quadratic scaling of coherent errors stemming from the amplitude miscalibration slowly overcomes the T_1 contribution in some cases. Additionally, we can see that even an optimized DRAG pulse still results in significant leakage for larger rotation angles. While one might be able to use standard techniques to obtain the single-gate infidelity corresponding to the first column of Fig. 4, this shows that a single number must be carefully interpreted and the characteristics of the noise better understood if we want to make statements about the performance of a larger sequence of gates or an algorithm [75].
We note here that in many cases, such as for example the qubit frequency ω from Eq. 1, the exact values depend on the design of each specific chip. If we are interested in a different set of parameters, the general rule that applies to all of the distributions plotted in Fig. 4 is that increasing the mean value of any error will shift the mean of the distribution towards a larger contribution, and increasing the uncertainty of the parameter will increase the width of the distribution. As long as one considers parameter guesses based on bell-shaped curves, the shapes of the distributions will not change significantly.
Two-Qubit Gates
A similar analysis can be performed for the two-qubit CZ gate. The parameters and uncertainties we consider are shown in Table II. In the case of the two-qubit gate, we always simulate the full system comprised of three transmons, for two reasons: firstly, the gate is diabatic and the population must leave the computational subspace, and secondly, constructing reduced models with smaller Hilbert spaces might require unrealistic approximations. Since the infidelity of the gate with optimized parameters can be comparable to the infidelity of certain error sources, the infidelity of the optimized pulse is subtracted from the infidelity obtained by adding an individual error source. This is supported by the results of Refs. [73, 81], where it was shown that in the case of weak noise, the infidelity contribution is independent of the unitary dynamics.
As an additional comment on the parameters presented in Table II, we have added the maximal frequency of the coupler, ω_C^max, which is relevant when computing the flux dispersion ∂ω_C/∂Φ during the flux pulse, as seen from Eq. 7. Otherwise, the coupler is idled at the point where there is no residual ZZ interaction between the computational states.
We present the distributions of the relative infidelity contributions for two-qubit gates in Fig. 5. One interesting result is that the infidelity contribution can actually be negative. This corresponds to the situation where the fidelity with the error source present is higher than that of the full optimized (but imperfect) unitary evolution, as is the case when considering flux distortions. Since the non-adiabatic implementation of the gate is based on the Rabi oscillation between the computational |11⟩ and non-computational |02⟩ states, there are two conditions for a high-fidelity gate: first, that all the population returns to the |11⟩ state after the gate operation, and second, that the population returns with the correct conditional phase. Furthermore, these two points have to coincide at a realistic gate duration, which is why we often make a trade-off between the highest achievable fidelity and the gate duration. Connecting this discussion to the effect of the flux tails: since the flux tail offsets the idling point of the coupler frequency, which accumulates an additional conditional phase, and also effectively shifts the amplitude of the pulse, it may compensate, e.g., for some error in the original conditional phase. For example, consider a CPHASE gate with a conditional phase of π − ε; in some cases the idling conditional phase due to the flux tails will compensate for the small offset ε, resulting in a better perceived gate performance on the scale of a couple of gates. If more gates are repeated, the effect of the flux distortions eventually outweighs any potential benefits, provided the correlation time τ_n is long enough.

FIG. 5. The distributions of relative infidelity contributions of different error sources with typical experimental parameters of a two-qubit non-adiabatic CZ gate. A 16 ns idling time is introduced after each application of the CZ gate in order to mimic the performance in an algorithm, since 16 ns is the duration of a single-qubit gate analyzed in this work. The relative error contribution of an error source is obtained by simulating the dynamics of a flattop-Gaussian pulse from Eq. 13 with σ = 5 ns and τ_b = 2√2 σ. Since the transmons in the system have different parameters for each realization, the time τ_c and amplitude A are numerically optimized for each individual instance, with τ_c typically close to 30 ns. The relative error contributions are obtained by simulating the full system with each error source individually and then normalizing the values, identically as in Fig. 4. The category Control represents the inherent control errors associated with the specified pulse shape, the finite time resolution of the electronics τ_c and the amplitude uncertainty A. The error sources considered and the corresponding parameter ranges are listed in Table II. The error sources which are not listed were found to have a negligible effect on the infidelity. The horizontal bars represent the 5th and 95th percentiles of each distribution.
We argue that in the two-qubit gate case it does not make sense to separate the effects of the small errors in the pulse amplitude A and duration τ_c, because most of the time the magnitude of these errors is smaller than the inherent error of the pulse shape considered here. Moreover, in an experimental setting, the finite precision of the electronics, which results in a finite τ_c resolution, means that even if better parameters do exist, they are not achievable with realistic equipment and the noise present in the experiment.
Similarly to Fig. 4, Fig. 5 shows the same behaviour, where the T_1-induced error is dominant on the scale of a single gate, but as more operations are performed, other errors start to contribute more. Because of the poor coupler coherence times, there is also a small contribution of the 1/f flux noise in the coupler, mainly due to the Gaussian shape of the decay. Moreover, the effect of the flux tails is observed to become even more dominant as more gates are performed and more distortion is accumulated.
D. Experimental reconstruction
In the previous section we have seen how the contributions of different error sources vary considerably depending on the values of a large number of parameters. We are now able to perform the budgeting procedure described in Sec. V D on both single- and two-qubit gates.
Quantifying the Performance
In order to quantify the performance of our regression, we use the coefficient of determination [82], often referred to as the R² score, defined as

$$R^2 = 1 - \frac{\sum_{i=1}^{n} \left(y_i^{\mathrm{true}} - y_i^{\mathrm{pred}}\right)^2}{\sum_{i=1}^{n} \left(y_i^{\mathrm{true}} - \bar{y}^{\mathrm{true}}\right)^2}, \tag{33}$$

where y_i^pred is the model prediction and y_i^true is the correct value. In our case, the values y_i refer to the infidelity contribution of a single error source to the infidelity of a single gate series, where the superscript indicates whether this is the value predicted by our model (pred) or the actual correct value (true). We can then calculate the R² score for each gate series and error source independently.
The sum runs over a testing set of size n, over which we define our performance. In our case, the testing set consists of simulated experiments together with their corresponding error budgets that were not used in the training step of the GPR. The average of the values in the testing set is denoted as ȳ^true = (1/n) Σ_i y_i^true. Such a definition has a clear interpretation in terms of the amount of variance in the sample that the model accounts for.

Footnotes to Table II: (a) The uncertainty of the amplitude is determined by the discretization step of the parameter sweep when calibrating the gate. Typically, the conditional phase is measured versus the amplitude of the pulse with a finite step in the flux amplitude, which controls the coupler frequency. (b) Additionally, we consider a sampling rate of 2.4 GHz for the electronics generating the flux pulse, corresponding to a minimal time step of 0.4 ns. (c) Due to the large spread in the parameter values of the flux pulse distortion observed in Ref. [9], we sample these values from a uniform rather than a Gaussian distribution. The entries in the "Std. deviation" column in this case represent half of the distance between the 25th and 75th percentiles. The maximal correlation time τ_n is limited by the duration of our simulations, since in a realistic setting τ_n on the order of ms has been measured [9].
The value can be interpreted as follows: R² = 1 is the perfect score, while R² = 0 means that our predictions are only as good as the outputs of a model that always predicts the mean value of the training sample, irrespective of the input. While some information (the mean and standard deviation of the training sample) can still be deduced if R² = 0, when R² < 0 the model is completely unreliable, with the lower bound depending on the testing set. In the latter case the performance is comparable to a random guess.
We have chosen this metric for a number of its favourable properties. First of all, since the score is normalized with respect to the variance, we will not overestimate the performance of our model when the variance is small. Imagine that an error source has a sizeable but constant contribution: unless we are more accurate than the variance of this error source, our R² score will be small. In this case, metrics such as the mean absolute or squared error might be low, thus falsely overestimating the model performance.
Secondly, since this metric is typically constrained between 0 and 1, it enables us to compare the performance even when the values are very different. For example, when comparing the infidelity contributions of a single gate and of 9 gates, the absolute values, and by extension the mean absolute or squared error values, are very different, while both R² scores are in a comparable range. Additionally, compared to other frequently used metrics, such as the mean absolute error, mean squared error, root mean squared error and mean absolute percentage error, the R² metric was shown to have fewer interpretability limitations, and it was also shown to be more truthful and informative than the symmetric mean absolute percentage error [83].
However, since we are interested in different error sources, we compute the R² score for each error source and determine the final performance with the variance-weighted R² score, defined as

$$R^2_{\mathrm{weighted}} = \frac{\sum_j \sigma_j^2\, R_j^2}{\sum_j \sigma_j^2}. \tag{34}$$

In other words, the R² score is calculated for each feature independently, since we use a separate GPR model for each feature. A weighted average, with the weights determined by the variance of each feature in the test sample, defined as σ² = (1/n) Σ_i (y_i^true − ȳ^true)², is then used to assess the performance over all of the error sources.
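Eqs. 33 and 34 translate directly into code; the sketch below matches the variance-weighted convention also found in common regression libraries.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination (Eq. 33) for one error source."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def variance_weighted_r2(Y_true, Y_pred):
    """Variance-weighted average of per-feature R^2 scores (Eq. 34).
    Columns of Y index the individual error-source contributions."""
    Y_true, Y_pred = np.asarray(Y_true), np.asarray(Y_pred)
    scores = np.array([r2_score(Y_true[:, j], Y_pred[:, j])
                       for j in range(Y_true.shape[1])])
    weights = Y_true.var(axis=0)       # sigma_j^2 in the test sample
    return np.sum(weights * scores) / np.sum(weights)
```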
Single Qubit Gates
We first test our devised method for error budgeting on single-qubit π and π/2 rotations. We make no distinction between X or Y rotations, or between positive and negative values of the rotation angle, since the error source contributions are expected to remain the same; in fact, the only difference between these operations is the phase of the harmonic drive of the pulse in Eq. 2.
We start by generating a large set of simulations of potential experiments with the parameters from Table I. Additionally, we consider a random finite temperature of the qubit, as described in Secs. III A 1 and IV A, with µ[T_eff] = 45 mK and a spread of σ[T_eff] = 5 mK [52]. For the single-qubit case, we consider additional random measurement errors P(0|1) = 2.5% ± 0.8% and, independently, P(1|0) = 1.0% ± 0.5%, as seen in Eq. 28. The uncertainty represents one standard deviation of the data. These numbers correspond roughly to the measurement infidelity already demonstrated in a large-scale device in Ref. [4]; significantly better readout performance was demonstrated in Ref. [84].
Each of the parameters is then sampled randomly from a Gaussian distribution and used to simulate a number of noisy experiments. These experiments are shown in Fig. 6(a). The main idea is to repeat the gate of interest a large number of times in order to amplify all the relevant noise processes. Different combinations of gates are used in order to increase the total amount of information, i.e. a combination of positive and negative rotations will cancel the effects of gate amplitude errors. Since this choice of experiments is most likely not optimal, we believe that there is still room for improvement in the future.

FIG. 6. (a) Circuits used as an input for the algorithm, where θ is equal to either π or π/2, depending on which gate we are interested in. We use all 24 = N_{G1} + N_{G1} × N_{G2} possible combinations of G1 and G2. In the data presented above, N = 25, and the initial state is the thermal ground state as described in Sec. IV A. (b) Performance of the budgeting on a test set of 40 π/2 rotation gates with different error parameters, quantified in terms of the R² score, weighted by the variance of each feature. Different shapes correspond to the accuracy of the error source contribution predictions to the averaged state infidelity of a series of gates. (c) Similarly as in (b), except for the π rotation. (d) Two input-output pairs of the testing set for the π/2 rotation, with the input on the left and the output on the right, for different examples. The input data is the probability of measuring the excited-state population, together with a finite measurement error and a finite number of shots, as described in Sec. IV C. We have only included the most dominant error sources from Fig. 4 in the outputs, and each of the error sources has 5 different columns, corresponding to the infidelity contribution to 1, 3, 5, 7 or 9 gates (in this order). The empty columns represent the predictions of the trained Gaussian Process Regressor model, together with the corresponding uncertainties, and the filled columns are the correct values. In an ideal scenario, the filled and empty columns would directly coincide. (e) Identically as in (d), except for the π rotation.
After the sample of approximately 500 simulations with the corresponding error source contributions is obtained, this data set is split into a large training set, containing approximately 90% of the samples, and a small testing set, containing the rest, which we use to test the performance of the predictions. Each input and output is then linearly rescaled to zero mean and unit standard deviation, before a separate Gaussian Process Regressor is trained to predict each single error source contribution, for each number of gates individually. The training is described in more detail in Appendix A.
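A minimal training pipeline consistent with this description, using scikit-learn's Gaussian process tools, could look as follows; the kernel choice here is our assumption, as the actual kernels are specified in Appendix A.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.preprocessing import StandardScaler

def train_budget_models(X, Y):
    """Train one GPR per error-source contribution (columns of Y),
    with inputs and outputs rescaled to zero mean and unit variance.

    X : circuit outcomes, shape (n_samples, n_features)
    Y : infidelity contributions, shape (n_samples, n_sources)
    """
    x_scaler = StandardScaler().fit(X)
    Xs = x_scaler.transform(X)
    models, y_scalers = [], []
    for j in range(Y.shape[1]):
        ys = StandardScaler().fit(Y[:, j:j + 1])
        gpr = GaussianProcessRegressor(
            kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3))
        gpr.fit(Xs, ys.transform(Y[:, j:j + 1]).ravel())
        models.append(gpr)
        y_scalers.append(ys)
    return x_scaler, y_scalers, models
```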
After the training, the performance of the GPR predictions on the test set is evaluated. The results, with two input-output pair examples, are shown in Fig. 6(b-e).
Looking specifically at Fig. 6(b) and (c), we can see that even low shot numbers are typically enough to obtain good performance. Estimating the time needed to acquire a single shot at approximately 0.5 ms, with 4000 shots the time needed to perform one circuit is about 2 s, or roughly a minute for all 24 circuits. Here we have implicitly assumed that the time needed to evaluate one shot is limited by the reset time of the system, which (in case no active reset has been implemented) is typically on the order of a couple of T_1 times. The reason for observing a relatively constant performance versus the number of shots is that the inaccuracy due to the measurement errors dominates the inaccuracy due to the finite number of shots.
Finally, the left panels of Fig. 6(d) show two typical sets of simulated input values, emulating experimental measurements of the excited-state probability for a π/2 rotation; Fig. 6(e) shows the same for a π rotation. In all cases, the right panels show the corresponding infidelity contributions for the main error sources, as computed directly from the simulated input (real values, displayed as filled columns), compared to the ones predicted with our method (predictions, displayed as empty columns), for different numbers of gates.
In summary, our results show how a number of circuit outcomes, with low shot numbers and realistic state preparation and readout errors, can be used to connect the observed data to a theoretical model of the dynamics, for unitary, non-unitary and non-trace-preserving errors. We are typically able to explain between 70% and 90% of the variance of the sample considered.
Two Qubit Gates
While the budgeting reconstruction algorithm for two-qubit gates follows more or less the same steps as in the single-qubit case, there are slight differences in the implementation. Most notably, the experiments used as the input for the algorithm, shown in Fig. 7(a), must also contain single-qubit gates, since the CZ gate is a diagonal operation, and applying it to a system in the ground state will not induce any excitations that are susceptible to the environmental effects. We therefore start by creating a superposition state in both qubits, after which we apply the entangling operation a number of times. Finally, a second layer of single-qubit gates is performed. This was inspired by the work in Ref. [24], where tomographically reconstructed density matrices were used to find the best non-Markovian description of the system. We perform simple linear inversion state tomography [76], since it is not crucial for our technique that the reconstructed density matrices are completely physical; however, in the case of worse readout parameters, improving the tomographic reconstruction step might be beneficial. Some examples of the reconstructed density matrices after applying the entangling operation a number of times are shown in Fig. 7(d) in the Pauli vector representation. The reason why the vectors look so different from each other, even though the fidelity of the gate is not particularly low, is the presence of single-qubit phases, which we have not compensated for.
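For reference, the Pauli-vector representation and the linear-inversion step can be sketched as below; the reconstructed matrix is Hermitian by construction but, as noted above, not guaranteed to be positive.

```python
import numpy as np

# Single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
PAULIS = [I2, X, Y, Z]
PAIRS = [(P, Q) for P in PAULIS for Q in PAULIS]   # 16 two-qubit products

def pauli_vector(rho):
    """Two-qubit Pauli vector [tr(rho P_i)] over all 16 Kronecker
    products of Pauli matrices, as plotted in Fig. 7(d)."""
    return np.real([np.trace(rho @ np.kron(P, Q)) for P, Q in PAIRS])

def linear_inversion(expectations):
    """Reconstruct rho from the 16 measured Pauli expectation values
    by linear inversion: rho = (1/4) sum_i <P_i> P_i."""
    rho = np.zeros((4, 4), dtype=complex)
    for e, (P, Q) in zip(expectations, PAIRS):
        rho += e * np.kron(P, Q) / 4.0
    return rho
```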
Observing the performance of the procedure for the two-qubit gate in Fig. 7(b,c) and the examples in Fig. 7(d) shows a reduced accuracy compared to the single-qubit case. We attribute this to the fact that for the larger system there is a much larger number of uncertain parameters, e.g. the couplings between the transmons, making it harder for any algorithm to characterize the system with a very limited amount of data. This is illustrated in Fig. 7(c), where the performance is additionally broken down into scores for each category. While both qubit T_1 times are modelled in the same way, since the frequency of the second qubit in this model is higher, only the second qubit's states hybridize strongly with the coupler states. The magnitude of this effect depends strongly on the coupling strength between qubit 2 and the coupler, as well as on the optimal pulse amplitude, making the whole dependence more complex. Additionally, this means that a part of the qubit 2 decay is a consequence of the amplitude damping of the coupler.
We have also demonstrated that we are able to capture some of the effects of control errors and flux tails, even though these error sources manifest themselves as unitary, non-unitary and non-TP errors. In order to distinguish between the effects within a single pulse, it is crucial to have access to the density matrix of the system at different times during the evolution.
The number of shots needed for the presented accuracy is very similar to the single-qubit case and remains relatively low, with the estimated experimental time on the scale of minutes.
Cost Comparison
Let us make a more detailed comparison of how the errors from Fig. 7 can be diagnosed without the GPR method outlined in this paper. Ignoring non-Markovian effects, methods such as GST [18] and variants of randomized benchmarking [19, 21] can be used to obtain more accurate benchmarks, however at a much higher experimental cost. To make the comparison fairer, we consider the error characterization procedures from Ref. [9] as a baseline, since there we can make the same assumptions about the error sources as in our GPR-based procedure.
As a reminder, the GPR procedure described in this paper, in the case of the two-qubit gate, requires three state tomography experiments, or 48 different circuits.

FIG. 7. Examples of reconstructing the error budgets of a non-adiabatic two-qubit CZ gate with realistic experimental parameters from Table II. (a) The circuits used as an input for the budget reconstruction. The initial state is the thermal ground state as described in Sec. IV A, and two single-qubit gates are used to initiate a superposition state in both qubits, so that all of the elements of the qubitized density matrix are non-zero. The CZ gate is then applied N times, before single-qubit gates are applied and simple linear inversion state tomography is performed. We use three circuits, with N = 3, N = 5 and N = 7 CZ gates. (b) Performance of the budgeting on a test set of 40 CZ gates with different parameters, quantified in terms of the R² score, weighted by the variance of each feature. Different shapes correspond to the accuracy of the error source contribution predictions to the averaged state infidelity of different numbers of gates. (c) The detailed performance of the budget reconstruction for each noise source described in Sec. III B. The height of the empty columns represents the relative weight of each feature, given by the variance of that contribution in the test sample, as described in Eq. 34. The filled columns represent how good the predictions are, i.e. a completely filled column indicates a perfect score. (d) Four input-output pairs of the testing set for the CZ gate. The input data plotted are the results of simple linear inversion state tomography on states prepared by running the circuits in (a). The density matrices are transformed into Pauli vectors with elements [ρ_Pauli]_i = tr{ρ P_i}, where P_i is the Kronecker product of two Pauli matrices indicated on the x-axis. The inputs also include readout errors. Similarly as in Fig. 6, each of the error sources has 3 different columns, corresponding to the infidelity contribution to 1, 3, or 5 CZ gates (in this order). The empty columns represent the predictions of the trained Gaussian Process Regressor model, together with the corresponding uncertainties, and the filled columns are the correct values. In an ideal scenario, the filled and empty columns would directly coincide.
The experiments required for a characterization of the same error sources with a comparable accuracy, but without utilizing the GPR method, are listed in Table III. To further elaborate on the data in Table III, we assume that a T_1 experiment with a modest accuracy consists of measuring ∼5 circuits, while a single two-qubit state tomography requires 16 distinct circuits, and the leakage contribution can be roughly estimated with a single circuit, provided f-state readout is available. For the Ramsey experiments needed to evaluate the flux-tail contribution, we can assume that at least ∼10 further circuits need to be measured, as an accurate coupler frequency characterization means that several oscillations must be seen in the experimental data. Further assuming that the coupler frequency after applying a flux pulse is measured at ∼5 different delay times in order to estimate the flux-tail magnitude, this brings the minimal total number of distinct circuits needed for the characterization to ∼75, which is approximately a factor of ∼1.5 more than the proposed GPR-based procedure. Additionally, the coupler transmon typically cannot be directly read out and does not have dedicated drive lines for implementing single-qubit gates. This means that if we want to probe the flux-tail effects on the coupler, an excitation must first be prepared in the computational transmon and then transferred to the coupler with a SWAP-like operation, which needs additional experiments to tune up. The same also applies to the coupler readout. We estimate that this additional calibration, assuming the computational transmon frequency is known from the calibration of single-qubit gates, requires ∼25 more data points. To see this, consider that to maximize the population transfer between the computational and coupler transmons, we must tune the coupler to the frequency of the qubit transmon and let the system interact for a time of approximately 1/g_{2c}, so that one Rabi cycle is performed. If the frequency of the coupler and the duration of the interaction are each swept over 5 distinct values, we arrive at ∼5 × 5 = 25.
Thus, together with the previous steps, this adds up to a total of ∼100 distinct circuits to run, resulting in a final advantage of more than a factor of ∼2 for the GPR-based method.
Not utilizing the GPR method also has additional drawbacks, namely:

1. Once the noise parameters are experimentally obtained, a subsequent simulation of the system with these parameters is needed to evaluate the actual infidelity contributions, and translating the measured uncertainties into fidelity bounds demands even more simulation time. This means that more time is needed to evaluate the error contributions, due to the time required to perform the simulations. The time associated with the classical simulations depends on the complexity of the system and the error sources, but in our case it can take up to a couple of minutes, meaning that it is comparable to the time needed to run the experiments on the quantum hardware.
2. The GPR-based procedure uses circuit outcomes as inputs, and no pulse-level control of the system is needed. Probing any error source affecting the coupler directly (flux tails, coupler 1/f noise, etc.) will always demand pulse-level access due to the aforementioned SWAP-like operation, while this is not necessary for the GPR method.
VI. CONCLUSIONS
In this work we have shown how different error sources in quantum gates may be pinpointed from experimental data using pulse-level simulations, and we have demonstrated the performance of the approach using detailed simulations of superconducting qubits. Unlike traditional system-agnostic quantum characterization techniques, we use the fact that we have a model of the dynamics available, which can be used to make educated predictions about the sources of the infidelity of the gate, with a small number of shots needed. We estimate that the proposal requires half as many quantum resources as a similar characterization procedure with the same accuracy, and it is more flexible: we have shown how errors resulting in leakage, non-Markovian dynamics and both coherent and incoherent errors can be estimated and, even more crucially, pinpointed in experiment, with the main bottleneck being the complexity of the noisy gate simulations.
We note that the accuracy of the predictions can be further improved by optimizing the design of input experiments beyond the ones considered in this work.
Because of its low experimental overhead and relative flexibility, the main benefits of our proposed reconstruction procedure will become evident in the error diagnosis of different gates in large-scale chips, for example before or after a calibration procedure has been performed. Furthermore, this method can also be extended to co-designed circuits containing computational elements beyond qubits, such as the ones proposed in Refs. [41, 85].
To illustrate the method, we have analyzed the error sources contributing to the infidelity of qubit operations in superconducting quantum architectures. To this aim, we have presented what is perhaps one of the most accurate and complete noise models available, including realistic descriptions of the environmental as well as control errors present in current devices. Namely, we consider a realistic non-Markovian description of the pure dephasing dynamics and the non-local effects of the T_1 decay due to the coupling to an environment of TLSs. We have additionally simulated the effects of control errors due to imperfections in the electronics, which can also in some cases exhibit non-Markovian behaviour, all while keeping the simulation tractable without the use of high-performance computing.
With the help of these accurate models, we have demonstrated that when considering the performance of more than a single gate, current state-of-the-art hardware is not necessarily limited by environmental effects such as T_1 decay; rather, control imperfections can be a significant source of error in an algorithm execution. This fact is often hidden when reporting gate fidelities as a performance metric, since the infidelity of a single gate can be dominated by incoherent errors, while the infidelity of a series of gates or an algorithm can be dominated by other error contributions, due to the different scaling of coherent errors. These findings warrant the use of coherent error suppression techniques such as randomized compiling [80], as well as more frequent calibration. Interestingly, we have also found that the non-Markovian nature of 1/f flux noise implies that we will often overestimate the effect of pure dephasing on the gate infidelity if the pure dephasing decay times obtained via Ramsey experiments are used in the estimation. We expect that these results on the average error contributions will improve the current modelling of whole algorithms.
Since it is well established that the main decoherence mechanism in transmon qubits is the coupling to a bath of two-level defects [45,69,88], we consider that each transmon is coupled to a two-level system (TLS) bath, with a Hamiltonian of the form
$$\hat{H}_{\mathrm{TLS}} = \sum_{i\in\{Q_1,C,Q_2\}} \sum_{k} \frac{E_{ik}}{2}\,\hat{\sigma}^{z}_{ik}. \tag{B1}$$
Here we are assuming that the TLSs are localized to each transmon, which is justified by the fact that they reside in the amorphous oxide layers of the various interfaces of each individual circuit element. Additionally, we neglect the phononic bath of the TLSs, i.e. the intrinsic decoherence of the TLSs themselves, and we omit the TLSs which have an additional term proportional to $\hat{\sigma}^{x}_{ik}$, as was done in Ref. [88], focusing on the resonant TLSs, which contribute more to the decay. It is also worth mentioning here that the magnitude of the asymmetry in the standard tunnelling model is assumed to be distributed according to a $1/x$-type probability distribution, meaning that very small asymmetry energies are much more likely.
Since each TLS interacts with the transmon via electric fields, the interaction Hamiltonian of the system in the rotating-wave approximation is given by
$$\hat{H}_{\mathrm{int}} = \sum_{i\in\{Q_1,C,Q_2\}} \sum_{k} \chi_{ik}\left(\hat{a}_i^{\dagger}\,\hat{\sigma}^{-}_{ik} + \hat{a}_i\,\hat{\sigma}^{+}_{ik}\right). \tag{B2}$$
The form of the coupling in Eq. B2 is constructed so as to account for the observed excitation exchange between the transmon and a bath TLS [29,88]. The coupling strength $\chi_{ik}$ was derived in Refs. [45,88]. The constant $\delta_k n_g$ reflects the effect of the TLS electric dipole on the applied voltage on the superconducting island, while $\omega_i$ is the transmon frequency and $E_{Ci}$ the charging energy of the same transmon. While this means that the coupling constant depends on the frequency of the uncoupled transmon, $\chi_{ik}$ is not a function of the excitation energy of each individual TLS, $E_{ik}$. We define the correlation function of the bath as
$$C_{ik\pm,\,i'k'\pm}(t) = \chi_{ik}\,\chi_{i'k'}\;\mathrm{tr}\!\left\{ e^{-i\hat{H}_{\mathrm{bath}}t}\,\hat{\sigma}^{\pm}_{ik}\, e^{i\hat{H}_{\mathrm{bath}}t}\,\hat{\sigma}^{\pm}_{i'k'}\,\hat{\rho}_B \right\}. \tag{B4}$$
The bath density matrix is denoted by $\hat{\rho}_B$ and is assumed to be in a thermal state, $\hat{\rho}_B \propto \exp(-\beta\hat{H}_{\mathrm{TLS}})$. Further assuming a diagonalized bath Hamiltonian of the form presented in Eq. B1, the correlation function is non-zero only if $i=i'$ and $k=k'$, and only for the operator combinations preserving the mode occupation number, meaning that only $C_{ik+,ik-}$ and $C_{ik-,ik+}$ are possible. Focusing on the non-unitary dynamics, the rates $\Gamma^{ab}_1$ follow as given in Eq. B5. It can also easily be verified that this definition of the rates $\Gamma^{ab}_1$ and of the jump operators reduces, in the uncoupled case when all $g_{ij}=0$, to the simpler local Lindblad equation from the single-qubit case in Eq. 6.
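As an illustration of the model structure in Eqs. B1 and B2, the following sketch builds the bath and interaction Hamiltonians for one transmon mode coupled to two resonant TLSs. It assumes the QuTiP library, and every numerical value (frequency, TLS energies, couplings, truncation) is an illustrative placeholder rather than a parameter from the paper.

```python
import numpy as np
from qutip import destroy, qeye, sigmaz, sigmam, tensor

# One transmon mode (truncated to 3 levels) coupled to two resonant TLSs
# via the excitation-exchange (RWA) coupling of Eq. B2.
d = 3                                      # transmon truncation
wq = 2 * np.pi * 4.0                       # transmon frequency (GHz, angular)
E = 2 * np.pi * np.array([3.98, 4.03])     # TLS energies near resonance
chi = 2 * np.pi * np.array([1e-3, 2e-3])   # coupling strengths

a = tensor(destroy(d), qeye(2), qeye(2))
sm = [tensor(qeye(d), sigmam(), qeye(2)),
      tensor(qeye(d), qeye(2), sigmam())]
sz = [tensor(qeye(d), sigmaz(), qeye(2)),
      tensor(qeye(d), qeye(2), sigmaz())]

H_tls = sum(Ek / 2 * szk for Ek, szk in zip(E, sz))   # Eq. B1 (diagonal bath)
H_int = sum(ck * (a.dag() * smk + a * smk.dag())      # Eq. B2 (RWA exchange)
            for ck, smk in zip(chi, sm))
H = wq * a.dag() * a + H_tls + H_int
print(H.dims)
```

A full description would of course require hundreds of such TLSs per circuit element, which is precisely why the continuous-bath approximation introduced below is useful.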
The function $\gamma_{ip}(\Delta\varepsilon)$ determines the decay rate of an eigenstate transition depending on its frequency, and can therefore be probed, in the uncoupled limit, by measuring the $T_1$ time at different frequencies of the qubit, as was done in Ref. [44]. In our TLS bath model, this function is given by Eq. B7, in which the delta function $\delta(\cdot)$ is used to simplify the notation. As an approximation, we adopt the continuous bath approximation from now on, so that $\sum_{ik} \rightarrow \int \mathrm{d}E_i\,\mathrm{d}(\delta_i n_g)\, P(E_i, \delta_i n_g)$, where $P(E_i)$ is the continuous probability distribution of the TLS energy, which is typically assumed to be uniform [69] and independent of the coupling parameter $\delta n_g$ in the standard tunnelling model, i.e. $P(E_i, \delta_i n_g) = P(E_i)P(\delta_i n_g)$. In reality, the continuous approximation is not needed, since the resonance with an individual TLS described by the delta function in Eq. B7 is broadened by the intrinsic decoherence of the TLS [89]. Additionally, since the parameters of the TLS environments are random [69], a complete description of the TLS bath would imply knowing the parameters of hundreds of TLSs.
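As a numerical illustration of this continuous-bath picture, the sketch below draws TLS energies from a uniform distribution and builds a decay-rate curve by replacing the resonance delta function with a narrow Lorentzian, mimicking the intrinsic TLS broadening mentioned above. The distribution ranges, the couplings, and the broadening width are assumptions made for the sketch, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n_tls = 300
E_k = rng.uniform(3.5, 4.5, n_tls)       # TLS energies (GHz), uniform P(E)
chi2_k = rng.exponential(1e-6, n_tls)    # squared couplings (arbitrary units)
kappa = 2e-3                             # Lorentzian width replacing delta(.)

def gamma(d_eps):
    """Decay rate at transition frequency d_eps: sum of broadened resonances."""
    lorentz = (kappa / np.pi) / ((d_eps - E_k) ** 2 + kappa ** 2)
    return np.sum(chi2_k * lorentz)

for f in np.linspace(3.6, 4.4, 9):
    print(f"gamma({f:.1f} GHz) = {gamma(f):.3e}")
```

The strongly fluctuating output reflects the random TLS parameters: probing $T_1$ at different qubit frequencies samples different resonances, as in the frequency-resolved measurements of Ref. [44].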
While we have presented a form of the transmon-TLS coupling in Eq. B2, we still need to connect the above model to an experimental observation. This is done by assuming that the system is completely uncoupled when the $T_1$ time is measured, which results in $\overline{\chi^2_i} = \Gamma^i_1$, with the constant $\Gamma^i_1 = 1/T^i_1$ of the corresponding transmon $i\in\{Q_1, C, Q_2\}$ measured in the uncoupled regime, where the hybridization between the states can be neglected. We have defined the averaged coupling strength as $\overline{\chi^2_i} = \int \mathrm{d}(\delta_i n_g)\, \chi^2_i(\delta_i n_g)\, P(\delta_i n_g)$; therefore, the actual distribution of the coupling strengths of the TLSs is irrelevant, as long as the coupling strength is independent of the TLS energy $E_{ik}$.
The general form of the rates in Eq. 15 is obtained by using the above result in Eq. B5. The resulting expression approximately models the decay rates between different eigenstate transitions during the operation of a two-qubit gate. The many-body effects are captured by the matrix elements of the operators $\hat{a}^{(\dagger)}$, and the transition-energy dependence is a result of the physical TLS model. The exact nature of the bath (spin, bosonic, or fermionic) becomes relevant in the regime where the transition energy becomes comparable to the temperature, $|\varepsilon_a - \varepsilon_b| \sim 1/\beta$. In our simulations, this is a common occurrence for most transitions within a fixed excitation subspace in the near-resonant regime during the operation of the gate, as the frequency corresponding to a typical cryostat temperature of 20 mK is approximately 0.5 GHz.
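The quoted thermal frequency scale is easy to verify: $k_B T/h$ at $T = 20$ mK gives roughly 0.4 GHz. A short check using SciPy's physical constants:

```python
from scipy.constants import k as k_B, h

T = 20e-3                    # cryostat temperature (K)
f_thermal = k_B * T / h      # thermal frequency scale k_B * T / h
print(f"{f_thermal / 1e9:.2f} GHz")   # prints ~0.42 GHz
```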
FIG. 1. (a) A standard implementation of single-qubit rotations around the x or y axes, based on applying two separate out-of-phase microwave pulses, which can also excite population outside of the computational subspace in weakly non-linear ($|\alpha| \ll \omega$) superconducting qubits. Additionally, incoherent decay from the excited states and pure dephasing are represented with red arrows. The uncertainty of the energy levels represents the effect of $1/f$-type flux noise and long-timescale drifts. (b) The energy-level diagram of a tunable-coupler-based CZ gate implementation. Because of the near resonance between the coupler and qubit excited states, a global (i.e. non-local) Lindblad equation is needed to describe the incoherent dynamics (red arrows). All three parts of the system are coupled, as denoted by the double-sided arrows beneath. (c) Scheme to extract information on the infidelity contributions of different error sources. Supervised learning techniques are used to interpolate between simulation results for different parameters and an experimental result. The setup requires considering the same gate $N$ times, so that errors are sufficiently amplified to facilitate the discernment of their different contributions. Taking into account state preparation and measurement errors, we are then able to estimate the magnitude of each error source and the corresponding uncertainties.
FIG. 3. A schematic representation of Eq. 26, showing how the initial density matrix is successively multiplied by the propagators of each noise source. Not shown in the figure is the fact that, due to the non-Markovian component, the result has to be averaged over a large number of realizations of $\hat{U}_N$.
3. Multiply the propagators $\hat{U}_G$ and $\hat{U}_D$, corresponding to $N_{\Delta t}$ multiplications of $D^2 \times D^2$ matrices.
4. Diagonalize the Hamiltonian of the system at each time step, corresponding to $N_{\Delta t}$ diagonalizations of a $D \times D$ matrix. (A cost sketch for these two steps is given below.)
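To make the relative cost of these two steps concrete, the sketch below times $N_{\Delta t}$ superoperator multiplications ($D^2 \times D^2$) against $N_{\Delta t}$ Hamiltonian diagonalizations ($D \times D$); both the dimension and the step count are hypothetical placeholders.

```python
import time
import numpy as np

D = 27         # illustrative Hilbert-space dimension (e.g. three 3-level transmons)
n_steps = 50   # illustrative number of time steps N_dt
rng = np.random.default_rng(2)

# Step 3: chain of superoperator propagators, each of size D^2 x D^2.
U = np.eye(D**2, dtype=complex)
prop = rng.standard_normal((D**2, D**2)) + 1j * rng.standard_normal((D**2, D**2))
t0 = time.perf_counter()
for _ in range(n_steps):
    U = prop @ U
t_mult = time.perf_counter() - t0

# Step 4: diagonalization of a Hermitian D x D Hamiltonian at each time step.
H = rng.standard_normal((D, D))
H = H + H.T
t0 = time.perf_counter()
for _ in range(n_steps):
    np.linalg.eigh(H)
t_diag = time.perf_counter() - t0

print(f"superoperator products: {t_mult:.3f} s, diagonalizations: {t_diag:.3f} s")
```

The $D^2 \times D^2$ products dominate for realistic dimensions, consistent with the statement that the noisy gate simulations are the main bottleneck of the procedure.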
FIG. 6. Examples of reconstructing the error budgets of single-qubit gates with realistic experimental parameters. (a) Circuits used as input for the algorithm, where $\theta$ is equal to either $\pi$ or $\pi/2$, depending on which gate we are interested in. We use all $24 = N_{G_1} + N_{G_1} \times N_{G_2}$ possible combinations of $G_1$ and $G_2$. In the data presented above, $N = 25$, and the initial state is the thermal ground state as described in Sec. IV A. (b) Performance of the budgeting on a test set of 40 $\pi/2$ rotation gates with different error parameters, quantified in terms of the $R^2$ score, weighted by the variance of each feature. Different shapes correspond to the accuracy of the error-source contribution predictions to the averaged state infidelity of a series of gates. (c) As in (b), except for the $\pi$ rotation. (d) Two input-output pairs of the testing set for the $\pi/2$ rotation, with the input on the left and the output on the right, for different examples. The input data is the probability of measuring the excited-state population, together with a finite measurement error and a finite number of shots, as described in Sec. IV C. We have only included the most dominant error sources from Fig. 4 in the outputs, and each of the error sources has 5 different columns, corresponding to the infidelity contribution to 1, 3, 5, 7 or 9 gates (in this order). The empty columns represent the predictions of the trained Gaussian Process Regressor model, together with the corresponding uncertainties, and the filled columns are the correct values. In an ideal scenario, the filled and empty columns would directly coincide. (e) As in (d), except for the $\pi$ rotation.
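The variance-weighted $R^2$ score used in panels (b) and (c) corresponds to scikit-learn's `multioutput='variance_weighted'` option; a minimal sketch on hypothetical prediction arrays:

```python
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)

# Hypothetical test data: 40 gates x 5 error-source contributions.
y_true = rng.uniform(0, 1e-3, size=(40, 5))
y_pred = y_true + rng.normal(0, 5e-5, size=(40, 5))

# Each output (error source) is weighted by its variance in the test set,
# so high-variance contributions dominate the aggregate score.
score = r2_score(y_true, y_pred, multioutput="variance_weighted")
print(f"variance-weighted R^2 = {score:.3f}")
```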
FIG. 7. Examples of reconstructing the error budgets of a non-adiabatic two-qubit CZ gate with realistic experimental parameters from Table II. (a) The circuits used as input for the budget reconstruction. The initial state is the thermal ground state as described in Sec. IV A, and two single-qubit gates are used to prepare a superposition state in both qubits, so that all of the elements of the qubitized density matrix are non-zero. The CZ gate is then applied $N$ times before single-qubit gates are applied, and simple linear-inversion state tomography is performed. We use three circuits, with $N = 3$, $N = 5$ and $N = 7$ CZ gates. (b) Performance of the budgeting on a test set of 40 CZ gates with different parameters, quantified in terms of the $R^2$ score, weighted by the variance of each feature. Different shapes correspond to the accuracy of the error-source contribution predictions to the averaged state infidelity of different numbers of gates. (c) The detailed performance of the budget reconstruction for each noise source described in Sec. III B. The height of the empty columns represents the relative weight of each feature, given by the variance of that contribution in the test sample, as described in Eq. 34. The filled columns represent how good the predictions are, i.e. a completely filled column indicates a perfect score. (d) Four input-output pairs of the testing set for the CZ gate. The input data plotted are the results of simple linear-inversion state tomography on states prepared by running the circuits in (a). The density matrices are transformed into Pauli vectors with elements $[\vec{\rho}_{\mathrm{Pauli}}]_i = \mathrm{tr}\{\rho P_i\}$, where $P_i$ is the Kronecker product of two Pauli matrices indicated on the x-axis. The inputs also include readout errors. Similarly to Fig. 6, each of the error sources has 3 different columns, corresponding to the infidelity contribution to 1, 3, or 5 CZ gates (in this order). The empty columns represent the predictions of the trained Gaussian Process Regressor model, together with the corresponding uncertainties, and the filled columns are the correct values. In an ideal scenario, the filled and empty columns would directly coincide.
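The Pauli-vector transformation used in panel (d) is straightforward to reproduce: for a two-qubit density matrix $\rho$, each component is $\mathrm{tr}\{\rho P_i\}$ with $P_i$ a Kronecker product of single-qubit Paulis. A minimal NumPy sketch (the example state is arbitrary, chosen only for illustration):

```python
import numpy as np
from itertools import product

paulis = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

# Arbitrary two-qubit state: the |Phi+> Bell state as an example.
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# [rho_Pauli]_i = tr(rho P_i), with P_i = kron(sigma_a, sigma_b).
for (la, A), (lb, B) in product(paulis.items(), repeat=2):
    val = np.trace(rho @ np.kron(A, B)).real
    if abs(val) > 1e-12:
        print(f"{la}{lb}: {val:+.3f}")
```

For the Bell state this prints the four nonzero components (II, XX, YY, ZZ), which is exactly the kind of Pauli vector plotted as tomography input in panel (d).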
TABLE I. Parameters used to generate the plots in Fig. 4. These values are typical of state-of-the-art devices.
TABLE II. Parameters used to generate the plots in Fig. 5. These values are typical of state-of-the-art devices.
TABLE III. The characterization experiments performed in Ref. [9] that would be needed to obtain results similar to Fig. 7, with the same assumptions about the noise. | 2023-05-17T01:16:09.645Z | 2023-05-15T00:00:00.000 | {
"year": 2023,
"sha1": "4704da7edf1b1854ee714fd59849d61eed858cbc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4704da7edf1b1854ee714fd59849d61eed858cbc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
10905546 | pes2o/s2orc | v3-fos-license | Severe Hypothyroidism Causing Pre-Eclampsia-Like Syndrome
Objective. To analyze and manage a pre-eclampsia-like syndrome due to severe hypothyroidism. Methods. Presentation of a case of severe hypothyroidism due to Hashimoto's thyroiditis, associated with a severe early-onset pre-eclampsia-like syndrome, managed in our Gynecology Department. Results. Severe pre-eclampsia led to miscarriage at 24 weeks of gestational age in a 42-year-old woman, although we attempted to correct the hypothyroidism with increasing doses of levothyroxine and liothyronine sodium. Conclusion. Distinguishing a pre-eclampsia-like syndrome caused by overt hypothyroidism from other forms of pregnancy-induced hypertension is essential for choosing the correct treatment.
Introduction
Overt hypothyroidism (low free thyroid hormones, elevated thyroid-stimulating hormone) has an incidence of 0.3-0.5% in pregnancy. This condition may cause severe obstetric complications, such as a pre-eclampsia-like syndrome, as discussed elsewhere [1,2]. Yet, distinguishing it from other forms of pregnancy-induced hypertension remains a diagnostic challenge.
Case Presentation
A 42-year-old woman, 17 weeks pregnant, with a history of Hashimoto's thyroiditis treated with levothyroxine 150 µg/day, and a homozygous mutation of methylenetetrahydrofolate reductase (MTHFR) treated with aspirin 50 mg twice a day, folic acid, and vitamins, was referred to our hospital because of the onset of severe early-onset preeclampsia, characterized by high blood pressure (180/108 mmHg), proteinuria, and headache. Nifedipine 1 g three times a day was started.
Her past obstetric history included a vaginal delivery of a healthy baby and three miscarriages before the 12th week of gestational age (GA).
Due to the presence of high TSH (14.9 mU/L) and low levels of free triiodothyronine and free thyroxine (1.6 pg/mL and 0.65 ng/dL, respectively), the levothyroxine dose was increased from 150 µg to 175 µg per day. Ultrasound examination revealed high resistance in the uterine arteries' Doppler waveforms (average RI: 0.70).
A 24-hour urine collection revealed a protein amount of 1.575 g, which increased to 6.8 g within a week. Renal and adrenal ultrasonography was negative. Screening tests for glomerular-based diseases were all negative, and a complete screening for other autoimmune diseases was also negative. Due to the persistently high blood pressure, alpha-methyldopa was added to the therapy.
TSH and proteinuria reached 34.5 mU/L and 9.8 g/24 h, respectively, and the levels of free triiodothyronine and free thyroxine continued to decrease (1.4 pg/mL and 0.50 ng/dL). Atenolol 0.5 g/day was added to the therapy. An ultrasound Doppler examination performed at 19 + 3 weeks of GA showed increased resistance in the uterine arteries (average RI: 0.77) with bilateral notches; the umbilical artery Doppler waveform presented absent diastolic flow.
Abdominal ascites and bilateral pleural effusion appeared at 20 + 4 weeks of GA, so albumin and diuretics were administered.
Persistent high blood pressure required the introduction of labetalol 25 mg three times per day; high TSH values led to an increase in levothyroxine administration (275 µg/day), and proteinuria reached a peak of 13.11 g/24 h. To reduce the fast-rising TSH levels, liothyronine sodium was added to the therapy (20 µg twice a day). Furthermore, the woman started to manifest oliguria, treated with fluid infusion and plasma transfusion. Nitrates were added to the therapy. In the following days, proteinuria started to decrease, reaching lower levels (3.1 g/24 h).
At 23 + 6 weeks of GA, ultrasound evaluation showed a rapid deterioration of the fetal condition, compatible with fetal acidosis, which resulted in the death of the fetus a few hours later. Immediately after the miscarriage, hydralazine (8 and 6 mL) and intravenous magnesium sulphate were administered. In the evening, hydralazine administration was suspended and hypertension was controlled with nifedipine (1 g twice a day).
After 3 days, the therapy was adjusted to 300 µg/day levothyroxine and labetalol, and liothyronine was stopped. Finally, hypertension and hypothyroidism appeared to be well controlled, and the patient was discharged nine days after the miscarriage.
Discussion
The association of hypothyroidism and preeclampsia is not surprising, hypothyroidism being an accepted cause of reversible hypertension in both the pregnant and the non-pregnant population, as discussed elsewhere [3,4]. Hypothyroidism can cause vascular smooth muscle contraction in both systemic and renal vessels, which leads to increased diastolic hypertension, increased peripheral vascular resistance, and decreased tissue perfusion [1,4]. Thyroid dysfunction can be associated with proteinuria, which is known [5] to result in increased excretion of thyroxine and thyroid-binding globulins. Rare cases have been reported [6,7] in which proteinuria is severe enough to result in losses of thyroid-binding globulins and thyroxine that cannot be compensated by the body.
Given the very early onset of hypertension and proteinuria (at 17 weeks of GA), the concurrent rise in TSH and blood pressure, the absence of other possible causes of preeclampsia, and the known correlation between hypothyroidism and hypertension (described above), we suspected a preeclampsia-like syndrome caused by the hypothyroidism. This hypothesis was further supported by the fact that the level of proteinuria began to decline with the normalization of the TSH level, before the cessation of the pregnancy itself.

In order to treat hypothyroidism-related preeclampsia-like syndrome, it is important to achieve a euthyroid state (defined by normal TSH levels) [8], if necessary by employing larger-than-conventional doses of levothyroxine integrated with liothyronine sodium, especially when proteinuria is a complicating factor, as demonstrated by the case we have presented. | 2016-05-04T20:20:58.661Z | 2012-05-29T00:00:00.000 | {
In order to treat hypothyroidism-related preeclampsialike syndrome, it is important to achieve a euthyroid state (defined by normal TSH levels) [8], if necessary by employing larger than conventional doses of levothyroxine integrated with liothyronine sodium, especially when proteinuria is a complicating factor, as demonstrated by the case we have presented. | 2016-05-04T20:20:58.661Z | 2012-05-29T00:00:00.000 | {
"year": 2012,
"sha1": "d513c5e974564bed0b736b23685a5d12bb617ae7",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/crie/2012/586056.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1ae5503dd068bb28b7e05ec6cc630099837a64e4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234080972 | pes2o/s2orc | v3-fos-license | Interactions Between Food, Feeding and Diets in Crustaceans: A Review
Several publications cover different aspects of crustacean food, feeding, and behavior, but little is known about the interactions between them, and the existing work often presents differing perspectives. A better understanding of the interactions between food, feeding, and diets in crustaceans is therefore vital for improving the quality of seed and broodstock produced in hatcheries and their adaptation to the aquaculture environment and system. The aim of the present review is to update the state of the art and to make explicit the current knowledge regarding food, feeding, and diets in crustaceans, as well as the challenges and opportunities in the development of formulated diets.
Introduction
Understanding the interactions between food types, feeding behavior, and formulated diets in crustaceans is important for improving seed production and for enhancing fundamental knowledge of the cultured animals [1]. The development of pellet diets, or aquafeed, for aquaculture species has gained much interest, as pellets offer many advantages over natural feeds. In terms of nutrient content, artificial feed offers a nutritionally balanced diet with known levels of nutrients such as total lipid and protein, which promote growth and reproduction in crustaceans. By manipulating the levels of protein and lipid important for growth and reproduction, formulated feed can provide broodstock with sufficient nutrition. Initially, no commercial broodstock diet was available for some crustacean species, and most formulated feeds have focused on the larval stages. There are few published studies on formulating diets specifically for crustacean broodstock, probably because of the unique habitat characteristics of these animals, which respond differently to different environmental conditions [2]. Several species are restricted to certain environments and aquaculture systems, which affects the type of feed selected [3][4][5][6].
The growing demand for animal protein, and the competition between direct human consumption and aquaculture feed, have coincided with decreasing fish landings, even though fish are the principal dietary source of n-3 PUFA. This adds to the existing gap between demand and supply for fish and fish products. In response, many studies have adjusted feed formulations to depend less on fish as the source of protein and lipid. As replacements for fish oil and fishmeal, terrestrial animal proteins such as meat meal, poultry by-product meal, bone meal, and blood meal have proven effective in providing animals with a good protein level [7].
Besides that, protein sources from plant-based materials are receiving increasing attention. Plant proteins such as camelina meal, canola meal, and soybean meal can be used as substitutes for fishmeal without negative effects on growth and feed intake [8]. However, the major drawbacks of protein sources of terrestrial animal and plant origin include the lack of attractants and poor palatability [9]. Compared to aquatic ingredients such as fishmeal, shrimp meal, and squid meal, the lack of attractant components may result in poor ingestion of feeds, reducing the rate of feed intake and consequently retarding growth [10].
In general, the physical form of the pellet depends on the species being cultured. High moisture content in pellets is often associated with nutrient leaching, since such pellets dissociate easily upon entering the water. Consequently, the low stability and durability resulting from high moisture content may not be suitable for crustaceans, some species of which handle food aggressively [11]. In addition, proper storage and handling of the final product are difficult to manage, especially for wet pellets: because of their high moisture content, spoilage problems such as mold growth during long storage periods are unavoidable.
In this review, we attempt to identify the interactions between food types, feeding behavior, and formulated diets in crustaceans by summarizing all the available information on the topic, using the title of the review as keywords in the Web of Science Core Collection database. Table 1 summarizes crustacean feed types together with pellet and animal performance. Most feeding studies on crustaceans have been carried out at the juvenile stage, especially for shrimp, crayfish, and crabs, whereas for lobsters and prawns (and some crabs) most studies have focused on the adult or broodstock stages (Table 2).

Several types of broodstock feed are used in hatcheries for commercial purposes: wet, dry, semi-moist, and moist feeds. These feeds are differentiated by moisture content, with levels of 45-70% for wet feeds, 7-13% for dry feeds, 25-45% for moist feeds, and 15-25% for semi-moist feeds [12]. At the same time, the water activity (aW) of a pellet determines its protection against bacterial growth, with lower aW being preferable. Water activity differs from moisture content: aW is defined as the ratio of the vapor pressure of the food, in undisturbed equilibrium with the surrounding air, to the vapor pressure of distilled water under identical conditions. In most cases, pellets with aW below 0.79 inhibit the growth of yeast, whereas aW below 0.65 successfully stops mold growth [13]. Wet, moist, and semi-moist diets are more effective in promoting good growth and feed efficiency owing to their soft texture and palatability. In this review, only two basic types are considered for intensive farming: dry feeds and moist feeds (semi-moist feeds are included in the latter category) [14].
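As a worked example of the water-activity definition above (the vapor-pressure values are hypothetical, chosen only to illustrate the ratio, with 23.8 mmHg being roughly the saturation vapor pressure of pure water at 25°C):

$$ a_W = \frac{p_{\text{feed}}}{p_{\text{water}}} = \frac{18.5\ \text{mmHg}}{23.8\ \text{mmHg}} \approx 0.78 $$

A pellet with this hypothetical aW of about 0.78 would fall just below the 0.79 threshold cited above for yeast inhibition, yet remain well above the 0.65 level required to stop mold growth.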
Dry pellet
Dry pellets are used in a variety of forms: dry sinking pellets, extruded sinking pellets, and extruded floating pellets. Suitable selection of feed ingredients, together with proper manufacturing procedures such as extrusion, ensures the good water stability that is the main criterion for a good feed. The extrusion method differs from steam pelleting in that the extruder does not use any pellet binder to add adhesion to the particles. Extruded pellets are more brittle, expanding only through the gelatinization of starch upon cooking [51]. During the gelatinization process, the starch becomes activated and absorbs large volumes of water. Tuber starches such as potato and tapioca are popular binding agents because of their high amylose content [52].
Overall, dry sinking pellets are more practical for bottom feeders such as crustaceans, particularly prawns, lobsters, and crabs. Necessary steps in the formulation of water-stable dry pellets include the use of good binding agents and finely ground ingredients, to ensure maximum adhesion of the binder molecules. By contrast, extruded floating pellets are more suitable for fish species that predominantly feed in the water column, such as tilapia, trout, grouper, sea bass, and carp. The use of floating pellets allows observation of feeding activity as well as of the fish's well-being [53].
Moist pellet
Moist or wet pellets consist of a combination of high-moisture ingredients and dry pulverized ingredients. The use of moist feed for broodstock maturation is widely accepted among aquaculture practitioners [36]. Despite this acceptance in hatcheries, no moist feed has been commercialized to date. Because of their high moisture content, moist pellets have low water stability and are prone to mold problems. Meanwhile, semi-moist pellets have been successfully developed at laboratory scale; compared to moist pellets, their moisture content is kept below the permissible level, with chemical agents added to prevent yeast and mold growth.
Palatability and attractability
Feed intake is optimized by good physical attributes of the pellet, which include its palatability and acceptability to the animals, taking into account species behavior and physiological requirements [32]. Priority is given to ensuring that the nutrients reach the animal with minimum leaching. An absence of attractant and palatability features in pellets results in declining feed consumption and hence in poor growth, for example in crabs. Palatability and attractability of the feed are thus necessary for good ingestion and utilization of the prepared nutrients. Palatability is defined as the acceptance of the food by the animals, resulting in an increase in body weight, whereas attractability involves the animal's orientation towards one of two feeds that have been offered [20].
Both palatability and attractability are primary factors in the development of cost-effective feeds, since animals use keen senses of smell, taste, and sight to search for food, and both features ensure higher feeding rates. Diets of low palatability and attractability will prevent crabs from meeting their optimum nutritional requirements. Good palatability is indicated by high feed intake [54] and a low feed conversion ratio (FCR), an indicator of the efficiency of the feed or feeding strategy [55]. Good pellet attributes, such as a strong smell and a good binding agent, help the crabs find the pellet and reduce the risk of nutrient loss through leaching. Insufficient levels of attractant factors can result in low feed intake, which eventually leads to poor growth.
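As a worked example of the feed conversion ratio mentioned above (the quantities are hypothetical):

$$ \text{FCR} = \frac{\text{feed offered (dry weight)}}{\text{wet weight gain}} = \frac{1.8\ \text{kg}}{1.0\ \text{kg}} = 1.8 $$

A lower FCR means less feed is needed per unit of weight gain, so of two diets producing the same growth, the one with the lower FCR is the more efficient.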
Type of binder
Aquatic feed formulation involves good-quality binding agents as primary ingredients that help stabilize the feed during exposure to water while enhancing its flotation time [56]. A wide range of binders has been used to formulate high-durability pellets that resist nutrient leaching, by adding cohesion to the particles and reducing void spaces; these include agar, starch, gelatine, carrageenan, and carboxymethylcellulose (CMC). Good binder selection, at the correct inclusion level, determines overall pellet performance with respect to nutrient leaching, water stability, and pond turbidity. In practice, binders that can be digested and assimilated are chosen. Polysaccharides such as starch play an important role in aquafeed development, providing the animals with necessary carbohydrates while also acting as a binder responsible for the adhesion of the feed components. Extruded pellets depend on starch gelatinization, since no binders are used in their formulation. Starches such as maize, millet, guinea corn, wheat, and cassava improve pellet durability, contain high protein levels, and make good binders in extruded feed pellets [8]. These binders are capable of generating air traps in the formulated feeds, thereby improving the physical integrity of the feed in water.
On the other hand, unbranched polysaccharides from seaweeds, such as agar, sodium alginate, and carrageenan, have been widely applied in aquaculture nutrition, mainly as binders. Ruscoe et al. [32] compared carrageenan, CMC, agar, and gelatin as binders at different concentrations in freshwater crayfish feeds and concluded that carrageenan and CMC at 5% concentration performed significantly better than both agar and gelatin. Meanwhile, research carried out by Paolucci et al. [57] reported that agar performed better than both sodium alginate and carrageenan during feed manufacturing. Agar is usually activated when heated to 80-85°C, and the binding of the feed components generally begins once the solution cools to its gelling temperature in the range of 32-43°C [57].
Water stability and durability
Compared to fish pellets, pellet disintegration and nutrient leaching in crustacean pellets require more attention, because crustaceans are benthic organisms and slow feeders [19]. Physical features such as pellet stability and durability are especially critical where larger pellet sizes are used [32], with longer soaking times and the least possible leaching of nutrients [58]. It has been suggested that crustacean pellets should retain a minimum of 90% dry matter even after 1 hour of exposure to water; standard dry pellets are therefore not always suitable, as they do not solve the nutrient-leaching problem [59]. Crustaceans, especially crabs and crayfish, are very rough in handling food, using their chelipeds and mouth appendages to grasp and break up the food into smaller pieces prior to ingestion [32,34], so sinking, water-stable pellets are necessary.
Pellet water stability is defined as the ability of the pellet to retain its integrity and nutrients in water until it is consumed by the animal [19], while durability is defined as the ability of the pellet to maintain its shape during handling, transportation, and pneumatic conveying, without breaking into smaller particles [51]. In aquatic pellets, water stability is determined by the type of binding agent holding the pellet together. Good water stability allows feed intake to be optimized despite rough handling and vigorous mastication, so that the nutrients required for growth are delivered. Internal factors such as a slow feeding rate, and external factors such as strong water currents and vigorous aeration in the tank, accelerate pellet disintegration, which can result in nutrient leaching [32,51] and consequently increase water turbidity from the suspended materials. Binders help hold the feed components together, minimizing void spaces and maintaining pellet integrity, thus producing a more compact and durable pellet [32].
Buoyancy
For fish, floating feed is fundamental for optimum feed intake, since fish are fast swimmers that naturally eat in the water column [60]. Different ingredient combinations, particularly the binding agent, yield better pellet characteristics such as buoyancy, good water stability, digestibility, minimal wastage of raw materials, and low water pollution [61]. Fish feeds specifically require good binding agents that stabilize the feed and prolong its flotation period in water while maintaining its nutritional value. Uneaten feed that sinks to the bottom of the pond because of a short flotation time will eventually deteriorate the water quality, acting as a fertilizer that can trigger algal blooms through the high nutrient input [62]. Additional costs may also be incurred in maintaining good water quality when feed performance is low [63]. Good binding agents contribute to minimal wastage and provide the fish with optimal nutrient utilization [56]. Floating feeds are advantageous in that they allow the farmer to observe the feeding activity of the fish closely, and uneaten feed can be removed immediately, preventing water-quality problems, since it remains afloat in the water column [64].
Unlike fish feeds, pellets for slow, bottom-feeding species should sink and remain intact for a long time, and are characterized by a less expanded structure and high density. Compared to floating fish feeds, sinking pellets remain available on the bottom for longer, suiting slow bottom feeders such as crayfish [32], shrimp [65,66], and mud crabs [36,67]. For these reasons, moist or dry sinking pellets are more appropriate, since they have a higher density than floating pellets. Experiments on soft-shell portunid crabs observed that the crabs had a hard time grasping floating feed with their claws, signifying that sinking pellets would be more appropriate [67].

Table 3 shows the macro- and micronutrients at different crustacean life stages. The main groups of nutrients in crustacean diet studies are protein, carbohydrates, and lipids, considered macronutrients, while vitamins, minerals, and feed additives form the micronutrient group (Table 3). Nutrition plays an important role in the development of the ovaries [68]. Although some crustacean species can survive a period of starvation due to insufficient food supply, whether in the hatchery or in the wild, more lipid reserves are then used to sustain metabolic functions, which retards growth and reproductive activity [69]. Feed selection in aquaculture determines the time taken for crustaceans to reach sexual maturation. Lipid and protein are described as the most important nutrient classes, acting as the main nutrient sources for embryonic development [70].
Protein requirement for crustacean broodstock
Protein is a macronutrient in crustacean feed ingredients that promotes growth, fattening, and reproduction in aquatic animals. Optimal protein levels are especially important in juvenile crabs, since they grow actively through molting. An inadequate protein supply hinders growth [71] and sometimes causes mortality, especially in juvenile crabs, through a prolonged intermolt period [41]. Yet a dietary protein surplus leads to water deterioration through the degradation of protein leftovers into ammonia or urea [72]. Hence, information on dietary protein requirements is vitally important to ensure good growth and maturation. Many investigations have been carried out to determine the protein requirements of different crustaceans, such as prawns, shrimps, swimming crabs, and mud crabs. The results show that protein requirements are species-specific, ranging from 22% to 60% [37,41,72], and that dietary protein requirements at the juvenile or early life stages are usually higher than those of mature animals for most crustacean species.
Lipid requirement for crustacean broodstock
Lipids encompass various classes of organic molecules, such as triacylglycerols, phospholipids, sterols, waxes, carotenoids, and fatty acids [10]. Lipids, along with proteins and carbohydrates, provide the body with energy; lipids differ from proteins and carbohydrates in that they provide roughly twice as much energy, serve as structural components of cell membranes, and act as important signaling molecules [75]. Neutral lipids, particularly triacylglycerols (also known as triglycerides), are the principal energy stores found in the adults, eggs, and larvae of most crustaceans [50]. Phospholipids function primarily in building the cell membrane [76], whereas cholesterol is the best-known sterol, serving as a precursor of physiological components including sex hormones, particularly ecdysone, which regulates molting in crustaceans [43,77]. Diets containing adequate cholesterol are essential to ensure good growth and survival in crabs. Fatty acids govern a wide range of physiological processes, including reproductive performance and egg quality in crustaceans [70].
Studies have demonstrated that lipid levels in most crustaceans increase with size, adults having higher lipid levels than juveniles. Some reserve lipids are catabolized for energy, while others are stored in the gonad for structural purposes, such as maturation and eicosanoid synthesis [78]. Information on lipid requirements is very important for the development of formulated feeds, to ensure that the nutrients suffice for good growth and maturation [10]. Previous studies have shown that lipid requirements are generally species-specific and differ between developmental stages. Nevertheless, collective studies on crustaceans conclude that optimal growth can be achieved with a total lipid level of 2-10% [39] or 2-12% of diet dry weight [37].
Fatty acids can be further divided into several classes: saturated fatty acids (SAFA), monounsaturated fatty acids (MUFA), and polyunsaturated fatty acids (PUFA). Fatty acids with no double bond are grouped as SAFA, while MUFA have a single double bond in their carbon chain. Unlike SAFA and MUFA, PUFA contain more than one double bond in their carbon backbone. They act as precursors of animal hormones and play an important role in regulating cell membranes. PUFA are commonly synthesized by plants, the primary producers of carbon in marine ecosystems, which synthesize various important biological molecules such as carbohydrates, proteins and lipids [79]. Most animals cannot synthesize PUFA de novo, beyond converting one form of PUFA into another through elongation and desaturation. Such fatty acids are termed essential fatty acids (EFA), as they must be obtained through the diet; this includes linoleic and linolenic acid, since not all animals can produce them [80].
Meanwhile, highly unsaturated fatty acids (HUFA) are the subset of PUFA with 20 or more carbon atoms and 3 or more double bonds. They are responsible for survival, high growth and reproductive rates, and high food conversion in both marine and freshwater organisms [80]. Arachidonic acid (ARA, C20:4n6), eicosapentaenoic acid (EPA, C20:5n3) and docosahexaenoic acid (DHA, C22:6n3) are among the omega-6 and omega-3 long-chain HUFA (n-3 and n-6 LC-PUFA; C ≥ 20). Dietary EPA and DHA help to optimize animal growth [80], while ARA is the precursor of the eicosanoids that regulate reproductive success and female sexual behaviour [81]. In general, EPA and DHA can be obtained from the consumption of plant materials or through a series of elongations and desaturations of α-linolenic acid (ALA, C18:3n3), whereas elongation and desaturation of linoleic acid (LA, C18:2n6) produce ARA. During the desaturation and elongation process, the n-3 and n-6 PUFA derived from ALA and LA compete for the same desaturation enzymes to produce LC-PUFA [82,83].
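The classification described in this and the preceding paragraph reduces to a simple decision rule on the carbon chain. The following sketch encodes only the definitions given above (SAFA: no double bond; MUFA: exactly one; PUFA: more than one; HUFA: the PUFA subset with at least 20 carbons and at least 3 double bonds); it is an illustration, not a published classification scheme.

def classify_fatty_acid(n_carbons, n_double_bonds):
    """Classify a fatty acid from its carbon count and number of C=C
    double bonds, following the definitions given in the text."""
    if n_double_bonds == 0:
        return "SAFA"
    if n_double_bonds == 1:
        return "MUFA"
    if n_carbons >= 20 and n_double_bonds >= 3:
        return "HUFA (subset of PUFA)"
    return "PUFA"

# Examples named in the text: ALA (C18:3n3), LA (C18:2n6),
# EPA (C20:5n3), DHA (C22:6n3) and ARA (C20:4n6).
for name, (c, d) in {"ALA": (18, 3), "LA": (18, 2), "EPA": (20, 5),
                     "DHA": (22, 6), "ARA": (20, 4)}.items():
    print(name, "->", classify_fatty_acid(c, d))

Note that under these definitions ALA and LA come out as PUFA (18 carbons, so not HUFA), while EPA, DHA and ARA come out as HUFA, consistent with the text.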
Conclusions
The importance of pellet physical characteristics in aquaculture nutrition cannot be overemphasized. The advantages of good-quality pellets depend not only on the binding agent, but also on the attractants that enhance palatability and on the inclusion of the correct proportions of nutrients to boost animal performance. Knowledge of the possible interactions between food, feeding and diets in crustaceans needs to be updated and improved in order to increase the quality of seed and broodstock produced in captivity, especially in commercial aquaculture. The studies performed so far have clarified the interactions between food, feeding and diets for the crustacean group, as summarized graphically in the graphical abstract.
Inner and outer star forming regions over the disks of spiral galaxies. I. Sample characterization
Context. The knowledge of abundance distributions is central to understanding the formation and evolution of galaxies. Most of the relations employed for the derivation of gas abundances have so far been derived from observations of outer disk HII regions, despite the known differences between inner and outer regions. Aims. Using integral field spectroscopy (IFS) observations we aim to perform a systematic study and comparison of samples of inner and outer HII regions. The spatial resolution of the IFS, the number of objects, and the homogeneity and coherence of the observations allow a complete characterization of the main observational properties and differences of the regions. Methods. We analyzed a sample of 725 inner HII regions and a sample of 671 outer HII regions, all of them detected and extracted from observations of a sample of 263 nearby, isolated, spiral galaxies observed by the CALIFA survey. Results. We find that inner HII regions show smaller equivalent widths, greater extinction and luminosities, along with greater values of the [NII]λ6583/Hα and [OII]λ3727/[OIII]λ5007 emission-line ratios, indicating higher metallicities and lower ionization parameters. Inner regions also have redder colors and higher photometric and ionizing masses, although Mion/Mphot is slightly higher for the outer regions. Conclusions. This work shows important observational differences between inner and outer HII regions in star forming galaxies not previously studied in detail. These differences indicate that inner regions have more evolved stellar populations and are in a later evolutionary state than outer regions, in line with the inside-out galaxy formation paradigm.
Introduction
It is widely recognized that the knowledge of abundance distributions in galaxies is very important as a probe of their chemical evolution and star formation histories. H ii regions in external spiral and irregular galaxies provide an excellent means to derive the chemical abundances of different elements, both primordial and products of stellar nucleosynthesis. This information is central to guiding theoretical models of the formation and evolution of galaxies.
Among the different abundance-related parameters employed are: (i) the radial metallicity gradient; (ii) the average metallicity at a given fiducial galactic radius; and (iii) the central metallicity value, where the term "metallicity" usually refers to oxygen abundance, oxygen being the most abundant element in the universe after hydrogen and helium. In fact, the latter two parameters rely on the determination of the first, since they are calculated by interpolation and extrapolation of the metallicity distribution, respectively. However, it should be kept in mind that what is referred to as the "abundance" or "radial abundance gradient" of a galaxy in fact reflects an observational limitation derived from the need to use fixed-aperture and/or long-slit spectroscopy. The information actually required to address in depth the issue of the formation and evolution of disk galaxies is the map of the abundance distribution of the different elements, which up to now has represented a very difficult and highly time-consuming task. New multi-slit and integral field spectroscopy data are now readily obtained, and have greatly increased the number of H ii regions analyzed per galaxy, although our methodologies still suffer from essentially the same systematic uncertainties in the determination of gas abundance distributions.
In general terms, there are two different approaches to derive elemental gas abundances: the so-called direct method, which makes use of the measurement of the electron temperature T_e from the ratio of (faint) auroral to (strong) nebular lines of different elements (O, N, and S among others), and semi-empirical methods, in which combinations of (strong) nebular lines are used, through suitable calibrations, to infer abundances in regions where the (faint) auroral lines cannot be detected. Due to the cooling properties of the nebulae, the regions to which the direct method is applicable are of low metallicity and therefore, in the case of spiral galaxies, tend to reside in the outer disc zones, while the regions that require the application of semi-empirical methods, being of higher metallicity, tend to reside in the innermost zones of the discs. These different treatments of outer and inner disc H ii regions can complicate the interpretation of the radial gas abundance gradients, and could even produce artificial effects if inner and outer H ii regions differ in physical properties and ionization structure. Although the emission line spectra of H ii regions in the inner and outer regions of disks look alike, some differences between these two families are recognized. Inner H ii regions seem to show lower [O iii] λ5007/Hβ values than their outermost counterparts, an effect that can be produced by the combination of lower effective temperatures of their ionizing stars, higher dust content and higher metallicity; at the same time, [O ii] λ3727/[O iii] λ5007 is higher in the spectra of the inner regions, which seems to indicate a lower excitation of the gas; [N ii] λ6583/[O ii] λ3727 is also higher, pointing to a larger N/O ratio, also indicative of a higher overall metallicity (Perinotto 1983). A hint of somewhat higher electron density of the emitting gas in the inner regions has also been reported (Kennicutt et al. 1989; Bresolin et al. 2004; Díaz et al. 2007). At high abundances, such as those expected for inner or circumnuclear H ii regions, the density of the nebula significantly affects the strength of the emission lines, especially [O iii], due to the competition between collisional and radiative de-excitation in the nebular cooling fine structure O++ transitions (Oey & Kennicutt 1993).
Higher extinctions, such as could be expected in higher metallicity and higher density regions, could also have an impact on the emission line intensities. The presence of dust can modify the thermal structure of nebulae in several ways. Firstly, the removal of cooling agents from the gas phase via depletion onto grains will increase the electron temperature (Henry 1993). Secondly, dust grains can absorb a given fraction of the Lyman continuum photons and thus modify the ionizing radiation field (Mathis 1986). The absorbed energy will then be re-radiated in the IR (Mas-Hesse & Kunth 1991). Heating or cooling by photoemission or recombination from charged grains can also affect the thermal balance of the nebulae (Baldwin et al. 1991).
There is no doubt that the analysis of high excitation, low metallicity spectra is easier than that of the opposite case, and the larger contribution from the underlying stellar continuum in the case of the innermost regions represents a limitation. Therefore, despite any inferred different physical properties of inner and outer H ii regions, most of the relations currently employed for the derivation of abundances have been derived from observations of outer disk regions and have been assumed to be valid for all the ionized regions over the whole galactic disk. On the other hand, these inferences have been obtained from the study, albeit detailed, of a relatively small number of objects.
Fortunately it is now possible with the advent of multi-object spectrographs (MOS) or integral field units (IFU) to perform a complete spectroscopic mapping of the distributions of H ii regions over the disks of spirals (see e.g., Rosolowsky & Simon 2008;Rosales-Ortega et al. 2011;Bresolin & Kennicutt 2015). A brief account of the results of this kind of work includes the presence of a considerable dispersion in the derived abundances at a given galactocentric distance and the indication of possible azimuthal variations. The first could be due to the different sizes of the H ii regions observed, with the smallest regions being affected by stochasticity in the stellar mass function, and the second could be ascribed to differences in star formation in and between spiral arms and also to differences in mixing in the turbulent interstellar medium.
The recently completed CALIFA (Calar Alto Legacy Integral Field Area) survey provides an excellent opportunity to perform a systematic study of the properties of inner and outer H ii regions over the disks of spiral galaxies, since the homogeneity of the data regarding both observations and handling is a requirement to obtain reliable results. This in turn will give us the possibility of exploring the effects that any existing differences may have in derived properties of the regions themselves, such as elemental abundances, ionization structure, evolutionary state, amongst other. This is the first article of a series and presents the account of the observational properties of inner and outer regions in a sample of 263 nearby, isolated, spiral galaxies. In Section 2 we provide a summary of the observations on which the work is based. Section 3 presents the characteristics of both the galaxy sample and the H ii region sample used. Section 4 presents the results of this characterisation, together with their discussion. Finally our conclusions are summarized in Section 5.
Summary of observations and data reduction
The galaxies used in this work are part of the CALIFA project, one of the most ambitious 2D-spectroscopic surveys to date.
The observations were carried out at the Centro Astronómico Hispano-Alemán (CAHA) 3.5 m telescope. This work is based on the 350 galaxies observed using the low-resolution setup until September 2014. Most of these galaxies are part of the 2nd CALIFA Data Release (DR2, García-Benito et al. 2015), and therefore their datacubes are accessible from the DR2 webpage. The CALIFA survey is already finished, and the complete observations are included in the CALIFA final data release (DR3).
The details of the survey, sample, observational strategy, and data reduction are explained in the CALIFA presentation papers. All galaxies were observed using the Potsdam Multi Aperture Spectrograph (PMAS; Roth et al. 2005) in the PPAK configuration (Verheijen et al. 2004), that is, a retrofitted bare fibre bundle IFU which expands the field-of-view (FoV) of PMAS to a hexagonal area with a footprint of 74 × 65 arcsec^2, allowing us to map the full optical extent of the galaxies up to two to three disk effective radii on average. This is possible because of the diameter selection of the sample (Walcher et al. 2014, hereafter W14). The observing strategy guarantees a complete coverage of the FoV, with a final spatial resolution of full width at half maximum (FWHM) ∼3", corresponding to ∼1 kpc at the average redshift of the survey. The sampled wavelength range and spectroscopic resolution (3745-7500 Å, λ/Δλ ∼850 for the low-resolution setup used in this work) are more than sufficient to explore the most prominent ionized gas emission lines and to deblend and subtract the underlying stellar population (e.g., Kehrig et al. 2012; Cid Fernandes et al. 2013). The dataset was reduced using version 1.5 of the CALIFA pipeline. The flux calibration, signal-to-noise ratio (S/N) and related uncertainties of the CALIFA data products have been thoroughly discussed in several articles of the CALIFA collaboration (e.g., Cid Fernandes et al. 2014; García-Benito et al. 2015). For the 1st CALIFA Data Release (DR1, Husemann et al. 2013) the collaboration performed a data quality test showing that the sample reached a median limiting continuum sensitivity of 10^-18 erg s^-1 cm^-2 Å^-1 arcsec^-2 at 5635 Å and 2.2 × 10^-18 erg s^-1 cm^-2 Å^-1 arcsec^-2 at 4500 Å, for the V500 and V1200 setups respectively, which corresponds to limiting r- and g-band surface brightnesses of 23.6 mag arcsec^-2 and 23.4 mag arcsec^-2, or unresolved emission-line flux detection limits of roughly 10^-17 erg s^-1 cm^-2 arcsec^-2 and 0.6 × 10^-17 erg s^-1 cm^-2 arcsec^-2, respectively. The same limits, or slight improvements, were found in subsequent data releases.
Table 1. Physical properties of part of the CALIFA galaxies involved in this work, as described in the text. The complete table can be found in the online version of this paper. The corresponding sources are: (i) Galaxy name. (ii) Redshift, given by the CALIFA survey, which obtained the values from the SIMBAD database in January 2010 (see W14). (iii) Morphological type, the CALIFA survey's own classification, made by combining by-eye classifications from five collaborators (see W14). (iv) Inclination, from the axis ratios obtained by calculating light moments, using the expression given by Holmberg (1958).
Galaxy sample
The starting point of this work was the 350 CALIFA galaxies observed using the low-resolution setup until September 2014. From this initial sample we selected the spiral galaxies and discarded ellipticals and lenticulars, which have little gas and do not host major star formation processes. We also selected those spirals that are isolated, our objective being to analyze H ii regions unaffected by interaction or merging processes. Combining the isolated and merging classification and the morphological type designation from the CALIFA survey with the Hyperleda catalog (Makarov et al. 2014) classification as a matter of extra precaution, we finally selected 263 galaxies, which from now on constitute the main sample of our work. The main properties and characteristics of the 263 galaxies are included in a table that can be found in the online version of this paper. A part of this table is shown as an example in Table 1.
As it was important to ensure that the properties and parameter ranges of the 263 galaxies reproduce the statistical properties of the whole CALIFA sample, with the only exception being the exclusion of the earlier morphological types, some of the main characteristics of the galaxies are described in the following sections.
Redshifts and distances
Redshift values of our galaxies are those given by the CALIFA survey, which obtained them from the SIMBAD database in January 2010 (W14). Our galaxy sample has a redshift range, shown in Fig. 1, that covers the whole range of redshift values selected by CALIFA for its mother sample (0.005 < z < 0.03).
Distances for the CALIFA mother sample were obtained from NED and Hyperleda, finally adopting the NED infall-corrected ones as the fiducial distances. In this work we adopt the distances calculated from the distance moduli given by Hyperleda, which are corrected for Virgocentric infall. The distance range for our galaxy sample is also included in Fig. 1, along with the scale range expressed in kpc/".
2 http://leda.univ-lyon1.fr/
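For reference, the conversion from a distance modulus μ to a physical distance, and from distance to the angular scale quoted above, follows the standard relations (nothing survey-specific is assumed here):

$$ d\,[\mathrm{Mpc}] = 10^{(\mu - 25)/5}, \qquad \mathrm{scale}\,[\mathrm{kpc}/''] = \frac{10^{3}\, d\,[\mathrm{Mpc}]}{206265} \simeq 4.85\times10^{-3}\, d\,[\mathrm{Mpc}]. $$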
Morphological classification
We adopted the morphological classification performed by the CALIFA team (W14). The CALIFA collaboration found that the morphological classifications available from public databases were either incomplete for the CALIFA sample (e.g., Galaxy Zoo 2, with 535 matches; Willett et al. 2013) or missing a consistent classification into Hubble subtypes (NED). Therefore they undertook their own reclassification, using human by-eye classification (see W14).
One of the defining characteristics of the CALIFA mother sample is that it contains galaxies of all morphological types. In our case we only have spirals by selection, but our sample comprises galaxies of all spiral morphological types. The morphological type histogram of our sample (Fig. 2) follows a pattern in the Sa-Sm types similar to the one observed in the analogous histogram of the CALIFA mother sample (see W14).
Regarding the presence of bars and rings, using the Hyperleda classification we find that 40.3% of the galaxies in our sample are barred, and that in 17.1% the presence of a ring can be observed.
Inclination
As described in W14, inclination may be the cause of a selection effect in the CALIFA mother sample, and thus in ours. Isophotal sizes of flattened, transparent (no attenuation) galaxies vary with inclination, due to the projected change of surface brightness (e.g., Opik 1923). It is therefore easier for an inclined disk galaxy to get into a sample defined by a minimum apparent isophotal size than it is for a face-on system of the same intrinsic dimensions. The magnitude of this effect depends on the degree of transparency; it is strongest for a fully transparent galaxy, and it disappears when the system is opaque, so that only its surface is observed. An excess of galaxies with high inclinations can therefore be expected in the CALIFA sample, at least among disk-dominated systems, and the same may occur in the spiral sample of this work.
During the characterization of their mother sample, the CALIFA team detected this effect when studying the isophotal major and minor axes delivered by the SDSS photometric pipeline, which can be combined into an axis ratio at the outer 25 mag/arcsec^2 level (see W14). They found that the histogram of isophotal axis ratios was clearly skewed toward low values of b/a, providing an indication of the considered selection effect. Furthermore, they also examined the 55 galaxies of the CALIFA mother sample that have Mr > -18.6, that is, that are below the completeness limit. Nearly all of these galaxies have axis ratios below 0.4, and it can be visually confirmed that these are predominantly disk-dominated systems that are close to edge-on. The CALIFA team presumed that very few, if any, of these galaxies would have been included in the CALIFA sample if seen face-on; their angular sizes have been boosted through inclination, just enough to promote them into the sample. They reached the conclusion that, while the CALIFA sample has a higher proportion of inclined disk galaxies at the faint end, the overall effect is not large. Specifically, for the galaxies close to and below the low-luminosity completeness limit there is at any rate a clear surplus of galaxies with very high inclinations in the CALIFA sample. We derive the inclination values for our galaxy sample from the b/a axis ratios given by CALIFA, which were obtained by calculating light moments. The final b/a value is the mean of the axis ratios of ellipses containing 50% and 90% of the total flux (see W14). To obtain the final inclination values we use the expression given by Holmberg (1958), where the value of the axial ratio for an edge-on system as a function of the galaxy morphological type is given by Heidmann et al. (1972); the expression is reproduced after this paragraph. In Fig. 3 we represent the inclination values for the whole sample and also for the 31 galaxies that are below the CALIFA completeness limit. We find a distribution skewed toward high inclination values, which is more prominent for the faint galaxies. We consider therefore that we are detecting the selection effect already present in the CALIFA mother sample, which specifically affects galaxies with low luminosity. This high number of high-inclination or edge-on galaxies has to be taken into account, as it implies that these galaxies will have higher uncertainties in the determination of the distances of their star forming regions to the center of the system, and it can also affect the morphological classification and other factors.
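The Holmberg (1958) expression referred to above is commonly written as

$$ \cos^{2} i = \frac{(b/a)^{2} - q_{0}^{2}}{1 - q_{0}^{2}}, $$

where b/a is the observed axis ratio and q0 is the intrinsic axial ratio of an edge-on system, taken as a function of morphological type from Heidmann et al. (1972); when b/a < q0 the galaxy is taken to be edge-on (i = 90°).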
Effective radius
In this work we used the disk effective radius, classically defined as the radius at which one half of the total light of the system is emitted, as the normalization factor to analyze the radial distributions of galaxy properties and compare them galaxy to galaxy. Concerning the study of radial gradients and the 2D distribution of galactic properties, although there is a large number of studies on the issue, we find a large degree of discrepancy among them. One of the factors that may cause these differences is that there is no uniform method to analyze the gradients. In some cases the physical scales of the galaxies (i.e., the radii in kpc) are used (e.g., Marino et al. 2012). In others the scale lengths are normalized to the R25 radius, that is, the radius at which the surface brightness in the B band reaches the value of 25 mag/arcsec^2 (e.g., Rosales-Ortega et al. 2011). Finally, a reduced number of studies try to normalize the scale length based on the effective radius. Díaz (1989) already showed that the effective radius seems to be the best choice for normalizing abundance gradients. Using the physical scale of the radial distance, or one normalized to an absolute parameter like the R25 radius, does not produce gradients that can be compared galaxy to galaxy, since in both cases the derived gradient is correlated with either the scale length of the galaxy or its absolute luminosity.
We used the effective radius values estimated by the CALIFA survey. The calculation is based on an analysis of the azimuthal surface brightness profile, derived from an elliptical isophotal fitting of the ancillary g-band images collected for the galaxies (extracted from the SDSS imaging survey, York et al. 2000; Mármol-Queraltó et al. 2011). When these ancillary images were not available, the B band was used (Mármol-Queraltó et al. 2011). Our galaxy sample contains a wide range of effective radii, as shown in Fig. 4, which implies that we selected galaxies with a wide range of sizes.
Color-magnitude diagrams
The color-magnitude diagram of the 263 galaxies of our sample, represented in Fig. 5, shows that they fully cover the range in absolute magnitudes where the CALIFA sample is representative of the overall galaxy population. This is consistent with the results obtained by Schawinski et al. (2014), who noted that late-type galaxies do not separate into a blue cloud and a red sequence, but rather span almost the entire color range without any gap or valley.
Spectroscopic information of the H ii regions sample
The H ii region segregation and the corresponding spectra extraction are performed using a semi-automatic procedure named HIIexplorer, described in Rosales-Ortega et al. (2012). It is based on the following assumptions: (a) H ii regions are peaky and isolated structures with a strong ionized gas emission, significantly above both the stellar continuum emission and the average ionized gas emission across the galaxy; this is particularly true for Hα. (b) H ii regions have a typical physical size of about one hundred or a few hundred parsecs (e.g., González Delgado & Pérez 1997; Lopez et al. 2011; Oey et al. 2003), which corresponds to a typical projected size of a few arcsec at the distance of the galaxies. These basic assumptions rest on the fact that most of the Hα luminosity observed in spiral and irregular galaxies is a direct tracer of the ionization of the interstellar medium (ISM) by the ultraviolet (UV) radiation produced by young high-mass OB stars. Since only high-mass, short-lived stars contribute significantly to the integrated ionizing flux, this luminosity is a direct tracer of the current star formation rate (SFR), independent of the previous star formation history. Therefore, clumpy structures detected in the Hα intensity maps are most probably associated with classical H ii regions (i.e., those regions for which the oxygen abundances have been calibrated).
For each region selected by HIIexplorer, we extracted an integrated spectrum from the spaxels belonging to that region. For each individual extracted spectrum we then modeled the stellar continuum using FIT3D, a fitting package described in Sánchez et al. (2006) and Sánchez et al. (2011). The FIT3D version used at the time of this fitting adopted a simple SSP template grid with 12 individual populations, comprising four stellar ages (0.09, 0.45, 1.00, and 17.78 Gyr), two young and two old, and three metallicities (0.0004, 0.019, and 0.03; that is, subsolar, solar, and supersolar, respectively). The models were extracted from the SSP template library provided by the MILES project (Vazdekis et al. 2010; Falcón-Barroso et al. 2011). The Cardelli et al. (1989) law for the stellar dust attenuation, with a specific attenuation of R_V = 3.1, was adopted, assuming a simple screen distribution.
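As a minimal illustration of this continuum-fitting step, a non-negative linear combination of SSP templates can be fit to each extracted spectrum; the sketch below is a toy version of the underlying idea, not the actual FIT3D implementation, which additionally solves for kinematics and dust attenuation.

import numpy as np
from scipy.optimize import nnls

# Toy spectrum fit as a non-negative combination of SSP templates
# (columns of `templates`); here 12 SSPs, as in the grid described above.
rng = np.random.default_rng(0)
n_pix, n_ssp = 500, 12
templates = np.abs(rng.normal(1.0, 0.3, size=(n_pix, n_ssp)))
true_w = np.zeros(n_ssp)
true_w[[1, 7]] = [0.6, 0.4]
observed = templates @ true_w + rng.normal(0.0, 0.01, n_pix)

weights, rnorm = nnls(templates, observed)   # non-negative least squares
continuum_model = templates @ weights
gas_spectrum = observed - continuum_model    # emission lines are measured here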
Individual emission line fluxes were measured in the stellar-population-subtracted spectra by performing a multicomponent fitting using a single Gaussian function per line. The equivalent widths for each H ii region and line were estimated using the results from the fitting analysis instead of the classical procedure, by dividing the emission line integrated intensities by the underlying continuum flux density. The continuum was estimated as the median intensity in a bandwidth of 100 Å centered on the line, using the gas-subtracted spectra provided by the fitting procedure. The errors in the determination of the emission line fluxes and their reliability have been discussed extensively for the Pipe3D/FIT3D fitting technique.
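A minimal sketch of the single-Gaussian line fit and the EW estimate just described, assuming a continuum-subtracted spectrum and a separately estimated median continuum level (all values below are illustrative only):

import numpy as np
from scipy.optimize import curve_fit

def gaussian(wl, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Synthetic continuum-subtracted spectrum around Halpha (6563 A)
wl = np.linspace(6500.0, 6630.0, 400)
rng = np.random.default_rng(1)
spec = gaussian(wl, 5.0, 6563.0, 2.0) + rng.normal(0.0, 0.05, wl.size)

popt, _ = curve_fit(gaussian, wl, spec, p0=[spec.max(), 6563.0, 2.0])
line_flux = popt[0] * abs(popt[2]) * np.sqrt(2.0 * np.pi)  # Gaussian integral

# EW = integrated line flux / continuum flux density, with the continuum
# taken as the median of the gas-subtracted spectrum in a 100 A window.
continuum_level = 1.0                                      # placeholder value
ew = line_flux / continuum_level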
After applying the whole process to the 263 CALIFA galaxy datacubes, we detected a total of 12891 H ii regions. Nevertheless, not all these regions can be accepted as confirmed H ii regions, due to the high level of noise of some spectra or to non-physical values of some parameters such as Hα/Hβ. Therefore we applied a quality control process, considering the following criteria to ensure that we are working with bona fide H ii regions and to avoid selection uncertainties: (i) EW(Hα) > 6 Å, following Cid Fernandes et al. (2010) and Sánchez et al. (2015). (ii) Hα/Hβ > 2.7. We consider the theoretical value for the intrinsic line ratio Hα/Hβ from Osterbrock & Ferland (2006), assuming case B recombination (optically thick in all the Lyman lines), an electron density of n_e = 100 cm^-3 and an electron temperature of T_e = 10^4 K. Lowering the electron temperature to 5000 K, keeping the electron density constant, increases the Balmer decrement Hα/Hβ by a factor of 1.05 and translates into an uncertainty of 0.04 dex in c(Hβ) for the reddening curve employed. We have also included a certain margin to account for uncertainties in the observational values of the emission lines. (iii) Hα/Hβ < 6. This value corresponds to an extinction of ∼2.3 mag; we consider that values beyond this point are not physical. (iv) F(Hβ) > 0.5 × 10^-16 erg s^-1 cm^-2, to avoid lines with very low S/N. (v) F([O iii] λ5007) > 0.5 × 10^-16 erg s^-1 cm^-2. This condition was introduced due to non-physical values of the [O iii] λ5007/Hβ emission-line ratio observed for some regions in a preliminary version of the data. We finally obtained a sample of 9281 selected H ii regions.
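The quality-control cuts listed above amount to a simple per-region filter; the following sketch applies them literally (the variable names are illustrative, with fluxes in units of 10^-16 erg s^-1 cm^-2):

def is_bona_fide_hii(ew_ha, ha_hb, f_hb, f_oiii_5007):
    """Return True if a region passes the quality-control criteria
    (i)-(v) described in the text."""
    return (ew_ha > 6.0             # (i) EW(Halpha) > 6 A
            and 2.7 < ha_hb < 6.0   # (ii)-(iii) physical Balmer decrement
            and f_hb > 0.5          # (iv) F(Hbeta) above the S/N floor
            and f_oiii_5007 > 0.5)  # (v) F([O III] 5007) above the floor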
Inner regions sample
We considered as inner regions those that fulfil the criterion established by Álvarez-Álvarez et al. (2015), based on the observed separation between nuclear and disk region rings as a function of the galaxy luminosity. Following this criterion, inner regions are those located closer to the center than a limiting distance that depends on the galaxy B-band luminosity (see Álvarez-Álvarez et al. 2015). We calculated the B-band magnitude from the g and r magnitudes from SDSS, using the transformation given by Lupton (2005). After applying this criterion to the primary region sample we obtained a total of 794 inner regions. Nevertheless, detailed examination of the extracted spectra showed that not all the regions have clearly visible emission from typical H ii region spectral lines other than Hα. This could be expected, due to the strong stellar continuum found in the inner parts of the galaxies, which prevents the detection of weaker spectral lines. Therefore we made a second, by-eye selection, discarding those regions whose spectra were dominated by stellar continua with only weak Hα emission, or whose gas emission features were not clearly detectable. After that we obtained a final sample of 725 regions with spectra in which Hα, Hβ, [O iii] λ5007, [N ii] λλ6548,6583 or the [S ii] λλ6717,6731 doublet are measurable.
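For reference, the Lupton (2005) g,r-to-B transformation has, to the best of our knowledge, the form below; the coefficients should be verified against the original SDSS documentation:

$$ B = g + 0.3130\,(g - r) + 0.2271 . $$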
Outer regions sample
We considered as external H ii regions those located at a distance larger than two effective radii (R_eff) from the center of the galaxy. It is around this radius that a certain flattening is found in the abundance gradients of spirals (Díaz 1989; Sánchez et al. 2014; Marino et al. 2016; Sánchez-Menguiano et al. 2016). From the primary sample of 9281 regions, a total of 1027 regions were located beyond this distance from the center of the system. Nevertheless, as happened with the inner regions, not all of them show H ii region emission features or a sufficiently high S/N. We therefore applied a second by-eye selection, obtaining a final sample of 671 outer regions.
Observational and functional parameters
After the process of extraction and selection we obtain a final sample of 725 inner regions and 671 outer regions. Some of the inner region spectra are shown as examples in Fig. 6, while some of the outer region spectra are shown in Fig. 7.
The number of inner and outer regions included in the final samples allows us to develop a statistical analysis of several spectroscopic properties, especially those based on the strongest detected emission lines, such as the following: (i) EW(Hα), the equivalent width of Hα, which is directly related to the fraction of very young stars in the region. (ii) A_V, the dust attenuation, calculated using the Balmer decrement according to the reddening function of Cardelli et al. (1989), assuming R ≡ A_V/E(B−V) = 3.1; the theoretical value for the intrinsic line ratio Hα/Hβ was considered as explained in Sect. 3.2 (a worked expression is given after this paragraph). (iii) L(Hα), the Hα luminosity, obtained from the reddening-corrected Hα flux, considering the distances to the corresponding galaxies. (iv) The [N ii] λ6583/Hα line ratio, related to the oxygen abundance of the ionized gas, which along with [O iii] λ5007/Hβ provides information about the nature of the ionization source of the region. (v) The [O ii] λ3727/[O iii] λ5007 line ratio, related to the ionization parameter log u, a measurement of the strength of the ionizing radiation (Díaz et al. 2000). (vi) The [S ii] λ6717/[S ii] λ6731 line ratio, related to the electron density (n_e) of the ionized gas. Firstly, the EW(Hα) histograms show smaller EW(Hα) values for inner H ii regions. Sánchez et al. (2014), who work with a sample of 7016 H ii regions from 227 CALIFA galaxies, also selected and extracted with HIIexplorer, find a strong log-linear correlation between EW(Hα) and the percentage of young stars in the regions, obtained from the FIT3D fitting of the underlying stellar population. This correlation is valid for regions with EW(Hα) > 6 Å and a percentage of young stars over 20%. All our regions have EW(Hα) over 6 Å, as this is one of our selection criteria. We consider the smaller EW(Hα) values of our inner regions sample to be caused by the greater influence of the underlying stellar populations in those regions, and therefore by smaller percentages of young population (see Sect. 4.5). Secondly, in the middle histograms we observe larger A_V values for inner regions, denoting greater dust attenuation. Finally, the L(Hα) histograms reveal larger luminosities for inner regions. This is concordant with previous observations of very luminous H ii regions located close to their galactic nuclei (Álvarez-Álvarez et al. 2015), although it may also be influenced by a selection bias, whereby in central regions, where the underlying continuum has a great influence, only the more luminous H ii regions can be detected. This possible selection bias is studied later in this section.
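The worked expression for item (ii) follows from the Balmer decrement in the standard way, assuming an intrinsic ratio Hα/Hβ = 2.86 (case B, T_e = 10^4 K, n_e = 100 cm^-3; Sect. 3.2) and the approximate Cardelli et al. (1989) coefficients κ(Hβ) ≈ 3.61 and κ(Hα) ≈ 2.53:

$$ E(B-V) = \frac{2.5}{\kappa(\mathrm{H}\beta) - \kappa(\mathrm{H}\alpha)}\, \log_{10}\!\left[\frac{(\mathrm{H}\alpha/\mathrm{H}\beta)_{\mathrm{obs}}}{2.86}\right], \qquad A_V = 3.1\,E(B-V). $$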
We find differences between the equivalent width and extinction values of inner regions as a function of the morphological type of their host galaxies, as can be seen in Fig. 9. In early-type spiral galaxies the greater prominence of the bulge implies a greater influence of the older underlying population, which means a decrease in the ionizing population percentage and in the equivalent width values. It also implies larger amounts of dust and therefore higher extinction values for these regions. On the contrary, late-type spirals, with little to no bulge component, have increasingly higher equivalent width values and lower extinction. For the outer regions, differences between morphological types are almost negligible.
Clear differences between inner and outer regions are also detected in the histograms of emission-line ratios: [N ii] λ6583/Hα is higher in the inner regions, as could be expected for more evolved stellar populations and a more enriched interstellar medium. Inner regions also have greater values of the [O ii] λ3727/[O iii] λ5007 ratio, indicating in this case smaller values of the ionization parameter. In the case of the [S ii] λ6717/[S ii] λ6731 histograms we find similar average values for outer and inner regions, corresponding to electron density values smaller than 10 cm^-3 (Osterbrock & Ferland 2006). All the emission-line ratio histograms show sharper distributions for inner regions, while the outer regions are more scattered. This may be caused by lower uncertainties for the inner regions, whose spectra have higher S/N values.
Using the relation between the Hα luminosity and the number of ionizing Lyman continuum photons given by González-Delgado et al. (1995), we calculated the number of ionizing photons for every H ii region. Results for both the inner and outer samples are shown in Fig. 10. Inner regions have higher values, as could be expected from their higher Hα luminosities. As mentioned above, this could be an intrinsic property or the result of a selection bias, caused by the fact that the smaller inner regions are not detected due to the lack of spatial resolution and/or contrast with respect to the bright bulge. In order to study the magnitude of this possible bias we represent the histograms of the angular area in arcsec^2 of the inner and outer regions in Fig. 11. We can see that, although the outer regions do in fact have a tail of smaller regions that is not present in the inner regions histogram (probably caused by this bias), the number of regions in this tail is not enough to cause the difference in value ranges observed in the Hα luminosity and ionizing photon histograms. Therefore we conclude that, although this selection bias has a small influence, there is an intrinsic difference in luminosity and number of ionizing photons between inner and outer H ii regions.
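For orientation, the conversion from Hα luminosity to ionizing photon rate has the standard case-B form shown below; the coefficient is the commonly quoted one, and the exact value adopted by González-Delgado et al. (1995) may differ slightly:

$$ Q(\mathrm{H}) \simeq 7.3\times10^{11}\left[\frac{L(\mathrm{H}\alpha)}{\mathrm{erg\ s^{-1}}}\right]\ \mathrm{photons\ s^{-1}}. $$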
Systematics in diagnostic diagrams
Emission-line diagnostic diagrams (introduced by Baldwin et al. 1981, hereafter BPT) are a powerful way to study the nature of the dominant ionizing sources and changes in the physical conditions of ionized nebulae, either from galaxy to galaxy, within each galaxy, or within a particular nebula. BPT diagrams work by exploring the location of certain line ratios involving several strong emission lines that depend on the ionization degree and, to a lesser extent, on temperature or abundance. Through the application of different classification criteria (Kewley et al. 2001; Kauffmann et al. 2003), diagnostic diagrams allow the separation of galaxies or galaxy regions into those dominated by ongoing star formation and those dominated by non-stellar processes.
A&A proofs: manuscript no. mrodriguezbaras_arxiv Fig. 9. Left column: EW(Hα) and A V histograms from the inner H ii regions sample as a function of their galaxies morphological types. Regions with Sa, Sab and Sb type galaxies are colored in dark red, those with Sbc, Sc and Scd type galaxies in orange and those with Sd, Sdm and Sm type galaxies are colored in gold. EW(Hα) is expressed in units of Åand A V in magnitudes. Right column: Histograms with the same parameters as those from the outer H ii regions sample. Regions with Sa, Sab and Sb type galaxies are colored in dark blue, those with Sbc, Sc and Scd type galaxies in blue and those with Sd, Sdm and Sm type galaxies are colored in light blue.
The distributions of the CALIFA inner region sample and outer region sample in one of the most classical BPT diagrams, [O iii] λ5007/Hβ vs [N ii] λ6583/Hα, are displayed in Fig. 12. In the case of the outer regions sample, the data are color-coded according to their distance to the center of the galaxy, in the bins indicated in the plot. It can be observed that the inner regions are mostly located towards the bottom-right corner of the classical star forming branch, with the exception of a few regions located in the active galactic nuclei (AGN) zone demarcated by the Kewley et al. (2001) classification line, which will be analyzed below. On the contrary, outer regions are more distributed along the star forming branch, and are generally located closer to the top-left corner. This is not surprising, as regions closer to the centers of the galaxies are expected to have higher metallicities. Furthermore, Sánchez et al. (2014) and Sánchez et al. (2015) found a clear correlation reflected in the distribution of their H ii regions in the BPT diagrams, relating lower percentages of young stars with higher values of [N ii] λ6583/Hα. This is consistent with our results, as we would expect a larger contribution of the underlying stellar populations in the inner regions, and thus a lower percentage of young stars than in the outer regions.
We have studied the spectra of the H ii regions that, according to the Kewley et al. (2001) classification criteria, are located in the AGN region of the BPT diagram in Fig. 12. In the case of the outer regions sample, five regions lie in the AGN zone, belonging to the galaxies IC2101 (two regions), IC1528, UGC09542 and UGC09598. All their spectra have low S/N, with particularly low values of the [O iii] λ5007 emission line. Considering this and the magnitude of the associated errors, we consider that the location of these regions in the AGN zone is due to the uncertainties in the emission line flux measurements.
In the case of the inner regions sample, six regions are located in the AGN zone. Three of them are close to the classification line, while the other three are well inside the AGN region. These last three regions belong to the galaxies NGC2410 and UGC03973 (two regions). NGC2410 is classified as a Seyfert 2 by NED (Véron-Cetty & Véron 2006) and UGC03973 as a Seyfert 1 by NED (Contini et al. 1998). Therefore we consider that the location of these regions in the AGN zone is due to the influence of the corresponding nuclear emission; in fact, the active nucleus emission features are easily detectable in their spectra. On the other hand, the other three regions belong to the galaxies IC2247, UGC00005 and UGC03151, which are not classified as active. Their spectra have low S/N values and, as explained for the outer regions, we consider that their location in the AGN region is due to the uncertainties in the emission line measurements.
Observations of H ii regions with higher-spatial resolution
During the analysis of our CALIFA H ii regions sample we considered the possibility of including other IFS observations of H ii regions with higher spatial resolution, such as the sample extracted from the PPAK IFS Nearby Galaxy Survey (PINGS; Rosales-Ortega et al. 2010), in order to extend our data. While CALIFA galaxies have a redshift range of 0.005 < z < 0.03 (see Sect. 3.1.1), PINGS galaxies have much lower redshifts. This implies a loss of resolution, which was studied by Mast et al. (2014) using some of the PINGS galaxies, simulated at higher redshifts to match the characteristics and resolution of the galaxies observed by the CALIFA survey. Regarding H ii region selection, the authors conclude that at z ∼ 0.02 the H ii clumps can contain on average between one and six of the H ii regions obtained from the original data at z ∼ 0.001. This prevents a complete combined analysis of CALIFA and PINGS regions, as parameters depending on H ii region size (luminosity, masses) are not comparable. Despite this, we can consider PINGS regions when analyzing properties involving emission-line ratios, which are independent of the size of the regions. From the total sample of 17 nearby spiral galaxies included in the PINGS galaxy sample, we consider for this work those not involved in interaction or merging processes, as we did for the CALIFA galaxies. Our PINGS galaxy sample is therefore composed of four galaxies: NGC 628, NGC 1058, NGC 1637 and NGC 3184. Table 2 includes the main properties and characteristics of these four galaxies.
From the H ii regions catalog published by Rosales-Ortega (2009) we select those that fulfil our inner region criterion, specified in Sect. 3.2.1, obtaining a total of 79 inner regions distributed among the four PINGS galaxies. The specific numbers of total and inner regions for each PINGS galaxy are given in Table 2. We obtain no outer regions from the PINGS galaxies, as their spatial linear coverage is smaller than that of the CALIFA galaxies and no regions beyond 2 R_eff are extracted.
The distribution of the PINGS inner region sample in the [O iii] λ5007/Hβ vs [N ii] λ6583/Hα diagram, along with that of the CALIFA inner regions sample, is shown in Fig. 12. We can see that the PINGS inner regions follow the same pattern as the CALIFA inner regions: they have high [N ii] λ6583/Hα values, related to very high oxygen abundances and low percentages of young populations, and very low [O iii] λ5007/Hβ, due to low excitation values. The PINGS observations, with higher spatial resolution, allow the detection of inner regions located closer to their galaxy centers than the CALIFA inner regions. Therefore the location of the PINGS inner regions in the BPT diagram shows the continuity of the trend already indicated by the CALIFA inner regions.
The comparison with high-resolution circumnuclear star forming region (CNSFR) observations, which go deeper into the high-metallicity, high-density region around the galactic nucleus, is also of great interest. Díaz et al. (2007, hereafter D07) studied long-slit observations of 12 CNSFRs located in the early-type spiral galaxies NGC 2903, NGC 3351 and NGC 3504. As in the case of the PINGS observations, the different spatial resolution prevents comparison of properties depending on region sizes, but does not affect those related to the emission-line ratios. Data from these 12 CNSFRs are included in the inner regions BPT diagram in Fig. 12, confirming the trend of high oxygen abundances and low excitation values.
An interesting case of IFS observations of H ii regions located in different environments is the study by López-Hernández et al. (2013, hereafter LH13), which compares the central region of M33 with IC 132, an H ii region located at 19 arcmin (4.69 kpc) from the galactic center. These observations were obtained with the CAHA 3.5-m telescope, using the PMAS instrument in PPAK mode, as were the CALIFA and PINGS observations. Data from the central region and from the IC 132 region are included in the corresponding inner and outer BPT diagrams in Fig. 12. While the M33 central region confirms the high-metallicity, low-excitation values indicated by this work and by the PINGS and Díaz et al. (2007) data, the IC 132 region extends the trend of the outer regions, with lower metallicity and higher excitation.
Table 2. Physical properties of the PINGS galaxies involved in this work, obtained from the following sources: (i) Galaxy name. (ii) Redshift; references: NGC 628, Lu et al. (1993); NGC 1058 and NGC 3184, Springob et al. (2005); NGC 1637, Haynes et al. (1998).
Other diagnostic diagrams
The study of the relations between several emission-line ratios, which depend on the shape of the ionizing continuum and the physical conditions of the cloud, provides information on physical properties such as ages, degree of ionization, or abundances.
One such diagram is shown in Fig. 13, where the outer regions display emission-line ratio values denoting a higher degree of ionization than those of the inner regions sample. As only line ratios are involved, and therefore the different spatial resolution has no influence, the PINGS inner regions sample is also included, as well as the CNSFRs studied by D07 and the M33 central and IC 132 regions studied by LH13. Their location in the plots confirms and extends the trend followed by the CALIFA inner and outer region samples, as was already seen in the BPT diagrams. Figure 14 shows a related diagram; as in Fig. 13, the PINGS inner regions sample and the D07 and LH13 data are included in the figure, showing the same trend as the CALIFA inner and outer region samples.
The relation between EW(Hβ) and the [O ii] λ3727/[O iii] λ5007 emission-line ratio (see Fig. 15) provides information about the evolution of the star formation processes within a given galaxy. This line ratio is a proxy for the ionization parameter, which in turn is proportional to the quotient of the density of Lyman continuum photons and the electron density. The number of hydrogen ionizing photons decreases with the evolution of the ionizing cluster and, other things being equal, lowers the ionization parameter, hence increasing the [O ii] λ3727/[O iii] λ5007 line ratio (Hoyos & Díaz 2006). A trend of decreasing EW(Hβ), and therefore increasing age of the ionizing population, with increasing [O ii] λ3727/[O iii] λ5007 is observed when moving from the outer to the inner regions.
Furthest regions
The use of IFS techniques allows the detection of H ii regions located much further from the center of the galaxy than was previously possible (see e.g., Ferguson et al. 1998; van Zee et al. 1998; Werk et al. 2010). In this work, 10 of our 671 outer regions are located beyond 6 R_eff, and although five of these ten regions are located close to the projected axis of highly inclined galaxies, and therefore have large uncertainties in the determination of their distances to the galactic center, we consider this group worthy of specific study. Two of these region spectra are included as examples in Fig. 16, and their most prominent properties are listed in Table 3. Mean values of these properties for the whole outer H ii regions sample are also included in the table for comparison. These ten furthest regions were in fact already highlighted in the outer regions BPT diagram in Fig. 12, where it can be observed that they follow the general trend of the outer regions: they have low [N ii] λ6583/Hα and high [O iii] λ5007/Hβ emission-line ratio values, implying low oxygen abundances and high excitation. Other parameters, such as EW(Hα), A_V and L(Hα), also confirm the outer region trends, as they are in general in good agreement with the outer region average values, but slightly above or below, following the corresponding tendency observed in this work for the evolution of each parameter with galactocentric distance.
Color-magnitude diagrams
Magnitudes and colors of the outer and inner H ii regions are calculated from their extracted spectra, as a first approach to the spectroscopic properties of the stellar populations. For the calculation of the magnitudes we followed the process indicated in Mollá et al. (2009) and García-Vargas et al. (2013). The filters we considered are the B and V bands of the Johnson system, and the g and r bands of the Sloan SDSS ugriz system, as these are the ones covered by the wavelength range of our data.
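The magnitudes are computed from an expression of the standard synthetic-photometry type; the form given below is a reconstruction based on the variable definitions that follow, and should be checked against Mollá et al. (2009) and García-Vargas et al. (2013):

$$ m = -2.5\,\log_{10}\!\left[\int_{\lambda_1}^{\lambda_2} L_{\lambda}\, T(\lambda)\, \mathrm{d}\lambda \;+\; \sum_{i} L_{i}\, T_{i}\right] + C, $$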
where λ1 and λ2 are the passband limits of each filter, L_λ is the stellar SED luminosity, L_i is the integrated luminosity of the narrow line i, and T_i is the line filter transmission. We assumed that the line width is much narrower than the broadband filter passband. For the Johnson filters, C is the constant for flux calibration in the Vega system; according to the Girardi et al. (2002) prescriptions, Vega is taken as the average of the Lejeune et al. (1997) model spectra. In order to remove the contribution of the emission lines, which for the SDSS filter system can imply differences in colors of up to one magnitude at young ages (García-Vargas et al. 2013), we calculate the magnitudes by masking the mentioned emission lines in the region spectra. The g−r vs M_r color-magnitude diagram for both the outer and inner samples, calculated for the pure continuum without emission lines, is shown in Fig. 17. The color-magnitude diagram of the regions shows that inner regions are redder and have higher luminosities than outer regions, as expected for older regions with higher metal content, and in good agreement with the results obtained in Sects. 4.1 and 4.2.
Ionizing and photometric masses
The estimation of ionizing and total stellar masses provides more information about the average evolutionary stages of both region samples. Considering the number of ionizing photons for each H ii region calculated in Sect. 4.1, we estimated the ionizing cluster masses of the regions using the total number of ionizing photons per unit mass provided by the PopStar models (Mollá et al. 2009) for a zero-age main sequence with a Salpeter initial mass function with lower and upper mass limits of 1 and 100 M⊙ and Z = 0.008. For the inner regions sample we obtain a range of values between 2.43 × 10^3 and 7.66 × 10^6 M⊙, whereas a range of 4.66 × 10^2 to 7.36 × 10^5 M⊙ is obtained for the outer regions sample. In principle, these values are lower limits to the ionizing masses, as we are considering an unevolved stellar population with no photon escape and no dust absorption. The maximum effect of stellar population evolution can be estimated considering the total number of ionizing photons per unit mass given by the PopStar models with the same IMF and metallicity for a population of 5.2 Myr, as PopStar models consider that clusters older than this age do not produce a visible emission-line ionizing spectrum (Martín-Manjón et al. 2010). Under those conditions the ionizing mass ranges obtained are one order of magnitude larger than those obtained for the zero-age main sequence population.
Fig. 16. Spectra of two of the ten H ii regions located further than 6 R_eff from their galaxy center, before the subtraction of the stellar continuum. The galaxy name, the ID of the region in the galaxy, and the distance to the center are shown in the titles. Flux is expressed in units of 10^-16 erg s^-1 cm^-2.
One can also estimate photometric masses for the observed regions from the V magnitudes and B−V colors obtained as explained in Sect. 4.4, applying the mass-to-light relation described in Bell & de Jong (2001) for a scaled Salpeter IMF and a formation epoch model with bursts. Figure 18 shows the relation between the photometric mass values and the ratio of ionizing to photometric mass for both region samples. We observe that inner regions have photometric masses two orders of magnitude larger than outer regions on average. This is to be expected, due to the higher preponderance of underlying stellar populations from the galaxy bulge in the innermost circumnuclear regions. Again one could suspect a selection bias, such that only the biggest inner regions were detected, but the study of the region angular areas in Sect. 4.1 already showed that the influence of this bias is small, and therefore there is an intrinsic difference between inner and outer photometric masses. Interestingly enough, the ratios between ionizing and photometric masses are similar for inner and outer regions, with the outer regions' ratio values being slightly higher.
Fig. 18. Relation between the photometric mass values and the ratio between ionizing and photometric masses for the inner (red diamonds) and outer (blue crosses) region samples.
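The two mass estimates just described can be summarized schematically as follows; the numerical coefficients below are placeholders, not the actual PopStar or Bell & de Jong (2001) values.

import numpy as np

def ionizing_mass(q_obs, q_per_msun):
    """Lower-limit ionizing cluster mass: observed ionizing photon rate
    divided by the model rate per unit mass (e.g., a ZAMS PopStar value;
    units: photons s^-1 and photons s^-1 Msun^-1)."""
    return q_obs / q_per_msun

def photometric_mass(M_V, B_minus_V, a=-0.5, b=1.0):
    """Color-based stellar mass, log10(M/L_V) = a + b*(B-V), following
    the functional form of Bell & de Jong (2001); a and b here are
    placeholders, not their published coefficients."""
    L_V = 10.0 ** (-0.4 * (M_V - 4.83))   # L_sun, taking M_V(Sun) = 4.83
    return L_V * 10.0 ** (a + b * B_minus_V)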
Summary and conclusions
We have analyzed and compared a sample of 725 inner H ii regions, defined following the criterion of Álvarez-Álvarez et al. (2015), and a sample of 671 outer H ii regions, located further than 2 R_eff from their corresponding galactic centers. The H ii regions were detected and extracted by applying the HIIexplorer procedure (Rosales-Ortega et al. 2012) to the observations of a sample of 263 isolated spiral galaxies, part of the CALIFA survey. Different trends and values of the main physical properties of H ii regions are observed in the comparison between the inner and outer region samples. Inner regions show lower hydrogen line equivalent width values, higher extinction, and higher luminosities and numbers of ionizing photons, as well as larger values of the [N ii] λ6583/Hα and [O ii] λ3727/[O iii] λ5007 line ratios, related to higher oxygen abundances and smaller ionization parameters, respectively. According to these facts we conclude that inner regions have more evolved stellar populations and are in a later evolutionary state with respect to the outer regions. The distribution of both region samples across several diagnostic diagrams confirms this conclusion.
We have calculated magnitudes and colors from the extracted region spectra, observing that inner regions are redder and have higher luminosities, as expected. We have also estimated the photometric and ionizing stellar masses of the regions, obtaining higher masses for the inner regions and slightly higher M_ion/M_phot values for the outer regions.
This characterization of the observational properties of two homogeneous and coherent inner and outer region samples confirms and expands previous results on intrinsic differences depending on the location of the regions and the influence of the environment, related to different evolutionary stages, and therefore provides information about the formation and evolution processes of the galaxies. These aspects will be further explored in the second paper of this series by combining stellar population and photoionization models. | 2018-01-22T14:41:11.000Z | 2018-01-22T00:00:00.000 | {
"year": 2018,
"sha1": "f8d1a90e162539a1728f028125ec03c8f199a554",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2018/01/aa31592-17.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "f8d1a90e162539a1728f028125ec03c8f199a554",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
198910783 | pes2o/s2orc | v3-fos-license | Suicide Risk in Bipolar Disorder: A Brief Review
Bipolar disorders (BDs) are prevalent mental health illnesses that affect about 1–5% of the total population, have a chronic course and are associated with markedly elevated premature mortality. One of the contributors to the decreased life expectancy in BD is suicide. Accordingly, the rate of suicide among BD patients is approximately 10–30 times higher than the corresponding rate in the general population. Extant research found that up to 20% of (mostly untreated) BD subjects end their life by suicide, and 20–60% of them attempt suicide at least once in their lifetime. In our paper we briefly recapitulate the current knowledge on the epidemiological aspects of suicide in BD as well as factors associated with suicide risk in BD. Furthermore, we also concisely discuss possible means of suicide prevention in BD.
Introduction
With a lifetime prevalence of 1.3–5.0%, type-I and type-II bipolar disorders (BD-I; BD-II) are among the most common psychiatric ailments [1,2]. Patients with BD have a poor life expectancy, with a lifespan shortened by about 9–17 years compared with the general population. Furthermore, some studies from different countries (e.g., Denmark and the UK) suggest that this mortality gap has widened over the last decades. Although the largest number of excess deaths in BD may be attributed to natural (e.g., cardiovascular diseases or diabetes) rather than unnatural causes, suicide is also quite prevalent among subjects with BD [1][2][3][4][5].
At a global scale, approximately 800,000 suicide deaths occur every year (corresponding to a global suicide rate of 11.4/100,000/year); thus, suicide may be considered a major public health issue [6,7]. Although the great majority (≈90%) of suicide cases occur among subjects with major mental (typically mood) disorders, the majority of patients with mood disorders never engage in suicidal behaviour. Accordingly, in addition to major mood disorders, other risk factors (including specific clinical features of the mental illness as well as some demographic, personality and familial factors) must contribute to suicidality, which should therefore be regarded as a multicausal phenomenon [2,[8][9][10]. Hereinafter, we provide a concise summary of our current knowledge about suicidality in BD based on a review of the current literature (mainly review papers, book chapters, meta-analyses, treatment guidelines of international societies, etc.).
Epidemiology of Suicidal Behaviour in Bipolar Disorder
Suicidal behaviour is quite frequent among subjects with BD: up to 4–19% of them ultimately end their life by suicide, while 20–60% attempt suicide at least once in their lifetime [2]. In BD, the risk of suicide death is up to 10–30 times higher than that of the general population [2,5,8,[10][11][12]. The estimated annual suicide rate in patients with BD is about 200–400/100,000 [8]. BD-associated cases account for about 3–14% of all suicide deaths [13].
It is important to mention that the ratio of suicide attempts to suicide deaths (i.e., the lethality index) is much lower for patients with BD than for members of the general population (one study, for example, reported this ratio as 35:1 for the general population and 3:1 for BD patients) [2,8,9]. A possible explanation for this phenomenon may be that BD subjects usually employ more lethal suicide methods than members of the general population [2,8,9]. Nevertheless, attempt-to-suicide ratios lower than in the general population are not specific to BD, as they are also observable, for instance, among patients with schizophrenia or major depressive disorder (MDD) [2,14]. Unsurprisingly, suicidal ideation is also far more frequent in patients with BD (43% past-year prevalence) than in the general population (9.2% lifetime prevalence) [7,15].
Though it is indisputable that mood disorders are associated with markedly elevated levels of suicidality, it is hard to determine from the results of various studies whether there are relevant differences in the risk of suicidal behaviour between different kinds of mood disorders. Accordingly, higher, similar and lower levels of suicidality in BD patients compared to MDD patients have all been reported [9,10,16]. In a similar fashion, based on the published information it is hard to disentangle whether either BD subtype (BD-I or BD-II) is associated with a higher level of suicidality than the other [2,8,11,[16][17][18][19].
It is known that a relatively high proportion (8–55%) of patients with MDD have a history of subthreshold hypomanic symptoms. This so-called subthreshold bipolar subgroup of MDD patients differs from MDD patients without subthreshold hypomanic manifestations in several ways. For instance, a wide array of studies has demonstrated that subthreshold bipolarity is associated with increased levels of suicidality [20][21][22][23].
Risk Factors of Suicide in Bipolar Disorder
Several approaches exist to classify risk factors for suicide in BD. One of the most common systems divides risk factors into proximal and distal ones, where proximal (or precipitating) factors are close to suicidal behaviour in time whereas distal factors are rather considered as traits or predispositions and, accordingly, they are enduring [10,24]. Other classifications assign suicide risk factors to conceptual categories (e.g., risk factors associated with genetic or sociodemographic components or illness characteristics or life events) [8,25,26]. Based on different conceptual backgrounds complex models were conceived for the description of the whole process of suicide (e.g., the diathesis-stress model, the bipolar suicidality model, the interpersonal theory of suicide, the three-step theory model or the recently elaborated "neurocognitive model of suicide in the context of bipolar disorders") [10].
In the current paper, without the ambition to be exhaustive, we list and briefly discuss the most relevant risk and protective factors of suicide in BD. In regard to clinical history, previous suicide attempt(s) are considered one of the most powerful single predictors of future attempts and suicide death. The period soon after hospital discharge may be characterized by extremely high levels of suicidality; this finding draws attention to the importance of avoiding premature discharges and inappropriate follow-up. In addition, the risk of suicide is increased during the period immediately after hospital admission. Frequent and/or numerous prior hospitalizations are also associated with a heightened risk of suicidal self-harming behaviour. Early age at onset is also associated with suicidality in BD, and the early years after the diagnosis represent a high-risk period for suicide. Comorbidity with other psychiatric, addictive or severe somatic disorders also increases the risk of all forms of suicidal behaviour. A rapid-cycling course and predominant depressive polarity during the prior course are also associated with higher risks of self-destructive behaviour. One of the most important determinants of suicidal behaviour in BD is the type/polarity of the current mood episode/state: pure major depressive episodes and mixed states carry the highest risk, while suicidal behaviour is rarely present in (euphoric) mania, hypomania and during euthymic periods. However, some recent results indicate that there is no elevated risk of suicidal behaviour during mixed states over the risk attributable to their depressed component. Furthermore, these studies suggest that the majority of the suicide risk elevation related to previous mixed states is not an aftermath of the mixed state itself, but can rather be attributed to a depression-predominant course of the disorder. A longer duration of untreated illness (i.e., a long time lag from the beginning of the affective symptoms until treatment initiation) is also associated with higher hazards of suicidal behaviour. Regarding sociodemographic factors, male gender is a risk factor for lethal suicides, while, according to some results, female gender is a risk factor for attempts. These gender differences are similar to, but weaker than, those observable in the general population; accordingly, in this otherwise high-risk population gender seems not to be a significant predictor of suicidal behaviour. Suicidality is also more frequent among those bipolar subjects who are divorced, unmarried or single parents, or who live in social isolation. Age is a further important sociodemographic factor: BD subjects under 35 years of age and above 75 years of age are at higher risk of engaging in suicide-related behaviours. Occupational problems and unemployment also contribute to elevated levels of suicidality. Adversities in personal history and acute stressors, such as experiencing sexual or physical abuse, parental loss in childhood, bereavement, breaking the law/criminal conviction and financial disasters, are important precipitants of suicidality as well. Some personality attributes, for instance impulsive/aggressive traits, hopelessness and pessimism, also increase the risk of suicide. Certain types of affective temperament (first and foremost cyclothymic) have also been demonstrated to be associated with more frequent suicidal behaviour in BD. A family history of suicide acts and/or major mood disorders is also a strong risk factor for suicide in subjects with BD.
Some results also suggest that living in geographical locations where there are large differences in solar insolation between winter and summer (i.e., near the poles) may be associated with increased risks of attempted suicide in patients with BD-I [2,7,8,[10][11][12]15,17,19,[25][26][27][28][29][30][31][32][33][34].
Protective Factors of Suicide in Bipolar Disorder
In contrast to the several risk factors for suicide in BD discussed above, only a few protective factors have been identified so far [2]. For instance, good family and social support, parenthood and the use of adaptive coping strategies seem to have some protective effect. Furthermore, a strong perceived meaning of life and a hyperthymic affective temperament are also protective factors [2,10,24,29]. A possible protective role of religiosity has been suggested, but results are somewhat inconclusive [2,26,[35][36][37]. Last but not least, it is important to note that treatment (and even more so a good response to treatment) is protective against suicide in BD (see also the section "Suicide prevention in bipolar disorder"). In consonance with the fact that treatment may decrease heightened suicidality, it is not surprising that the majority of suicide victims are untreated affective disorder patients [8][9][10][11]13,38,39].
Suicide Prevention in Bipolar Disorder
From a pharmacological perspective, lithium seems to possess the greatest suicide-preventive potential in patients with BD. Intriguingly, the suicide-protective effect of lithium is not confined to bipolar patients, as it has also been demonstrated among patients with MDD (this is not surprising since, as discussed previously, a considerable proportion of "unipolar" MDD patients have subthreshold bipolar features) [5,8,15,[40][41][42]. Overall, compared to placebo, lithium appears to decrease the risk of suicide by more than 60% in mood disorders [8,40,42]. Some results suggest that lithium is protective against suicide, albeit to a lesser degree, even in those BD patients who respond only moderately or poorly to its phase-prophylactic effect. This finding suggests that in the case of lithium non-response in a patient at high risk of suicide, instead of switching lithium to another mood stabilizer, the clinician should retain lithium (even at a lower dose) and combine it with another mood stabilizer [1,41].
A solid suicide-protective effect of administering anticonvulsant-type mood stabilizers (e.g., valproic acid, carbamazepine, lamotrigine) to BD patients has not been proven so far. On the other hand, the FDA's concern about a potential increased risk of suicidality associated with anticonvulsants seems not to apply to patients with BD (i.e., in this population the use of these agents is not associated with increased levels of suicidality). According to our current knowledge, with regard to suicide prevention lithium is superior to these agents [2,8,15,41,43,44].
The role of antidepressants (ADs) in suicide prevention in individuals with BD seems to be negligible, and, in fact, concerns have been raised that administration of ADs may increase suicidality in BD. It is remarkable that findings are also inconsistent regarding the ability of ADs to prevent suicides in patients with MDD. AD monotherapy should be avoided in BD [2,8,15,41].
Considering their increasing use in BD for instance as maintenance treatment, it is justifiable to ask whether (atypical) antipsychotics have any beneficial effects on suicidal behaviour in BD. Unfortunately, there are no high-quality data to answer this question at present, so further studies should elucidate whether treatment with antipsychotics has any benefits in this respect [2,8,15,41].
Ketamine as a possible antidepressant agent has mainly been tested in patients with MDD, and only a few studies have been conducted among patients with bipolar depression. According to the results of these small proof-of-concept investigations, ketamine shows similar antidepressant efficacy in bipolar and unipolar depression. In line with its possible efficacy, ketamine is recommended by the clinical guideline of the International College of Neuropsychopharmacology (CINP) for the treatment of bipolar depression, but only as a fourth-line agent and in combination with a mood stabilizer. Similarly, until now the antisuicidal activity of ketamine has been assessed mainly in MDD patients, and only a small number of investigations have been conducted in BD patients. These have reported mainly positive outcomes, but further studies are needed to reveal whether ketamine has a similar antisuicidal effect in BD as in MDD [45][46][47][48][49][50][51].
It is well known that electroconvulsive therapy (ECT) shows similar efficacy in the treatment of depressive episodes in MDD and BD (and some studies have even found it more effective against bipolar than unipolar depression). In line with its antidepressant effects, ECT is also considered an effective antisuicidal treatment modality, and it has recently been demonstrated to be superior in this regard to pharmacological treatments in both unipolar and bipolar depression (while its antisuicidal efficacy is comparable to that of pharmacological treatments in bipolar mixed states and mania) [2,8,41,52,53].
Summary and Clinical Implications
BD is a relatively common psychiatric disorder that is associated with increased mortality due to both natural and unnatural causes, and the risk of suicide is highly elevated in this patient population. Because of this, a thorough assessment of suicide risk should take place at every clinical visit. This clinical assessment should include, inter alia, a comprehensive examination of the mental state, and inquiry about the existence and features of current suicidal intent (e.g., duration and intensity), the methods intended to be used, access to means (e.g., weapons), and compliance with prescribed medications. In addition, it is essential to gain information about previous suicidality; whenever possible, hetero-anamnestic data should be gathered as well. The management of suicidal behaviour in patients with BD represents a clinical challenge. Appropriate long-term treatment of the disorder seems to be associated with a reduction of suicidality. Furthermore, in acutely suicidal patients the removal of access to obvious means of suicide is essential and, in severe cases, hospitalization may be justified as well. Prevention strategies should include the provision of psychoeducation (for example, via information leaflets and/or by members of the health care staff) to the patients, as well as to relatives and friends, so that they become able to recognize the warning signs of suicidal behaviour, are aware of the risk periods and the importance of adherence to treatment, avoid isolation, and call for help in emergency situations. A written list of sources of support available during a suicidal crisis may also be helpful [2,10,15,59].
Conflicts of Interest:
The authors declare no conflict of interest. | 2019-07-27T13:05:05.927Z | 2019-07-24T00:00:00.000 | {
"year": 2019,
"sha1": "7a32105d6dd101dac9f123b5e421e0931c106f28",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1648-9144/55/8/403/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "38f1101e3351572ee7e8fefc5de48c0bfc37363d",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
110701921 | pes2o/s2orc | v3-fos-license | Determination of Surface Qualities on Inclined Surface Machining with Acoustic Sound Pressure
Die parts used in the automotive and aviation industries have complicated surfaces that require multi-axis machining. In the machining of inclined surfaces with ball-end milling, the correctness and accuracy of the process are of great significance. In this study, the Acoustic Sound Pressure (ASP) generated during the machining of a workpiece at a vertical machining centre has been measured. The experiments have been conducted with cutting velocity, feed rate and step over parameters varied over different surface forms and different cutter path strategies. The aim of this study is therefore to understand the relationships between the generated sound signals and surface roughness in the machining of inclined concave and convex surfaces. In the experiments, the workpiece material EN X40CrMoV5-1 hot work tool steel, which is commonly used in the die industry, was chosen. Ball end mills with two indexable inserts with three different coatings, TiC, TiN, and TiAlN, were used. The results show that the value of surface roughness rises with the value of acoustic sound pressure, and that surface roughness can be estimated from the acoustic sound pressure level.
INTRODUCTION
Machining is one of the most important methods in manufacturing [1]. To change the shape of a workpiece to a desired geometrical shape, it is necessary to use an appropriate machine tool and cutting tools to obtain the required dimensions and surface quality [2]. One of the most important factors for increasing quality in manufacturing processes is controlling the surface quality of the product. Surface quality control is a costly process and a difficult task for parts on the production line. The time devoted to quality control and its cost can be minimized with the help of prediction models and systems; therefore, real-time model-based quality control is performed by monitoring measurable process quantities [3]. The purpose of monitoring machining operations is generally to prevent undesired machining consequences such as chip formation and chip shape classification, tool wear, dimensional tolerances, surface texture (roughness and waviness), and tool deflection [4]. Researchers have been working on online monitoring with video-based approaches to screen tool working conditions [5]. However, such systems are both difficult and costly, so their implementation in industry is almost impossible. Nevertheless, when the costs of these systems are reduced, it becomes feasible to use them for many measurements such as tool wear, cutting force, Acoustic Emission (AE) and vibrations [6]. Ghosh et al. [7] focused on the prediction of tool wear in CNC milling using sensors integrated with neural networks. In their study, they found that the average flank wear of the main cutting edge could be predicted from signals such as cutting force, cutting tool vibration, and sound pressure level obtained from the machining region. Marinescu and Axinte [8] analyzed the efficiency of emission signals for determining damage to the tool and workpiece in milling operations; at the end of the study, tool damage was identified from the emission signals. Marinescu and Axinte [9] also focused on time-frequency acoustic emission monitoring to describe surface defects on a workpiece in milling with more than one cutting edge. This was carried out with new methods for supervising cutting processes with multiple teeth cutting simultaneously, i.e. milling, by using AE signals backed up by force data. By means of this work, the researchers took the signals of all simultaneously engaged cutting edges into consideration. The results showed for the first time that identification of milling conditions (i.e. cutting with one tooth versus two or three teeth) is possible using only the AE signal in the time-frequency domain; additionally, surface deformations related to the wear of cutting edges can be determined. Rivero et al. [10] evaluated the suitability of a tool wear monitoring system based on machine tool internal signals. The sensor data from internal signals were compared and analyzed, assessing the deviation of representative variables in the time and frequency domains; as a result, tool wear was estimated. Parallel to these studies, Wilcox et al.
[11] worked on the use of cutting force and AE signals for the monitoring of tool insert geometry during rough face milling. In their studies, they simulated different forms of naturally occurring wear such as crater, notch and flank wear, and local changes in rake angle and edge breakdown. Weingaertner et al. [12] evaluated the influence of high speed end milling dynamic stability through audio signal measurements, both experimentally and analytically. In that study, the stability evaluation was based on the workpiece surface finish and on audio signals measured with a unidirectional microphone; the experimental and analytical results were found to be very close to one another. Tekiner and Yeşilyurt [13] studied the cutting parameters depending on process sound during turning of AISI 304 austenitic stainless steel. In their study, the best cutting speed and feed rate values were determined according to flank wear, built-up edge, chip form, surface roughness of the machined samples and machine tool power consumption. In addition to the above mentioned studies, Salgado and Alonso [14] focused on a Tool Condition Monitoring System (TCMS) for on-line tool wear monitoring in turning; the monitoring signals were feed motor current and sound, and tool wear was determined by the TCMS. Ravindra et al. [15] worked on acoustic emissions for tool condition monitoring in metal cutting. Moreover, Haber et al. [16] studied tool-wear monitoring in a high-speed machining process on the basis of the analysis of different signals' signatures in the time and frequency domains. Their analyses confirmed the relevance of cutting-force and vibration signal signatures for tool-wear monitoring in high-speed machining, and tool wear was assessed as a result. Quadro and Branco [17] carried out an analysis of acoustic emission during drilling tests, using AISI D3 steel and high speed steel drills with TiN coating. In their measurements, profilometry and light microscopy were used to characterize and quantify the wear on the drills' cutting edges along the tests. In another study, Guo and Ammula [18] developed a real-time acoustic emission monitoring system to investigate the sensitivity of broad AE signal parameters, including RMS, frequency, amplitude, and count rate, to white layer formation and the corresponding surface finish and tool wear; in this way, tool wear was observed in real time. Furthermore, Asilturk et al. [19] conducted a study on regression modeling of surface roughness dependent on cutting parameters, vibration and acoustic emission. First-degree, second-degree and logarithmic multiple regressions were used; the feed rate was found to be the most effective parameter for surface roughness, and better results were attained with the second-order regression model. Horvat et al.
[20] studied the evaluation and monitoring of the gas metal arc welding process using an audible sound signal; in this way, the welding process was assessed in terms of robustness and quality. In addition, a new algorithm based on the measured welding current was established for calculating the sound emitted during the welding process, and the results of experimental and theoretical measurements were found to be in good agreement. Kek and Grum [21] analysed acoustic emission (AE) signals obtained during laser cutting of a steel plate. The acoustic emission signals in the plate were measured with contact PZT sensors, and continuous AE signals were captured during the laser cutting process due to the action of the cutting gas. Their research demonstrated that AE signals are an important indicator of laser cut quality. Studies related to 3D machining [22] and tool life [23] have also been reported.
The previous studies focused on acoustic emission, tool wear and deformation, vibration stability and best cutting parameters. In this study, experiments were carried out to understand the relationships between the generated sound signals and surface roughness in the machining of inclined concave and convex surfaces.
Cutter Path Styles
It has been experimentally shown that the right choice of tool paths, including different cutting movements, affects production time, the condition of the machined surfaces and cost [24]. Therefore, in these experimental studies, contouring and ramping tool path styles are used to produce inclined surfaces. In the machining of free-form and inclined surfaces, the tool performs ramping and contouring movements. Accordingly, for the implementation of up milling and down milling strategies in the machining of inclined surfaces, contouring and ramping are the natural choices of tool path style. In contouring operations, the tool moves parallel to the part axis; as a consequence, the chip is easily evacuated from the cutting zone. In ramping operations, the effective diameter of the ball end mill is clearly manifested, and its effects on the responses are easily observed. In the experiments, 40×30 mm islands on a 220×135×50 mm block were machined.
In contouring tool path styles, the cutter scans the inclined surface with lines parallel to the surface radius (Fig. 1a). On the other hand, in ramping tool path styles, the cutter scans the inclined surface with lines perpendicular to the surface radius (Fig. 1b). In Fig. 1, the feed rate and spindle speed are depicted by Vf and W, respectively. In both tool path styles, step over values are constant. After machining each step, the cutter moves one step sideways, returns to the beginning level of that step, and then processes the next step. Under these conditions, four tool path styles were generated as shown in Fig. 2. In Fig. 2, the form radius of the workpiece, the milling position angle, the nominal depth of cut and the step over are denoted by R, θ, a and fp, respectively. The machinability of the materials used in the die, automotive and aerospace industries is of great importance for the surface roughness of the produced workpieces. EN X40CrMoV5-1 (Böhler W302) hot work tool steel, which is commonly used in these industries, was chosen for the study. The chemical composition of the material is presented in Table 1. The material has a hardness of 22 to 25 HRC and a yield strength of 1650 N/mm². When it is heat treated at 1020 to 1080 °C for 15 to 30 minutes and cooled in oil, its hardness rises to 50 to 54 HRC. Pre-hardening is not applied before machining.
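As a rough illustration of the pass pattern just described (one-way passes separated by a constant step over), the following Python sketch generates the pass sequence over an island; the dimensions follow the 40×30 mm island of the experiments, while the path handling itself is a simplified assumption rather than the CAM output used by the authors.

```python
import numpy as np

def scan_passes(width=40.0, length=30.0, step_over=2.0):
    """Illustrative one-way pass pattern: the cutter machines one line,
    returns to the start level of that step, moves sideways by step_over,
    and machines the next line. Real CAM paths also manage Z levels,
    lead-ins and retracts, which are omitted here."""
    passes = []
    n_passes = int(width / step_over) + 1
    for i in range(n_passes):
        x = i * step_over
        passes.append(((x, 0.0), (x, length)))  # (start point, end point)
    return passes

for start, end in scan_passes()[:3]:
    print(f"cut from {start} to {end}")
```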
After the machining operations are completed, the material is subjected to heat treatment. As the cutter body, a CoroMill (Sandvik Company) indexable end mill (R216-16A20-045) with a Ø16 mm cylindrical shank, two flutes and a 30° helix angle was used. Moreover, ball end inserts (Sandvik Company) coated with TiC, TiN and TiAlN (R216-16 03 M-M H13A) were used; each insert has a coating thickness of 3 µm. In addition to the tool path styles, three variable parameters were used: cutting velocity (Vc), feed rate (Vf) and step over (fp). The cutting tool step over directly affects the tracks left on the surface by the cutter, the load on the cutter and the processing time [24].
The step over value was chosen as 5% of the tool diameter, and this value was set as the lower level of fp. Cutting velocity and feed rate values were selected with reference to the catalogue values of the Sandvik Company. These reference values were determined by carrying out a number of experiments for each tool coating and by taking into account the common industrial use of the material (Table 2).
Experimental Apparatus and the Implementation of Experiment
Semi-finishing operations were used in the experiments, and coolant was not used because it forms a layer between the cutting edge and the workpiece, and this layer causes shearing in small slices on inclined forms. The experiments were carried out on a vertical machining center, a John Ford VMC 550 CNC, with a 12000 rev/min spindle and 12 kW motor power. The stages of the experimental system are depicted in Fig. 3. In pre-machining, tool wear did not occur over the machining of several blocks; thus, a different set of cutting inserts was used for every five blocks.
After completing the pre-machining, a semi-finishing operation was carried out using different cutter path styles and parameters on the workpiece. For the combinations of cutting parameters, the L16 standard orthogonal array was chosen, with four levels assigned to each parameter. For the concave surface, 48 experiments were conducted with TiC, TiN, and TiAlN coatings (16 experiments each).
Similarly, 48 experiments were conducted with TiC, TiN, and TiAlN coatings (16 experiments each) for the convex surface.
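For readers unfamiliar with orthogonal-array designs, the following sketch shows how an L16 array for three four-level factors can be constructed. This is a standard strength-2 orthogonal array, not necessarily the exact array used by the authors; it covers all 16 level pairs of any two factors exactly once, instead of the 4³ = 64 runs of a full factorial.

```python
from itertools import product

# Build a strength-2 orthogonal array OA(16, 3, 4, 2): rows (i, j, (i+j) mod 4).
# Level indices 0..3; the physical values would come from Table 2.
runs = [(i, j, (i + j) % 4) for i, j in product(range(4), repeat=2)]

assert len(runs) == 16
# Any pair of columns contains each of the 16 level pairs exactly once.
for c1, c2 in [(0, 1), (0, 2), (1, 2)]:
    pairs = {(r[c1], r[c2]) for r in runs}
    assert len(pairs) == 16

for k, (vc, vf, fp) in enumerate(runs, 1):
    print(f"run {k:2d}: Vc level {vc}, Vf level {vf}, fp level {fp}")
```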
Measurement of Machining Sound Pressure Level
Tool defects can be observed by analyzing the sounds generated during machining. In this study, the sound sensor was placed in the closest possible position to the cutting tool. The acoustic sound pressures were collected by a sound sensor (microphone) at a sampling interval of 100 ms, in mV units. An algorithm written in MATLAB was used for digitizing and collecting the sound pressures from the sound sensor. To calibrate the sound sensor, the same sound values were measured simultaneously with a CEM DT-8850 sound measurement device, and the real sound pressure values were determined by evaluating the differences found via the algorithm. The signals recorded in mV were analyzed in the time domain and their arithmetic mean was calculated. The signals were then transformed into ASP values in dB. The transformation was made with Eq. (1):

ASP = 20 log10(V_output / V_o) [dB], (1)
where V_output is the arithmetic mean of the collected signals in volts and V_o is the lowest signal in volts recorded in the same experiment. Some significant acoustic sound pressures for the related experiments are given below. In Fig. 4, experiment 3 has the lowest raw sound signal value of −0.5903 mV, while experiment 9 has a lowest raw sound signal value of −1 mV. The arithmetic means of the collected signals for the related experiments are given in Table 3. In the experiments, a MahrSurf PS1 surface roughness measurement device was used. The measurements were performed in a direction normal to the cutting tool paths, at an angle of 45° to the standing position, in order to take the effective cutting diameter into consideration in every sample. The average surface roughness (Ra) is represented as in Eq. (2):
Table 3. Acoustic sound pressure values [mV] of sample experiments
Ra = (1/L) ∫₀ᴸ |Y(x)| dx, (2)

where Ra represents the mean deviation from the average line, Y the ordinate of the profile curve, and L the measurement length. In the experimental measurements, every measurement was carried out three times and the average values were taken into account.
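The following is a minimal numerical sketch of the two quantities defined in Eqs. (1) and (2), assuming a sampled microphone signal in mV and a sampled roughness profile in µm. The arrays and the use of absolute signal values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def asp_db(signal_mv):
    """Acoustic sound pressure level per Eq. (1): 20*log10(V_output/V_o),
    with V_output the mean signal magnitude and V_o the smallest nonzero
    recorded magnitude (taking absolute values is a simplification)."""
    mag = np.abs(signal_mv)
    v_output = np.mean(mag)
    v_o = np.min(mag[mag > 0])
    return 20.0 * np.log10(v_output / v_o)

def ra(profile_um):
    """Average surface roughness per Eq. (2): mean absolute deviation of
    the profile ordinate Y from its mean line."""
    y = profile_um - np.mean(profile_um)
    return np.mean(np.abs(y))

rng = np.random.default_rng(0)
mic = rng.normal(0.0, 0.8, size=1000)   # fake sampled microphone signal [mV]
prof = rng.normal(0.0, 2.0, size=500)   # fake roughness profile [um]
print(f"ASP = {asp_db(mic):.1f} dB, Ra = {ra(prof):.2f} um")
```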
EXPERIMENTAL RESULTS
One of the most important criteria for determining surface quality in cutting is surface roughness. Acoustic emission can be employed to predict the surface roughness of machined surfaces [25]. In other words, a relation between surface roughness values and the variation of ASP levels can be established. The changes in sound pressure levels have been found suitable for determining the average surface roughness.
For concave surface forms, the ASP [dB] and surface roughness [µm] values acquired from the experiments for the related coatings are shown in Fig. 5. When the acquired ASP and Ra graphs for the TiC, TiN and TiAlN coatings are examined, it is observed that surface roughness values increase as acoustic sound pressure values increase, and decrease as acoustic sound pressure values decrease.
According to Fig. 5, for the cutter with TiC coating, the largest ASP of 105.2 dB corresponds to the largest Ra value of 5.09 µm (experiment 4), while the lowest ASP of 83.1 dB is observed at an Ra of 1.51 µm (experiment 1). For the cutter with TiN coating, the largest ASP of 104.3 dB corresponds to the largest Ra value of 5.69 µm (experiment 13), and the lowest ASP of 86.1 dB is observed at an Ra of 1.50 µm (experiment 1). Lastly, for the cutter with TiAlN coating, the largest ASP of 101.3 dB corresponds to the largest Ra value of 5.66 µm (experiment 4), while the lowest ASP of 74.1 dB is observed at an Ra of 1.68 µm (experiment 1). When the results are evaluated in terms of the parameters (Vf and fp), ASP and Ra values increase with increasing feed rate and step over. The effects of feed rate and step over on the obtained ASP and Ra values are given in Fig. 6. According to Fig. 6a, for the TiC coating, the largest values of Ra and ASP are observed at a feed rate of 318 mm/rev and a step over of 2 mm, and the lowest values at a feed rate of 223 mm/rev and a step over of 0.8 mm. In Fig. 6b, for the TiN coating, the largest values of Ra and ASP are observed at a feed rate of 414 mm/rev and a step over of 2 mm, and the lowest values at a feed rate of 318 mm/rev and a step over of 0.8 mm. Finally, in Fig. 6c, for the TiAlN coating, the largest values of Ra and ASP are detected at a feed rate of 445 mm/rev and a step over of 2 mm, and the lowest values at a feed rate of 350 mm/rev and a step over of 0.8 mm.
In this study, linear regression analysis was carried out to determine the relationship between the sound pressure level and the average surface roughness. Linear regression is represented as [26] S = β₀ + Σᵢ βᵢxᵢ + ε, where S represents the response, βᵢ the regression coefficient of the i-th predictor xᵢ, and ε a random error term presumed to follow a normal distribution with zero mean and variance σ². The standardized residual is the residual divided by its standard deviation, where the residual is the difference between the data response and the fitted response; in other words, it is the residual standardized to have standard deviation 1 [26]. According to the linear regression analysis, the relationship between sound pressure level and surface roughness is positive, linear and statistically significant for TiC, TiN, and TiAlN (R² = 0.875, 0.822 and 0.873, respectively; Fig. 7). When all cutting parameters are taken into consideration, the correlation coefficient between sound pressure level and surface roughness is better than R² = 0.8, which shows that it is beneficial to monitor the sound pressure level during manufacturing.

Similarly, for convex surface forms, the ASP [dB] and surface roughness [µm] values acquired from the experiments for the related coatings are shown in Fig. 8. When the acquired ASP and Ra graphs for the TiC, TiN, and TiAlN coatings in Fig. 8 are examined, it is observed that surface roughness values increase with increasing acoustic sound pressure, as for the concave surface type. According to Fig. 8, for the cutter with TiC coating, the largest ASP of 99.9 dB corresponds to the largest Ra value of 4.95 µm (experiment 13), while the lowest ASP of 85.3 dB is observed at an Ra of 1.25 µm (experiment 16). For the cutter with TiN coating, the largest ASP of 113.4 dB corresponds to the largest Ra value of 5.01 µm (experiment 13), and the lowest ASP of 84 dB is observed at an Ra of 1.59 µm (experiment 16). Lastly, for the cutter with TiAlN coating, the largest ASP of 104.1 dB corresponds to the largest Ra value of 5.16 µm (experiment 4), while the lowest ASP of 83.7 dB is observed at an Ra of 1.51 µm (experiment 16). Upon evaluation of the parameters (Vf and fp), ASP and Ra values increase with increasing feed rate and step over, as for the concave surface type. The effects of feed rate and step over on the acquired ASP and Ra values are given in Fig. 9. As seen in Fig. 9a, for the TiC coating, the largest values of Ra and ASP are observed at a feed rate of 318 mm/rev and a step over of 2 mm, and the lowest values at a feed rate of 223 mm/rev and a step over of 0.8 mm. In Fig. 9b, for the TiN coating, the largest values of Ra and ASP are seen at a feed rate of 414 mm/rev and a step over of 2 mm, and the lowest values at a feed rate of 318 mm/rev and a step over of 0.8 mm. Finally, in Fig. 9c, for the TiAlN coating, the largest values of Ra and ASP are detected at a feed rate of 445 mm/rev and a step over of 2 mm, and the lowest values at a feed rate of 350 mm/rev and a step over of 0.8 mm.

According to the linear regression analysis, the relationship between sound pressure level and surface roughness for the convex surface type is also positive, linear and statistically significant for TiC, TiN, and TiAlN (R² = 0.888, 0.899 and 0.916, respectively; Fig. 10). Considering all cutting parameters, the correlation coefficient between surface roughness and sound pressure level is again better than R² = 0.8, which again shows that it is beneficial to monitor the sound pressure level during manufacturing.
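A minimal sketch of the regression step, fitting Ra as a linear function of ASP and reporting R²; the data arrays below are made-up placeholders, not the measured values.

```python
import numpy as np

# Placeholder measurements: ASP [dB] and Ra [um] for one coating.
asp = np.array([83.1, 88.4, 92.0, 95.7, 99.2, 102.6, 105.2])
ra = np.array([1.51, 2.10, 2.70, 3.35, 4.02, 4.60, 5.09])

# Least-squares fit Ra = b0 + b1 * ASP.
A = np.column_stack([np.ones_like(asp), asp])
(b0, b1), *_ = np.linalg.lstsq(A, ra, rcond=None)

# Coefficient of determination R^2.
pred = b0 + b1 * asp
ss_res = np.sum((ra - pred) ** 2)
ss_tot = np.sum((ra - ra.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"Ra = {b0:.2f} + {b1:.3f}*ASP, R^2 = {r2:.3f}")
```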
CONCLUSION
In this study, contouring and ramping cutting path styles based on down milling and up milling strategies were generated on concave and convex surfaces in semi-finishing machining operations. In the experiments, tools with different coatings were employed, and the relationship between surface roughness and cutting sound pressure level was observed at different levels of cutting velocity, feed rate and step over. Indexable-insert ball end mills with TiC, TiN, and TiAlN coatings were used for machining the formed inclined surfaces. As a result:
• Previous studies focused on the estimation of tool wear, vibration stability and best cutting parameters by means of acoustic emission. In this study, experiments on EN X40CrMoV5-1 material were carried out in order to understand the relationships between the generated sound signals and surface roughness in the machining of inclined concave and convex surfaces; the previous studies did not investigate these experimental cases of inclined surfaces.
• The smaller ASP levels obtained in concave surface machining (Fig. 5) in comparison to convex surface machining (Fig. 8) might be explained by the surface form, since a convex surface allows the microphone to gather the sounds with less obstruction.
• In convex surface machining, considering all coatings, the greater ASP levels of 111.9 and 113.4 dB for the TiN coating were seen in experiments 10 and 13, respectively, as depicted in Fig. 8. The reason is that the ball end mill has less contact with the workpiece during machining, which produces a chatter mechanism. The surface roughness values belonging to those experiments are greater than those of the other coatings, as seen in Fig. 8.
• The acoustic sound pressure level decreased on the concave surface because the tool worked inside the workpiece and contacted the workpiece with a larger cutting edge on the inner surface.
• After evaluating the sound pressure and surface roughness in terms of the feed rate and step over parameters, an increase in these parameters raised ASP and Ra values independently of tool coating and surface shape.
• As for the effects of tool coatings on Ra, the TiAlN coating displayed a good performance at low cutting velocity, and increases in feed rate and step over raised Ra values independently of tool coating.
• The greater ASP and Ra values are formed for the concave surface type in the up milling strategy. The reason for that can be explained by the chatter mechanism: chatter occurred because of the type of surface, the longer cutting tool, and the cutting tool's movement from less chip volume to more chip volume.
• ASP and Ra values for the convex surface type are higher in the down milling strategy than in up milling. The overlap of the cutting edges of the tool on the workpiece increased both sound pressure and surface roughness, because up milling was used and the machined part was convex.
• It has been observed that in contouring, the ASP, which influences Ra, decreases with increasing milling position angle.
• In ramping, the ASP, which again influences Ra, is hardly affected by the milling position angle.
Fig. 1. a) Contouring and b) ramping cutter path styles for inclined concave and convex surfaces.
Fig. 3. The stages of the experimental system.
Fig. 5. Ra versus ASP for a) TiC, b) TiN, and c) TiAlN coatings on the concave surface.
Fig. 6. Ra versus ASP for a) TiC, b) TiN, and c) TiAlN according to feed rate and step over levels on the concave surface.
Fig. 7. ASP as a function of Ra for a) TiC, b) TiN, and c) TiAlN coatings on the concave surface.
Fig. 8. Ra versus ASP for a) TiC, b) TiN, and c) TiAlN coatings on the convex surface.
Fig. 9. Ra versus ASP for a) TiC, b) TiN, and c) TiAlN according to feed rate and step over levels on the convex surface.
Fig. 10. ASP as a function of surface roughness for a) TiC, b) TiN, and c) TiAlN coatings on the convex surface.
Table 2. Assignment of the levels to factors. | 2019-01-02T04:55:52.642Z | 2012-10-15T00:00:00.000 | {
"year": 2012,
"sha1": "263c67c1d6c7677cc986f3120f1d403a35bc40ec",
"oa_license": "CCBY",
"oa_url": "https://www.sv-jme.eu/?id=2936&ns_articles_pdf=/ns_articles/files/ojs/352/public/352-2765-1-PB.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "263c67c1d6c7677cc986f3120f1d403a35bc40ec",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
59292010 | pes2o/s2orc | v3-fos-license | A minimal-length approach unifies rigidity in under-constrained materials
We present a novel approach to understand geometric-incompatibility-induced rigidity in under-constrained materials, including sub-isostatic 2D spring networks and 2D and 3D vertex models for dense biological tissues. We show that in all these models a geometric criterion, represented by a minimal length $\bar\ell_\mathrm{min}$, determines the onset of prestresses and rigidity. This allows us to predict not only the correct scalings for the elastic material properties, but also the precise {\em magnitudes} for bulk modulus and shear modulus discontinuities at the rigidity transition as well as the magnitude of the Poynting effect. We also predict from first principles that the ratio of the excess shear modulus to the shear stress should be inversely proportional to the critical strain with a prefactor of three, and propose that this factor of three is a general hallmark of geometrically induced rigidity in under-constrained materials and could be used to distinguish this effect from nonlinear mechanics of single components in experiments. Lastly, our results may lay important foundations for ways to estimate $\bar\ell_\mathrm{min}$ from measurements of local geometric structure, and thus help develop methods to characterize large-scale mechanical properties from imaging data.
This manuscript was compiled on March 13, 2022.
Keywords: biopolymer networks | vertex model | constraint counting | underconstrained | minimal length | rigidity | strain stiffening

A material's rigidity is intimately related to its geometry.
In materials that crystallize, rigidity occurs when the constituent parts organize on a lattice. In contrast, granular systems can rigidify while remaining disordered, and arguments developed by Maxwell (1) accurately predict that such a material rigidifies at an isostatic point where the number of constraints on particle motion equals the number of degrees of freedom.
Further work by Calladine (2) highlighted the important role of states of self stress, demonstrating that an index theorem relates rigidity to the total number of constraints, degrees of freedom, and self stresses. Recent work has extended these ideas in both ordered and disordered systems to design materials with geometries that permit topologically protected floppy modes (3)(4)(5).
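To make the counting concrete, here is a minimal sketch (ours, not from the paper) that checks Calladine's index theorem, N_floppy − N_selfstress = N_dof − N_constraints, for a small 2D spring network by computing the rank of its rigidity matrix; the example network is invented for illustration.

```python
import numpy as np

# A small 2D spring network: 4 nodes, 4 springs (under-constrained square).
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
springs = [(0, 1), (1, 2), (2, 3), (3, 0)]

# Rigidity matrix R: one row per spring, 2N columns (x, y per node);
# row s holds d(length_s)/d(node coordinates).
R = np.zeros((len(springs), 2 * len(nodes)))
for s, (i, j) in enumerate(springs):
    u = nodes[j] - nodes[i]
    u /= np.linalg.norm(u)
    R[s, 2*i:2*i+2] = -u
    R[s, 2*j:2*j+2] = u

rank = np.linalg.matrix_rank(R)
n_dof, n_con = 2 * len(nodes), len(springs)
n_floppy = n_dof - rank       # zero modes (incl. 2 translations + 1 rotation)
n_selfstress = n_con - rank   # states of self stress
# Calladine's index theorem:
assert n_floppy - n_selfstress == n_dof - n_con
print(f"floppy modes: {n_floppy}, self stresses: {n_selfstress}")
```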
A third way to create rigidity is through geometric incompatibility, which we illustrate by a guitar string. Before it is tightened, the floppy string is under-constrained, with fewer constraints than degrees of freedom, and there are many ways to deform the string at no energetic cost. As the distance between the two ends is increased above the rest length of the string, this geometric incompatibility together with the accompanying creation of a self-stress rigidifies the system (3,6). Any deformation will be associated with an energetic cost, leading to finite vibrational frequencies. This same mechanism has been proposed to be important for the elasticity of rubbers and gels (6) as well as biological cells (7).
In particular, it has been shown to rigidify under-constrained, disordered fiber networks under applied strain, with applications in biopolymer networks (8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18)(19)(20)(21)(22). Just as with the guitar string, rigidity arises when the size and shape of the box introduce external constraints that are incompatible with the local segments of the network attaining their desired rest lengths. For example, when applying external shear, fiber networks strongly rigidify at some critical shear strain γ* (9, 14, 16, 18-20, 22, 23), although it remains controversial whether the onset of rigidity is continuous (14,15,20,24) or discontinuous (18) in the limit without fiber bending rigidity. Similarly, fiber networks can also be rigidified by isotropic dilation (10), and the interaction between isotropic and shear elasticity in these systems is characterized by an anomalous negative Poynting effect (19,21,(25)(26)(27), i.e. the development of a tensile normal stress in response to externally applied simple shear. However, it has as yet remained unclear how all of these observations and their critical scaling behavior (9,16,18,20,28) are quantitatively connected to the underlying geometric structure of the network. Moreover, while previous works have remarked that several features of stiffening in fiber networks are surprisingly independent of model details (13), it has remained elusive whether there are generic underlying mechanisms.
Significance Statement
What do a guitar string and a balloon have in common? They are both floppy unless rigidified by geometrically induced prestresses. The same kind of rigidity transition in under-constrained materials has more recently been discussed in the context of disordered biopolymer networks and models for biological tissues. Here, we propose a general approach to quantitatively describe such transitions. Based on a minimal length function, which scales linearly with intrinsic fluctuations in the system and quadratically with shear strain, we make concrete predictions about the elastic response of these materials, which we verify numerically and which are consistent with previous experiments. Finally, our approach may help develop methods that connect macroscopic elastic properties of disordered materials to their microscopic structure.

Table 1. Models discussed in this article. For the spring networks, the values indicated apply to a system size of 2N/z = 1024 nodes, and for all cellular models values apply to a system size of N = 512 cells. For each model, we indicate the respective dimension d of the "length springs" and the spatial dimension D, as well as the numbers of degrees of freedom (dof) and constraints (i.e. length + area springs). The provided values for the transition point …

Very recently, some of us showed that the 3D Voronoi model exhibits a rigidity transition driven by geometric incompatibility (46), similar to fiber networks. This has also been demonstrated for the 2D vertex model, using a continuum elasticity approach based on a local reference metric (42). For the case of the 3D Voronoi model, we found that there was a special relationship between properties of the network geometry and the location of the rigidity transition, largely independent of the realization of the disorder (46).
Here, we show that such a relationship between rigidity and geometric structure is generic to a broad class of under-constrained materials, including spring networks and vertex/Voronoi models in different dimensions (Table 1, Figure 1). We first demonstrate that all these models display the same generic behavior in response to isotropic dilation. Understanding key geometric structural properties of these systems allows us to predict the precise value of the discontinuity in the bulk modulus at the transition point. We then extend our approach to include shear deformations, which allows us to analytically predict a discontinuity in the shear modulus at the onset of rigidity. Moreover, we can make precise quantitative predictions for the critical shear strain γ*, the scaling behavior of the shear modulus beyond γ*, the Poynting effect, and several related critical exponents. In each case, we numerically demonstrate the validity of our approach for the case of spring networks.
We also compare our predictions to previously published experimental data, and highlight some new predictions, including a prefactor of three that we expect to find generically in a scaling collapse of the shear modulus, shear stress, and critical strain.
We achieve these results by connecting macroscopic mechanical network properties to underlying geometric properties. In the case of the guitar string, the string first becomes taut when the distance between the two ends attains a critical value $\ell_0^*$ equal to the intrinsic length of the string, so that the boundary conditions for the string are geometrically incompatible with the intrinsic geometry of the string. As the string is stretched, one can predict its pitch (or equivalently the effective elastic modulus) by quantifying the actual length of the string relative to its intrinsic length. While this is straightforward in the one-dimensional geometry of a string, we are interested in understanding whether a similar geometric principle, based on the average spring length $\bar\ell$, governs the behavior near the onset of rigidity in disordered networks in 2D and 3D.
Here, we formulate a geometric compatibility criterion in terms of the constrained minimization of the average spring length $\bar\ell_\mathrm{min}$ in a disordered network. Just as for the guitar string, this length $\bar\ell_\mathrm{min}$ attains a critical value $\ell_0^*$ at the onset of rigidity. As the system is strained beyond the rigidity transition, we demonstrate analytically and numerically that the geometry constrains $\bar\ell_\mathrm{min}$ to vary in a simple way with two observables: the fluctuations of spring lengths $\sigma_\ell$, and the shear strain $\gamma$. Because $\bar\ell_\mathrm{min}$ is minimized over the whole network, it is a collective geometric property of the network.
Just as with the guitar string, the description of the geometry given by $\bar\ell_\mathrm{min}$ then allows us to calculate many features of the elastic response, including the bulk and shear moduli. This in turn provides a general basis to analytically understand the strain-stiffening responses of under-constrained materials to both isotropic and anisotropic deformation within a common framework. Even though $\bar\ell_\mathrm{min}$ describes collective geometric effects, our work may also provide an important foundation to understand macroscopic mechanical properties from local geometric structure.
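As an illustration of how such a minimal length function could be used in practice, the following hedged sketch fits the functional form suggested above (linear in the length fluctuations and quadratic in the shear strain) to synthetic measurements; the coefficient names and the data are ours, not the paper's.

```python
import numpy as np

# Synthetic "measurements": minimal average length for several (sigma, gamma)
# states, generated from an assumed form l_min = l0s + a_s*sigma + a_g*gamma^2.
rng = np.random.default_rng(1)
sigma = rng.uniform(0.0, 0.1, 50)   # spring-length fluctuations
gamma = rng.uniform(0.0, 0.3, 50)   # shear strain
l_min = 0.95 + 0.4 * sigma + 0.5 * gamma**2 + rng.normal(0, 1e-4, 50)

# Least-squares fit of the three coefficients (l0*, a_sigma, a_gamma).
A = np.column_stack([np.ones_like(sigma), sigma, gamma**2])
coef, *_ = np.linalg.lstsq(A, l_min, rcond=None)
l0_star, a_sigma, a_gamma = coef
print(f"l0* = {l0_star:.4f}, a_sigma = {a_sigma:.3f}, a_gamma = {a_gamma:.3f}")
```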
Fig. 1 (caption excerpt): This scattering is due to insufficient energy minimization in these cases. In panels b, d, and f, shaded regions indicate the standard error of the mean.

2D spring networks consist of nodes that are connected by a total of N springs, where the average number of springs connected to a node is the coordination number z. We create networks with a defined value of z by translating jammed configurations of bidisperse disks into spring networks and then randomly pruning springs until the desired coordination number z is reached (9,27). We use harmonic springs, such that the total mechanical energy of the system is:

$e = \frac{1}{2} \sum_i (l_i - l_{0i})^2$  [1]

Here, the sum is over all springs i with length $l_i$ and rest length $l_{0i}$, which are generally different for different springs.
For convenience, we re-express Eq. (1) in terms of a mean spring rest length ℓ₀ = [(Σ_i l_{0i}²)/N]^(1/2), which we use as a control parameter acting as a common scaling factor for all spring rest lengths. This allows us to rewrite the energy as:

e = Σ_i w_i (ℓ_i − ℓ₀)²   [2]

with rescaled spring lengths ℓ_i = ℓ₀ l_i/l_{0i} and weights w_i = (l_{0i}/ℓ₀)², such that Σ_i w_i = N (for details, see Supplemental Information, section IA). In simple constraint counting arguments, each spring is treated as one constraint, and here we are interested in sub-isostatic (i.e., under-constrained, also called hypostatic) networks with z < z_c ≡ 4.
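To make this change of variables concrete, the following minimal numerical sketch (ours, not part of the original work; the random spring data are placeholders) constructs ℓ₀, the weights w_i, and the rescaled lengths ℓ_i for an arbitrary set of springs, and checks that Eq. (1) and Eq. (2) give identical energies and that the weights sum to N:

import numpy as np

def rescaled_network(l, l0):
    """Change of variables of Eq. (2): l and l0 hold the actual lengths
    l_i and rest lengths l_0i of the N springs."""
    N = len(l)
    ell0 = np.sqrt(np.sum(l0**2) / N)   # mean rest length, the control parameter
    w = (l0 / ell0)**2                  # weights w_i, which sum to N
    ell = ell0 * l / l0                 # rescaled spring lengths ell_i
    return ell0, w, ell

rng = np.random.default_rng(0)
l0 = rng.uniform(0.5, 1.5, size=1000)          # heterogeneous rest lengths
l = l0 * rng.uniform(0.95, 1.05, size=1000)    # actual lengths near rest length

ell0, w, ell = rescaled_network(l, l0)
e1 = np.sum((l - l0)**2)             # Eq. (1)
e2 = np.sum(w * (ell - ell0)**2)     # Eq. (2)
assert np.isclose(np.sum(w), len(l))
assert np.isclose(e1, e2)            # the two forms agree exactly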
The tissue models describe biological tissues as polygonal (2D) or polyhedral (3D) tilings of space. For the Voronoi models, these tilings are Voronoi tessellations and the degrees of freedom are the Voronoi centers of the cells. In contrast, in the 2D vertex model, the degrees of freedom are the positions of the vertices (i.e., the polygon corners). Forces between the cells are described by an effective energy functional. For the 2D models, the (dimensionless) energy functional is:

e = Σ_i [(p_i − p₀)² + k_A (a_i − 1)²]   [3]

Here, the sum is over all N cells i with perimeter p_i and area a_i. There are two parameters in this model: the preferred perimeter p₀ and the relative area elasticity k_A. For the 3D Voronoi model, the energy is defined analogously:

e = Σ_i [(s_i − s₀)² + k_V (v_i − 1)²]   [4]

The sum is again over all N cells i of the configuration, with cell surface area s_i and volume v_i, and the two parameters of the model are the preferred surface area s₀ and the relative volume elasticity k_V. All four of these models are under-constrained based on simple constraint counting, as is apparent from the respective numbers of degrees of freedom and constraints listed in Table 1. We stress that Calladine's constraint counting derivation (2, 3) also applies to many-particle, non-central-force interactions.
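As an illustration of how such an energy functional is evaluated, here is a small sketch (our own, with placeholder parameter values) that computes Eq. (3) for polygonal cells, obtaining perimeters directly and areas via the shoelace formula; areas are assumed to be nondimensionalized so that the mean area per cell is one:

import numpy as np

def polygon_perimeter_area(verts):
    """Perimeter and (shoelace) area of a polygon whose vertices are
    given in counterclockwise order as an array of shape (n, 2)."""
    d = np.roll(verts, -1, axis=0) - verts
    perimeter = np.sum(np.linalg.norm(d, axis=1))
    x, y = verts[:, 0], verts[:, 1]
    area = 0.5 * np.abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return perimeter, area

def vertex_model_energy(cells, p0, kA):
    """Dimensionless 2D vertex/Voronoi energy of Eq. (3):
    e = sum_i [(p_i - p0)^2 + kA (a_i - 1)^2]."""
    return sum((p - p0)**2 + kA * (a - 1.0)**2
               for p, a in (polygon_perimeter_area(c) for c in cells))

# a single unit-square cell: p = 4, a = 1, so e = (4 - 3.8)^2 = 0.04
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(vertex_model_energy([square], p0=3.8, kA=1.0))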
Throughout this article, we will often discuss all four models at once. Thus, when generally talking about "elements", we refer to springs in the spring networks and cells in the tissue models. Similarly, when talking about "lengths ℓ" (of dimension d), we refer to spring lengths ℓ in the spring networks, cell perimeters p in the 2D tissue models, and cell surface areas s in the 3D tissue model (Table 1). Finally, when talking about "areas a" (of dimension D), we refer to cell areas a in the 2D tissue models as well as cell volumes v in the 3D tissue model.
Here we study the behavior of local energy minima of all four models under periodic boundary conditions with fixed dimensionless system size N, i.e., the model is non-dimensionalized such that the average area per element is one (41, 44, 46). Under these conditions, a rigidity transition exists in all models even without area rigidity. In particular, for the 2D vertex and 3D Voronoi models, we discuss the special case k_A = 0 (and analogously k_V = 0) separately (Table 1). Moreover, the athermal 2D Voronoi model does not exhibit a rigidity transition for k_A > 0 (44), and thus we will only discuss the case k_A = 0 for this model.
Results
A. Rigidity is created by geometric incompatibility corresponding to a minimal length criterion. We start by comparing the rigidity transitions in the four different models using Figure 1, where we plot both the differential bulk modulus B and the differential shear modulus G versus the preferred length ℓ₀. In this first part, we use the preferred length ℓ₀ as a control parameter for all models. Note that because ℓ₀ is non-dimensionalized using the number density of elements, changing ℓ₀ corresponds to applying isotropic strain (i.e., a change in volume with no accompanying change in shape). Later, we will additionally include the shear strain γ as a control parameter.
In all models, we find a rigid regime (B, G > 0) for preferred lengths below the transition point ℓ₀*, and a floppy regime (B = G = 0) above it, with the transition being discontinuous in the bulk modulus and continuous in the shear modulus. For the spring networks, we find that the transition point ℓ₀* depends on the coordination number; close to the isostatic point z_c ≡ 4, it scales linearly with the distance ∆z = z_c − z to isostaticity (Figure 1b inset), as previously discussed in similar terms in (10). Something similar has also been reported for a 2D vertex model (48).
For the cellular models, we find that the transition point for the case without area rigidity, k_A = 0, is generally smaller than in the case with area rigidity, k_A > 0 (Figure 1d,f, Table 1). Moreover, our 2D vertex model transition point for k_A > 0 is somewhat higher than reported before (37). Here we used a different vertex model implementation than in (37) (Supplemental Information, section IVC), and the location of the transition in vertex models depends somewhat on the energy minimization protocol (44), a feature that is shared with other models for disordered materials (55). Also, in Figure 1d,f the averaged shear modulus always becomes zero at a higher value than the respective average transition point listed in Table 1. This is due to the distribution of transition points having a finite width (see also the finite width of the ℓ₀ regions with both zero and nonzero bulk moduli in panels c and e).
We find that in all these models, the mechanism creating the transition is the same: rigidity is created by geometric incompatibility, which is indicated by the existence of prestresses. We have already shown this for the 3D Voronoi model (46) and the 2D Voronoi model with k_A = 0 (44), while others have shown this for the ordered 2D vertex model (42). Furthermore, our data confirm that this is the case for the 2D spring networks and the k_A = 0 cases of both (disordered) 2D vertex and 3D Voronoi models (Supplemental Information, section IIA).
We find something similar for the disordered 2D vertex model for k_A > 0. Although there are special cases where prestresses appear also in the floppy regime (Supplemental Information, section IIA), to simplify our discussion here, we only consider configurations without such typically localized prestresses.
We observe that in all of these models, a geometric criterion, which we describe in terms of a minimal average length ℓ̄_min, determines the onset of prestresses. For example, we can exactly transform the spring network energy Eq. (2) into (Supplemental Information, section IA):

e = N[(ℓ̄ − ℓ₀)² + σ_ℓ²]   [5]

Here, ℓ̄ = (Σ_i w_i ℓ_i)/N and σ_ℓ² = (Σ_i w_i (ℓ_i − ℓ̄)²)/N are the weighted average and variance, respectively, of the rescaled spring lengths. This means that ℓ̄ and σ_ℓ are the average and standard deviation of the actual spring lengths l_i, each measured relative to its actual rest length l_{0i}. In particular, the standard deviation σ_ℓ vanishes whenever all springs i have the same value of the fraction l_i/l_{0i}, even though the absolute lengths l_i may differ among the springs. Moreover, importantly, the mean rest length ℓ₀ enters the definitions of ℓ̄ and σ_ℓ only via the ratios l_{0i}/ℓ₀, which characterize the relative spring length distribution. Hence, the "rescaled" geometric information contained in both ℓ̄ and σ_ℓ is a combination of the actual spring lengths and the relative rest length distribution, but is independent of the absolute mean rest length ℓ₀. According to Eq. (5), energy minimization corresponds to a simultaneous minimization with respect to |ℓ̄ − ℓ₀| and σ_ℓ. In the floppy regime we find numerically that both quantities can vanish simultaneously and thus all lengths attain their rest lengths, ℓ_i = ℓ₀ (Supplemental Information, section IIA). In contrast, in the rigid regime, |ℓ̄ − ℓ₀| and σ_ℓ cannot both vanish simultaneously, creating tensions 2(ℓ_i − ℓ₀), which are sufficient to rigidify the network. The transition point ℓ₀* is thus given by the minimal possible average length under the constraint of vanishing length fluctuations, σ_ℓ = 0. For the cellular models with k_A > 0, we analogously find that the transition point is given by the minimal cell perimeter (surface area in 3D) under the constraint of no cell perimeter and area fluctuations, σ_ℓ = σ_a = 0, which now additionally appear in the energy Eq. (5) (46). Again, this is a geometric criterion, which also explains why the transition point ℓ₀* is independent of k_A for k_A > 0 (Figure 1d,f). Moreover, we can understand why the transition point is smaller for k_A = 0: in this case the energy does not constrain the area fluctuations, and the transition point is given by the minimal perimeter under the weaker constraint of having no perimeter fluctuations. Thus, the transition point will generally be smaller for the k_A = 0 case than for the k_A > 0 case.
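The decomposition in Eq. (5) is an exact weighted mean-variance identity, which is easy to verify numerically. The sketch below (ours, on arbitrary placeholder data) checks that Eq. (2) and Eq. (5) agree to machine precision:

import numpy as np

rng = np.random.default_rng(1)
N = 1000
l0i = rng.uniform(0.5, 1.5, N)          # rest lengths l_0i
li = l0i * rng.uniform(0.9, 1.1, N)     # actual lengths l_i

ell0 = np.sqrt(np.sum(l0i**2) / N)      # mean rest length
w = (l0i / ell0)**2                     # weights, summing to N
ell = ell0 * li / l0i                   # rescaled lengths

ell_bar = np.sum(w * ell) / N                  # weighted average
sigma2 = np.sum(w * (ell - ell_bar)**2) / N    # weighted variance

e_direct = np.sum(w * (ell - ell0)**2)         # Eq. (2)
e_decomp = N * ((ell_bar - ell0)**2 + sigma2)  # Eq. (5)
assert np.isclose(e_direct, e_decomp)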
B. The minimal length scales linearly with fluctuations.
We next study the scaling of the minimal length in the rigid vicinity of the transition. In the rigid regime, the system must compromise between minimizing |ℓ̄ − ℓ₀| and σ_ℓ (and possibly σ_a in cellular models). To understand how, we must account for geometric constraints, which we express in terms of how the minimal length ℓ̄_min = min ℓ̄ depends on the fluctuations: ℓ̄_min = ℓ̄_min(σ_ℓ, σ_a). In the rigid regime the observed average length is always greater than the preferred length, ℓ̄ > ℓ₀, and so the average length instead takes on its locally minimal possible value ℓ̄ = ℓ̄_min(σ_ℓ, σ_a). Therefore, knowing the functional form of ℓ̄_min(σ_ℓ, σ_a) will allow us to predict how the system energy e (and thus also the bulk and shear moduli) depends on the control parameter ℓ₀ (Supplemental Information, section IC–E).
In section IB of the supplement, we show analytically that in the absence of prestresses in the floppy regime, the minimal length ℓ̄_min depends linearly on the standard deviations σ_ℓ and σ_a. This is directly related to the state of self-stress that is created at the onset of geometric incompatibility at ℓ₀ = ℓ₀* ≡ ℓ̄_min(0, 0) (3). To check this prediction, we numerically simulate these models, and indeed observe a linear scaling of the ℓ̄_min(σ_ℓ) functions close to the transition point (Figure 2). In particular, for 2D spring networks and the k_A = 0 cases of the cellular models, we find:

ℓ̄_min(σ_ℓ) = ℓ₀* − a_ℓ σ_ℓ   [6]

with scaling coefficient a_ℓ. We list its value in Table 1 for the different models. Interestingly, we find that the coefficient a_ℓ is largely independent of the random realization of the system, in particular for cellular models with k_A = 0. For 2D spring networks, a_ℓ depends on the coordination number z and approximately scales as a_ℓ ∼ ∆z^(−1/2) (Figure 2a inset). This scaling behavior of a_ℓ can be rationalized using a scaling argument based on the density of states (Supplemental Information, section IF).
For cellular models where area plays a role, Eq. (6) is extended (Figure 2b,c):

ℓ̄_min(σ_ℓ, σ_a) = ℓ₀* − a_ℓ σ_ℓ − a_a σ_a   [7]

Again, the coefficients a_ℓ and a_a are listed in Table 1 for the 2D vertex and 3D Voronoi models. The coefficients a_ℓ differ significantly between the k_A > 0 and k_A = 0 cases of the same model, which makes sense because Eq. (6) and Eq. (7) are linear expansions of the function ℓ̄_min(σ_ℓ, σ_a) at different points (σ_ℓ, σ_a).
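In practice, ℓ₀* and a_ℓ can be extracted from a set of energy-minimized configurations by a linear fit of ℓ̄_min against σ_ℓ, following Eq. (6). A schematic example (ours; the "true" coefficients are invented for illustration, and synthetic noise stands in for simulation scatter):

import numpy as np

# synthetic (sigma_ell, ell_bar_min) pairs near the transition; in practice
# these would come from energy-minimized configurations at different ell_0
sigma_ell = np.linspace(0.0, 0.05, 20)
ell0_star_true, a_ell_true = 1.82, 0.35   # placeholder values
noise = 0.001 * np.random.default_rng(2).normal(size=sigma_ell.size)
ell_min = ell0_star_true - a_ell_true * sigma_ell + noise

# linear fit of Eq. (6): ell_min = ell0_star - a_ell * sigma_ell
slope, intercept = np.polyfit(sigma_ell, ell_min, 1)
print(f"ell0* = {intercept:.3f}, a_ell = {-slope:.3f}")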
C. Prediction of the bulk modulus discontinuity.
Knowing the behavior of the minimal length function ℓ̄_min(σ_ℓ, σ_a) in the rigid phase near the transition point provides us with an explicit expression for the energy in terms of the control parameter ℓ₀ (Supplemental Information, section IC):

e = (N/Z)(ℓ₀* − ℓ₀)²   [8]

with Z = 1 + a_ℓ² + a_a²/k_A, where for models without an area term the a_a²/k_A term is dropped. Because changes in ℓ₀ correspond to changes in system size, we can predict the exact value of the bulk modulus discontinuity, ∆B, at the transition in all models (Figure 1a–c, Supplemental Information, section IE):

∆B = 2d²(ℓ₀*)²/(D²Z)   [9]

This equation is for a model with d-dimensional "lengths" embedded in a D-dimensional space (see Table 1). For the special case of a hexagonal lattice in the 2D vertex model, this result is consistent with Ref. (56). More generally, for disordered networks the geometric coefficients a_ℓ and a_a appear in the denominator (through Z), because they describe non-affinities that occur in response to global isotropic deformations (Supplemental Information, section IE). A comparison of the predicted ∆B to simulation results is shown in Figure 3.
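Assuming the reconstruction of Eqs. (8) and (9) above, the predicted discontinuity follows from ℓ₀*, the dimensions d and D, and the geometric coefficients alone. A sketch (ours, with illustrative coefficient values rather than the measured entries of Table 1):

def delta_B(d, D, ell0_star, a_ell, a_a=None, kA=None):
    """Bulk modulus discontinuity of Eq. (9):
    dB = 2 d^2 (ell0*)^2 / (D^2 Z), with Z = 1 + a_ell^2 [+ a_a^2/kA]."""
    Z = 1.0 + a_ell**2
    if a_a is not None and kA:   # the area term enters only for cellular models
        Z += a_a**2 / kA
    return 2.0 * d**2 * ell0_star**2 / (D**2 * Z)

# illustrative numbers only; the measured coefficients are listed in Table 1
print(delta_B(d=1, D=2, ell0_star=1.82, a_ell=0.35))                  # 2D springs
print(delta_B(d=1, D=2, ell0_star=3.9, a_ell=0.5, a_a=1.0, kA=1.0))   # 2D vertex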
D. Nonlinear elastic behavior under shear. As shown before (8–10, 12, 14–16, 18–21), under-constrained systems can also be rigidified by applying finite shear strain. We now incorporate shear strain γ into our formalism and test our predictions on the 2D spring networks. However, we expect our findings to apply equally to the cell-based models (Supplemental Information, section IC,D). We also numerically verified that our analytical predictions apply to 2D fiber networks without bending rigidity (Supplemental Information, section IIC).
To extend our approach, we take into account that the minimal-length function ℓ̄_min(σ_ℓ) can in principle also depend on the shear strain γ. We thus Taylor expand in γ:

ℓ̄_min(σ_ℓ, γ) = ℓ₀* − a_ℓ σ_ℓ + b γ²   [10]

where the linear term in γ is dropped due to symmetry when expanding about an isotropic state (in practice, for our finite-sized systems we define the γ = 0 point using shear stabilization, Supplemental Information, sections ID and IV). While at the moment we have no formal proof that ℓ̄_min is analytic, and the ultimate justification for Eq. (10) comes from a numerical check (see next paragraph), we hypothesize that for most systems ℓ̄_min will be analytic in γ, up to randomly scattered values of γ where singularities in the form of plastic rearrangements occur. For a fixed value of γ, the interface between the floppy and rigid regimes is again given by ℓ̄_min(σ_ℓ = 0, γ), and the corresponding phase diagram in terms of both control parameters γ and ℓ₀ is illustrated in Figure 4a. Indeed, we numerically find a quadratic scaling for the transition line, ℓ₀ − ℓ₀* = b(γ*)², extending up to shear strains of γ ∼ 0.1 (Figure 4c, see also Supplemental Information, section IIB). We find that for spring networks the coefficient b depends on ∆z approximately as b ∼ ∆z^(−1) (Figure 4c inset), which can be understood from properties of the density of states (Supplemental Information, section IF). To optimize precision, the values of b in this plot have been extracted from the relation G = 4b(ℓ̄ − ℓ₀) (see below, cf. Figure 4f).
Knowing the functional form of ℓ̄_min(σ_ℓ, γ) close to the transition line allows us to explicitly express the energy in the rigid regime in terms of both control parameters (Supplemental Information, section IC):

e = [N/(1 + a_ℓ²)](ℓ₀* − ℓ₀ + bγ²)²   [11]

This allows us to explicitly compute the shear modulus G = (d²e/dγ²)/N. We obtain for both the floppy and rigid regimes:

G = [4b/(1 + a_ℓ²)](ℓ₀* − ℓ₀ + 3bγ²) Θ(ℓ₀* − ℓ₀ + bγ²)   [12]

where Θ is the Heaviside function. We now discuss several consequences of this expression for the shear modulus (Figure 4b).
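A quick self-consistency check of Eqs. (11) and (12): the sketch below (ours, with placeholder coefficients) evaluates the rigid-branch energy and compares the analytic shear modulus against a finite-difference second derivative of the energy:

import numpy as np

def energy(ell0, gamma, N, ell0_star, a_ell, b):
    """Rigid-branch energy of Eq. (11); zero in the floppy regime."""
    delta = ell0_star - ell0 + b * gamma**2
    return N * delta**2 / (1.0 + a_ell**2) if delta > 0 else 0.0

def shear_modulus(ell0, gamma, ell0_star, a_ell, b):
    """Eq. (12): G = 4b (ell0* - ell0 + 3 b gamma^2) / (1 + a_ell^2),
    multiplied by a Heaviside step on the rigidity condition."""
    delta = ell0_star - ell0 + b * gamma**2
    if delta <= 0:
        return 0.0
    return 4.0 * b * (ell0_star - ell0 + 3.0 * b * gamma**2) / (1.0 + a_ell**2)

# compare the analytic G with a finite-difference (d^2 e / d gamma^2) / N
p = dict(ell0_star=1.82, a_ell=0.35, b=0.6)   # placeholder coefficients
N, ell0, g, h = 1, 1.80, 0.2, 1e-5
G_fd = (energy(ell0, g + h, N, **p) - 2.0 * energy(ell0, g, N, **p)
        + energy(ell0, g - h, N, **p)) / h**2
assert np.isclose(G_fd, shear_modulus(ell0, g, **p), rtol=1e-4)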
In particular, for γ = 0, because (ℓ₀* − ℓ₀) = (1 + a_ℓ²)(ℓ̄ − ℓ₀), we obtain the simple relation G = 4b(ℓ̄ − ℓ₀), which explains the collapse in the shear modulus scaling for different k_V in the 3D Voronoi model that some of us reported earlier (46).
We also obtain explicit expressions for both the shear stress σ = (de/dγ)/N and the isotropic stress, i.e., the negative pressure −p (Supplemental Information, sections ID,E). For the latter, we find a negative Poynting effect with coefficient χ ≡ p/γ² = −2db ℓ₀*/[D(1 + a_ℓ²)] at ℓ₀ = ℓ₀*. Moreover, we find the following relation for the shear modulus:

(G − ∆G*)/σ = 3/γ   [13]

Indeed, we observe a collapse of our simulation data for the 2D spring networks in both cases (Figure 5 and inset), where we use that close to the onset of rigidity, γ ≈ γ*.
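The prefactor of 3 in Eq. (13) follows directly from the quadratic form of Eq. (11) and, within this description, holds exactly everywhere on the rigid branch beyond γ*, not only asymptotically. A sketch (ours, with placeholder coefficients; ∆G* is evaluated from Eq. (12) at γ = γ*):

import numpy as np

ell0_star, a_ell, b = 1.82, 0.35, 0.6   # placeholder coefficients
Z = 1.0 + a_ell**2
ell0 = 1.85                             # floppy at gamma = 0, since ell0 > ell0*
gamma_star = np.sqrt((ell0 - ell0_star) / b)   # transition line of Eq. (10)
dG_star = 8.0 * b**2 * gamma_star**2 / Z       # Eq. (12) evaluated at gamma*

for gamma in (1.1 * gamma_star, 1.5 * gamma_star, 3.0 * gamma_star):
    G = 4.0 * b * (ell0_star - ell0 + 3.0 * b * gamma**2) / Z
    sigma = 4.0 * b * gamma * (ell0_star - ell0 + b * gamma**2) / Z  # (de/dgamma)/N
    print((G - dG_star) / sigma * gamma)   # prints 3.0 for every gamma > gamma*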
Discussion
In this article, we propose a unifying perspective on under-constrained materials that are stiffened by geometric incompatibility. This is relevant for a broad class of materials (6), and has more recently been discussed in the context of biopolymer gels (8, 12–14, 21) and biological tissues (31, 37, 42, 46). Just as with a guitar string, we are able to predict many features of the mechanical response of these systems by quantifying geometric incompatibility: we develop a generic geometric rule, the minimal average length function ℓ̄_min, for how generalized springs in a disordered network deviate from their rest length. Using ℓ̄_min, we then derive the macroscopic elastic properties of a very broad class of under-constrained, prestress-rigidified materials from first principles. We numerically verify our findings using models for biopolymer networks (9, 14) and biological tissues (34, 38, 46).
Our work is relevant for experimentalists and may explain the reproducibility of a number of generic mechanical features found in particular for biopolymer networks (12, 17, 21, 25). While we neglect here the fiber bending rigidity that is included in many biopolymer network models (12–15, 21), future work that includes such a term will further refine our theoretical results and the following comparison to experiments (see below). For shear deformations with ℓ₀ sufficiently close to ℓ₀* and close to the onset of rigidity, γ ≈ γ*, we predict a linear scaling of the differential shear modulus G with the shear stress σ, where (G − ∆G*)/σ ∼ 1/γ*, which has been reported before for biopolymer networks (12, 13, 21). However, here we additionally predict from first principles that the value of the prefactor is exactly 3, a factor consistent with previous experimental results (12, 21). Moreover, our work strongly suggests that the relation (G − ∆G*)/σ = 3/γ is a general hallmark of prestress-induced rigidity in under-constrained materials. We thus propose it as a general experimental criterion to test whether an observed strain-stiffening behavior can be understood in terms of geometrically induced rigidity. If applicable to biopolymer gels, this could help to discern whether strain-stiffening of a gel is due to the nonlinear mechanics of single filaments or is dominated by prestresses, a long-standing question in the field (8, 57).
We can also apply these predictions to typical rheometer geometries (Supplemental Information, section IG). We predict that an atypical tensile normal stress σ_zz develops under simple shear, which corresponds to a negative Poynting effect, and that σ_zz scales linearly with shear stress and shear modulus: σ_zz ∼ σ ∼ (G − ∆G*) (Eq. (13) and Supplemental Information, section IG). This is precisely what has been found for many biopolymer gels like collagen, fibrin, or Matrigel (12, 21, 25, 26). However, in contrast to Ref. (21), our work suggests that the scaling factor between σ_zz and (G − ∆G*) should be largely independent of γ*. While these effects can also be explained by nonlinearities (25, 57–59), and have already been discussed in the context of prestress-induced rigidity (13, 19, 21), we show here that they represent a very generic feature of prestress-induced rigidity in under-constrained materials.
Our work also highlights the importance of isotropic deformations when studying prestress-induced rigidity, as demonstrated experimentally in Ref. (17). While previous work (8–10, 12, 14, 15, 18, 20, 21) focused almost entirely on shear deformations, we additionally study the effect of isotropic deformations represented by the control parameter ℓ₀. First, due to the bulk modulus discontinuity, our work predicts zero normal stress under compression and linearly increasing normal stress under expansion, consistent with experimental findings on biopolymer networks (17) (assuming the uniaxial response is dominated by the isotropic part of the stress tensor, see Supplemental Information, section IG). Second, we also correctly predict that the critical shear strain γ* increases upon compression, which corresponds to an increase in ℓ₀ (17) (cf. Figure 4a). While we also predict an increase of the shear modulus G under extension, which was observed as well (17), additional effects arising from the superposition of pure shear and simple shear very likely play an important role in this case. While we consider this outside the scope of this article, it will be straightforward to extend our work in this direction.
In summary, we have developed a new approach to understand how many under-constrained disordered materials rigidify in a manner similar to a guitar string. While it is clear that the one-dimensional string becomes rigid precisely when it is stretched past its rest length, we show that in two- and three-dimensional models, rigidity is governed by a geometric minimal length function ℓ̄_min with generic features (e.g., linear scaling with intrinsic fluctuations, quadratic scaling with shear strain). This insight allows us to make accurate predictions for many of the scaling functions and prefactors that describe the linear response of these materials. In addition, by performing numerical measurements of the geometry in the rigid phase to extract the coefficients of the ℓ̄_min function, we can even predict the precise magnitudes of several macroscopic mechanical properties.
In addition, these predictions help unify or clarify several scaling collapses that have been identified previously in the literature. For 2D spring networks derived from jammed packings, we studied the dependence of our geometric coefficients on the coordination number z, and find that, approximately, a_ℓ ∼ ∆z^(−1/2) and b ∼ ∆z^(−1). Combined with our finding that the value of ℓ₀ right after initialization depends linearly on z, such that (ℓ₀ − ℓ₀*) ∼ ∆z (Figure S5a inset in the Supplemental Information), we obtain that the critical shear strain γ* scales as γ* ∼ ∆z^β with β = 1. Similarly, we find for the associated shear modulus discontinuity ∆G* ∼ ∆z^θ with θ = 1. While both exponents are consistent with earlier findings by Wyart et al. (9), our approach highlights the importance of the initial value of ℓ₀ for the elastic properties under shear. In other work, bond-diluted regular networks yielded different exponents β and θ (16), which is not surprising because the scaling exponents of a_ℓ and b with ∆z likely depend on the way the network is generated. More generally, while we observed that the values of ℓ₀*, a_ℓ, a_a, and b depended somewhat on the protocol of system preparation and energy minimization, they were relatively reproducible among different random realizations of a given protocol (55).
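The exponent bookkeeping in the previous paragraph can be reproduced symbolically. The sketch below (ours) combines a_ℓ ∼ ∆z^(−1/2), b ∼ ∆z^(−1), and (ℓ₀ − ℓ₀*) ∼ ∆z, and treats Z ≈ a_ℓ² as the dominant contribution near isostaticity; the prefactors carry no meaning here:

import sympy as sp

dz = sp.symbols('Delta_z', positive=True)
a_ell = dz**sp.Rational(-1, 2)   # a_ell ~ dz^(-1/2)
b = 1 / dz                       # b ~ dz^(-1)
dl0 = dz                         # (ell0 - ell0*) ~ dz after initialization

gamma_star = sp.sqrt(dl0 / b)            # from ell0 - ell0* = b gamma*^2
Z = a_ell**2                             # dominant term near isostaticity
dG_star = 8 * b**2 * gamma_star**2 / Z   # shear modulus discontinuity

print(sp.simplify(gamma_star))   # Delta_z   -> beta = 1
print(sp.simplify(dG_star))      # 8*Delta_z -> theta = 1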
Moreover, we analytically predict and numerically confirm the existence and precise value of a shear modulus discontinuity ∆G* with respect to shear deformation, whose existence for fiber networks without bending rigidity has been discussed controversially in recent years (14, 15, 18, 20, 24). We also predict a generic scaling of the shear modulus beyond this discontinuity, G − ∆G* ∼ (γ − γ*)^f with f = 1. Smaller values of f that have been reported before for different kinds of spring and fiber networks (14, 15, 18, 20) are likely due to higher-order terms in ℓ̄_min. Given the very generic nature of our approach, we expect to find a value of f = 1 in these systems as well, if probed sufficiently close to ℓ₀ = ℓ₀*. One major obstacle in determining the elastic properties of disordered materials is the appearance of non-affinities, which can lead to a breakdown of approaches like effective medium theory close to the transition (10). In our case, effects of non-affinities are by construction fully included in the geometric coefficients a_ℓ, a_a, and b. However, while measures for non-affinity have been discussed before (9, 15, 20, 28, 60), these are usually quite distinct from our coefficients a_ℓ, a_a, and b. For example, for spring networks, such earlier definitions typically include spring rotations, while our coefficients represent changes in spring length only. Hence, while earlier definitions reflect much of the actual motion of the microscopic elements, our coefficients only retain the part directly relevant for the system energy and thus the mechanics. In other words, the coefficients a_ℓ, a_a, and b (and ℓ₀*) can be regarded as a minimal set of parameters required to characterize the elastic system properties close to the transition.
There are a number of possible future extensions of this work. First, we have focused here on transitions created by a minimal length, where the system is floppy for large ℓ₀ and rigid for small ℓ₀. However, there is in principle also the possibility of a transition created by, e.g., a maximal length, which is for example the case in classical sphere jamming. Although we have occasionally seen something like this in our spring networks close to isostaticity, we generally expect this to be less typical in under-constrained systems due to buckling.
Second, while we studied here the vicinity of one local minimum of ℓ̄_min depending, e.g., on γ, it would be interesting to study the behavior of the system beyond that, by including higher-order terms in ℓ̄_min, and by also explicitly taking plastic events into account (61). In the case of biological tissues, plastic events typically correspond to so-called T1 transitions (62), which in our approach would correspond to changing to a different ℓ̄_min "branch".
Third, it will be important to study what determines the exact values of the geometric coefficients a_ℓ, a_a, and b, how they depend on the network statistics, and why they are relatively reproducible. For the cellular models with an area term, preliminary results suggest that the ratio of the two "a" coefficients can be estimated as a_a/a_ℓ ≈ d ℓ₀*/D, because the self-stress that appears at the onset of rigidity seems to be dominated by a force balance between cell perimeter tension and the pressure within each cell.
Fourth, because we separated geometry from energetics, it is in principle possible to generalize our work to other interaction potentials, e.g., the correct expression for semiflexible filaments (57, 59), and to include the effect of active stresses (54, 63–65). Note that our work directly generalizes to any analytic interaction potential with a local minimum at a finite length. Although in this more general case Eq. (5) would include higher-order cumulants of ℓ_i, these higher-order terms will be irrelevant in the floppy regime and we expect them to be negligible in the rigid vicinity of the transition, where we make most of our predictions.
Fifth, this work may also provide foundations to systematically connect macroscopic mechanical material properties to the underlying local geometric structure. For example, for biopolymer networks, properties of the local geometric structure can be extracted using light scattering, scanning electron microscopy, or confocal reflectance microscopy (21, 66, 67). In particular, our simulations indicate that in models without an area term the ℓ̄_min function does not change much when increasing the system size by nearly an order of magnitude (Supplemental Information, section IID), which suggests that local geometry may indeed be sufficient to characterize the large-scale mechanical properties of such systems. Remaining future challenges here include the development of an easy way to compute our geometric coefficients from simple properties characterizing the local geometric structure without the need to simulate, and finding ways to detect possible residual stresses that may have been built into the gel during polymerization.
Finally, our approach can likely be extended to also include isostatic and over-constrained materials. For example, it is generally assumed that the mechanics of biopolymer networks is dominated by a stretching rigidity of fibers that form a sub-isostatic network, but that an additional fiber bending rigidity turns the network into an over-constrained system (12–15, 21, 22). The predictions we make here focus on the stretching-dominated limit where fiber bending rigidity can be neglected, which is attained for a weak fiber bending modulus and/or in the more rigid parts of the phase space. A generalization of our formalism towards over-constrained systems will allow us to extend our predictions beyond this regime and thus refine our comparison to experimental data.
"year": 2018,
"sha1": "8b3a244d8d753811e543fd6dbf6f2c21c459a260",
"oa_license": null,
"oa_url": "https://www.pnas.org/content/pnas/116/14/6560.full.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "f1383577e71638a4e3a6d316c60c3e0926bae81e",
"s2fieldsofstudy": [
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Biology",
"Medicine"
]
} |
Specific functional interactions of nucleotides at key −3 and +4 positions flanking the initiation codon with components of the mammalian 48S translation initiation complex
Eukaryotic initiation factor (eIF) 1 maintains the fidelity of initiation codon selection and enables mammalian 43S preinitiation complexes to discriminate against AUG codons with a context that deviates from the optimum sequence GCC(A/G)CCAUGG, in which the purines at the −3 and +4 positions are most important. We hypothesize that eIF1 acts by antagonizing conformational changes that occur in ribosomal complexes upon codon–anticodon base-pairing during 48S initiation complex formation, and that the role of the −3 and +4 context nucleotides is to stabilize these changes by interacting with components of this complex. Here we report that U and G at +4 both UV-cross-linked to ribosomal protein (rp) S15 in 48S complexes. However, whereas U cross-linked strongly to C1696 and less well to AA1818–1819 in helix 44 of 18S rRNA, G cross-linked exclusively to AA1818–1819. U at −3 cross-linked to rpS5 and eIF2α, whereas G cross-linked only to eIF2α. Results of UV cross-linking experiments and of assays of 48S complex formation done using α-subunit-deficient eIF2 indicate that eIF2α's interaction with the −3 purine is responsible for recognition of the −3 context position by 43S complexes and suggest that the +4 purine/AA1818–1819 interaction might be responsible for recognizing the +4 position.
Eukaryotic ribosomes locate the initiation codon on most mRNAs by a scanning mechanism. A 43S complex comprising a ribosomal 40S subunit, eukaryotic initiation factors (eIFs) 1, 1A, and 3, and an eIF2–GTP/Met-tRNA_i^Met complex binds to the 5′-cap-proximal region of mRNA with the help of eIF4A, eIF4B, and eIF4F and scans downstream to the initiation codon to form a 48S complex. Initiation codon recognition and base-pairing with the Met-tRNA_i^Met anticodon triggers eIF5-mediated hydrolysis of eIF2-bound GTP and, most importantly, subsequent release of phosphate (Algire et al. 2005). The prevailing model is that this leads to release of eIF2–GDP from the 40S subunit, retaining Met-tRNA_i^Met in the ribosomal P site, after which eIF5B mediates displacement of other factors and joining of the 60S ribosomal subunit to form an 80S ribosome (Unbehaun et al. 2004).
The initiation codon is recognized by base-pairing with the anticodon of Met-tRNA_i^Met (Cigan et al. 1988) and is usually the first AUG triplet from the mRNA's 5′ end. Scanning 40S subunits can bypass the first AUG triplet if it is <10 nucleotides (nt) from the 5′ end of mRNA or if its context deviates from the optimum sequence GCC(A/G)CCAUGG, particularly at the −3 and +4 positions (in bold) (Kozak 1986, 1991). These two context nucleotides are conserved features of mammalian mRNAs and together can enhance translation 20-fold; in yeast, the nucleotide context is less important for initiation codon recognition, and its only common feature is a purine at the −3 position (Kozak 1986; Cavener and Ray 1991). eIF1 enhances the processivity of scanning and plays the key role in ensuring the fidelity of initiation codon selection by enabling 43S complexes to discriminate against 48S complex formation on non-AUG triplets, on AUG triplets located near the 5′ end of mRNA, and on AUG triplets with suboptimal context (Yoon and Donahue 1992; Pestova et al. 1998; Pestova and Kolupaeva 2002). It also ensures the fidelity of initiation codon selection at the later stage of ribosomal subunit joining by inhibiting premature GTP hydrolysis by eIF2 and by coupling initiation codon recognition with activation of eIF2's GTPase activity (Unbehaun et al. 2004; Valasek et al. 2004; Maag et al. 2005). eIF1 binds to the interface surface of the 40S subunit between the platform and the initiator tRNA, facing the codon–anticodon base pairs but not contacting them directly (Lomakin et al. 2003). This suggests that it promotes scanning and performs its monitoring function indirectly, by influencing the conformation of the platform and the positions of Met-tRNA_i^Met and mRNA in ribosomal complexes. To explain eIF1's mechanism of action, we propose that binding of eIF1 to 43S complexes induces a scanning-competent conformation that is favorable for rejection of codon–anticodon mismatches and does not permit activation of hydrolysis of eIF2-bound GTP by eIF5. However, on recognizing the initiation codon, a 43S complex would have to undergo conformational changes upon base-pairing to form a 48S complex, which would be antagonized by eIF1. In our model, the conformation of an arrested 48S complex would be stabilized by codon–anticodon base-pairing and by elements within the mRNA such as 5′-flanking sequences and context nucleotides. If a 48S complex assembled without eIF1 is insufficiently stable, due to the presence of a noncognate initiation codon, poor context, or the absence of 5′-flanking sequences, it dissociates on delayed addition of eIF1 (Pestova and Kolupaeva 2002). An implication of this model is that the role of context nucleotides (particularly of the −3 and +4 positions) is to stabilize an arrested ribosomal complex by interacting specifically with its constituents. Hypotheses that context nucleotides interact with 18S rRNA (e.g., Kozak 1986; Cavener and Ray 1991) in a functionally analogous manner to the Shine–Dalgarno interaction of 16S rRNA and prokaryotic mRNAs have not been substantiated.
In this study we used mRNAs with either 4-thiouridine ("thioU") or 6-thioguanosine ("thioG") at the −3 and +4 positions (hereafter, [−3] and [+4]) for "zero-length" UV cross-linking of ribosomal proteins, 18S rRNA, and initiation factors in 48S complexes, to identify nucleotide-specific interactions of purines and pyrimidines at [−3] and [+4] that could account for the nucleotide context rule. U and G at [+4] both cross-linked to ribosomal protein (rp) S15, but whereas U[+4] specifically cross-linked mostly to C1696 and to some extent to AA1818–1819 in helix 44 of 18S rRNA, G[+4] cross-linked exclusively to AA1818–1819. The base specificity of the interaction of the [+4] purine with AA1818–1819 is therefore likely responsible for recognition of this context nucleotide by 43S complexes, so that in addition to monitoring the fidelity of elongator tRNA selection (Ogle et al. 2001), AA1818–1819 might also play a role in initiation codon selection. U[−3] specifically and equally efficiently cross-linked to rpS5 and to eIF2α, whereas G[−3] cross-linked exclusively to eIF2α. The functional involvement of eIF2α in recognizing the [−3] nucleotide was confirmed by assaying 48S complex formation in the presence of α-subunit-deficient eIF2. In the absence of eIF1, the effect of the lack of eIF2α on the efficiency and specificity of 48S complex formation on AUG triplets with different nucleotide contexts was minor. However, in the presence of eIF1, 43S complexes assembled without eIF2α could no longer discriminate the nature of the −3 nucleotide, and 48S complex formation was much less efficient, irrespective of the nucleotide at [−3]. This suggests that interaction of the [−3] nucleotide with eIF2α is generally important for 48S complex formation in the presence of eIF1, but that eIF2α interacts more strongly with a purine than with a pyrimidine residue, increasing the resistance of 48S complexes to dissociation by eIF1, and that this accounts for the −3 nucleotide context rule. The fact that, without sucrose density gradient centrifugation, 85%–90% of eIF2 remained associated with 48S complexes formed on AUG triplets with G[−3] after eIF5-induced hydrolysis of eIF2-bound GTP could account for the resistance of 48S complexes to eIF1's dissociating influence after GTP hydrolysis and before the actual ribosomal subunit joining.
48S complex formation on (CAA)nAUG(CAA)m mRNAs containing thioU and thioG
We hypothesized that the role of the [−3] and [+4] context nucleotides could be to stabilize conformational changes in 48S complexes that occur upon base-pairing, by interacting with elements of these complexes. We investigated their interactions with components of the 48S complex by UV cross-linking using mRNAs that had a single uridine or guanosine (in addition to the AUG codon) at these positions (Fig. 1A). In two mRNAs the context of the AUG triplets was good (purines at [−3] and [+4]) and in two it was suboptimal (a pyrimidine at [−3] or [+4]). Flanking the AUG codons with multiple CAA triplets to avoid additional U or G nucleotides also minimized secondary structure and increased initiation efficiency. mRNAs were transcribed in vitro in the presence of thioU or thioG (Fig. 1B), which can be specifically cross-linked to proteins and nucleic acids by low-energy (360-nm) irradiation, yielding "zero-length" cross-links that represent direct contacts with 48S complex constituents. Differences in the specificity/intensity of cross-links between mRNAs containing either thioU or thioG could be indicative of the nucleotide specificity of interactions. ThioG is incorporated less efficiently than thioU into transcripts and may exist in a thiol–thione equilibrium that could lead to its misincorporation (Sergiev et al. 1997; Favre et al. 1998). Toe-printing analysis done in the absence of eIF1 (to avoid potential differences in the efficiency of 48S complex formation due to context differences of initiation codons) showed that 48S complexes assembled equally efficiently on mRNAs containing thioG or thioU, which were therefore functional (Fig. 1C).
Contacts of the nucleotide at position [+4] of mRNA with components of the 48S complex
mRNA transcripts used for UV cross-linking contained thioU or thioG and were labeled with [32P]CTP. 48S complexes assembled from 40S subunits, eIF2, eIF3, eIF4A, eIF4B, eIF4F, eIF1, eIF1A, and Met-tRNA_i^Met were purified from unincorporated components by sucrose density gradient centrifugation and cross-linked by irradiation at 360 nm. U[+2] and G[+3] of the initiation codon base-pair with the Met-tRNA_i^Met anticodon and thus cannot cross-link to other components of 48S complexes, so radiolabeling of factors, ribosomal proteins, and 18S rRNA can be attributed exclusively to interactions with the [−3] and [+4] nucleotides. In control experiments, thioU[+2] or thioG[+3] did not cross-link to components of 48S complexes assembled on mRNA with U and G only in the initiation codon (V.G. Kolupaeva, A.V. Pisarev, C.U.T. Hellen, and T.V. Pestova, in prep.).
Because eIF1 is a functional analog of prokaryotic IF3 (which causes rearrangement of mRNA on 30S subunits) (La Teana et al. 1995; Shapkina et al. 2000), mRNA cross-linking was assayed in 48S complexes assembled with and without eIF1. RNase-treated samples were analyzed by SDS-PAGE and two-dimensional (2D) gel electrophoresis to identify cross-linked proteins.
A single protein of the same mobility was cross-linked in 48S complexes assembled with or without eIF1 on mRNAs with thioU or thioG at [+4] (Fig. 2A,B, lanes 1,3). The specificity of UV cross-linking of thioU- and thioG-containing mRNAs did not differ, but consistent with other studies, thioU cross-linked more efficiently than thioG (Nikiforov and Connolly 1992; Fig. 2A,B, lanes 1,3), likely reflecting intrinsic differences in the cross-linking efficiencies of these thionucleotides. The low molecular weight of the cross-linked protein indicated that it was a ribosomal protein. Covalently bound mRNA nucleotides cause cross-linked ribosomal proteins to shift "northwest" in 2D gels. Taking this into consideration, cross-linking to [+4] was attributed to rpS15 (Fig. 2C,D). Its identity was confirmed by mass-spectrometry sequencing of the EAPPMEKPEVVK and GVDLDQLLDMSYEQLMQLYSAR peptides. rpS15 is a homolog of prokaryotic rpS19, whose position in the crystal structure of the Thermus thermophilus 30S subunit is shown in Figure 4 (see below).
To identify the approximate region of cross-linking of the [−3] and [+4] nucleotides to 18S rRNA, the rRNA was extracted after irradiation of 48S complexes, hybridized with DNA oligonucleotides complementary to different regions, digested with RNase H, and separated by electrophoresis. Attribution of individual 32P-labeled UV-cross-linked fragments of 18S rRNA took into account their reduced mobility due to covalently linked mRNA. The exact cross-linked nucleotide was identified by primer extension.
As with cross-linking to ribosomal proteins, cross-linking of 18S rRNA to the [+4] nucleotide was identical in 48S complexes assembled with or without eIF1, and thioG cross-linked less efficiently than thioU (Fig. 3A,B). In contrast to cross-linking of ribosomal proteins, cross-linking of thioU and thioG at [+4] to 18S rRNA differed significantly. Both nucleotides cross-linked to nucleotides 1652–1863, but further analysis showed that thioG[+4] cross-linked exclusively to nucleotides 1815–1863 (Fig. 3A,B, lanes 3), whereas thioU[+4] cross-linked to this region weakly but cross-linked strongly to nucleotides 1652–1796 (Fig. 3A,B, lanes 6). The cross-linking sites were then determined precisely: thioU[+4] cross-linked mostly to C1696 and to some extent to AA1818–1819, whereas thioG[+4] cross-linked only to AA1818–1819 (Fig. 3C–E). In control experiments (Fig. 3C–E, lanes 2), primer extension was done on 48S complexes assembled on mRNAs containing thioU or thioG at [−3]. The positions of C1696 and AA1818–1819 in h44 of 18S rRNA are shown on the secondary structure of 18S rRNA (Fig. 3F,G) and are mapped onto the corresponding nucleotides of 16S rRNA in the crystal structure of the T. thermophilus 30S subunit (Fig. 4). In conclusion, in 48S complexes, U[+4] in mRNA specifically cross-linked to C1696 and to some extent to AA1818–1819, whereas G[+4] cross-linked exclusively to AA1818–1819. U and G both also cross-linked to rpS15.
Unlike the [+4] nucleotide, thioU or thioG at [−3] did not cross-link to 18S rRNA, but thioU at [−3] cross-linked specifically and with equal efficiency to two proteins, whereas thioG cross-linked only to the larger one (Fig. 2A,B, lanes 2,4). As for the [+4] nucleotide, cross-linking was identical in 48S complexes formed with or without eIF1 and was less efficient with thioG than with thioU. The size of the smaller protein (∼21 kDa) indicated that it was a ribosomal protein. Taking into consideration the "northwest" shift of cross-linked proteins in 2D gels, we attributed cross-linking of U[−3] to rpS5 (Fig. 2E,F). Its identity was confirmed by mass-spectrometry sequencing of the QAVDVFPLR and TIAEC*LADELINAAK peptides. It is a homolog of prokaryotic rpS7, shown on the crystal structure of the T. thermophilus 30S subunit (Fig. 4). The ∼38-kDa molecular weight of the larger protein indicated that it could be eIF2α or a subunit of eIF3. To identify it, we assembled 48S complexes using eIF2 with a truncated α-subunit ("ΔeIF2α") from HeLa cells (Fig. 2G) and then exploited the observation that eIF5-induced hydrolysis of eIF2-bound GTP in 48S complexes releases eIF2 but not eIF3 (Unbehaun et al. 2004). The N-terminal sequence of ΔeIF2α (PGLS, identical to that of intact eIF2α) and its mobility in SDS-PAGE indicated that it was C-terminally truncated by 1.5–2 kDa; such cleavage is mediated by caspases (Satoh et al. 1999). eIF2 containing ΔeIF2α was ∼30% as active in 48S complex formation as intact eIF2 (data not shown). The lower activity may be due to the substoichiometric amount of ΔeIF2α in this eIF2 compared with eIF2 containing intact eIF2α (Fig. 2G). The ∼38-kDa protein cross-linked to thioU and thioG at [−3] was identified as eIF2α by cross-linking 48S complexes assembled with eIF2 containing ΔeIF2α on mRNA with thioU[−3] (which yielded a cross-linked protein with altered mobility) and by cross-linking 48S complexes after incubation with eIF5 (which led to specific loss of this band) (Fig. 2H, lanes 2,3).
Cross-linking of eIF2α to [−3] was specific: no cross-linking was observed to [−4] and only very little to [−2] (V.G. Kolupaeva, A.V. Pisarev, C.U.T. Hellen, and T.V. Pestova, in prep.). In conclusion, in 48S complexes, thioU[−3] in mRNA cross-links specifically to rpS5 and eIF2α, whereas thioG[−3] cross-links exclusively to eIF2α.
48S complex formation on CAA-GUS mRNAs with upstream AUGs in different nucleotide contexts in the presence of α-subunit- and β-subunit-deficient mammalian eIF2
UV cross-linking data showed that G residues at [+4] and [−3] in mRNA specifically bound AA1818–1819 of 18S rRNA and eIF2α, respectively. To prove eIF2α's functional role in recognizing initiation codon context, we compared 48S complex assembly using complete eIF2, eIF2 lacking either eIF2α (eIF2βγ) or eIF2β (eIF2αγ), or eIF2βγ supplemented with recombinant eIF2α on (CAA)n-AUGbad/bad-GUS, (CAA)n-AUGbad/good-GUS, and (CAA)n-AUGgood/bad-GUS mRNAs. These mRNAs contain an upstream AUG triplet in the indicated nucleotide context, followed by the AUG codon of the GUS open reading frame in good context (Anthony et al. 1990; Materials and Methods). eIF1 is the principal factor that allows recognition of initiation codon context by scanning 43S complexes, so 48S complexes were assembled with and without eIF1. 48S complex formation did not depend on when eIF1 was added, and in all experiments described in this section identical data were obtained if eIF1 was added simultaneously with the other translation components or if 48S complexes were first assembled without eIF1 and were then incubated with eIF1 for a further 15 min. Consistent with our previous report (Pestova and Kolupaeva 2002), ∼90% of 43S complexes assembled with eIF1 and complete eIF2 scanned to the GUS initiation codon of (CAA)n-AUGbad/bad-GUS mRNA, whereas in the absence of eIF1, 48S complexes assembled mostly on the first AUGbad/bad triplet despite its poor context (Fig. 5C, lanes 2,3). 48S complexes formed with eIF2αγ just as with complete eIF2 (Fig. 5C, lanes 4,5). No differences in 48S complex formation were detected on (CAA)n-AUGbad/good-GUS and (CAA)n-AUGgood/bad-GUS mRNAs in the presence of eIF2αγ or complete eIF2 (data not shown). Consistently, there was no difference in cross-linking of eIF2α to thioU or thioG at [−3] in 48S complexes formed with complete eIF2 or eIF2αγ (Fig. 5D, lanes 1,2; data not shown).
The role of eIF2α in 48S complex formation was investigated using (CAA)n-AUGbad/good-GUS and (CAA)n-AUGgood/bad-GUS mRNAs. This combination of mRNAs, containing upstream AUG triplets with only one purine at either [−3] or [+4], was optimal for these studies because 48S complex formation on the first AUG triplet of (CAA)n-AUGbad/bad-GUS mRNA in the presence of eIF1 is inefficient even with complete eIF2, and reliably quantitating possible reductions in 48S complex formation with eIF2βγ would therefore be difficult. On the other hand, 48S complex formation on the first AUG of (CAA)n-AUGgood/good-GUS mRNA would be too efficient to permit detection of potential leaky scanning.
In addition, (CAA)n-AUGbad/good-GUS and (CAA)n-AUGgood/bad-GUS mRNAs allow the relative effects of purines at different positions on the efficiency of 48S complex formation to be compared. With complete eIF2 and without eIF1, 48S complexes formed almost exclusively on the first AUG triplet on both mRNAs (Fig. 5F,G, lanes 5). In the presence of eIF1, complex formation on the first AUG triplet with a [−3] purine was more efficient, constituting ∼80% of 48S complexes, whereas 48S complex formation on the first AUG with a [+4] purine constituted ∼50% of the total (Fig. 5F,G, lanes 6). The [−3] purine was therefore relatively more important for these mRNAs than that at [+4]. In the absence of eIF1, total 48S complex formation on the two AUGs for both mRNAs was only 10% lower with eIF2βγ than with complete eIF2. Initiation on both mRNAs was slightly leakier: 10%–15% of 48S complexes formed on the GUS AUG with eIF2βγ, whereas only ∼4% of 48S complexes formed there with complete eIF2 (Fig. 5F,G, lanes 1,5).
Although it had only a minor effect in the absence of eIF1, the lack of eIF2α strongly affected 48S complex formation in eIF1's presence. Total 48S complex formation with eIF2βγ and eIF1 on the two AUG triplets of both mRNAs was reduced threefold (Fig. 5F,G, cf. lanes 2,6). The relative reduction in 48S complex formation on the first AUG triplet was greater with (CAA)n-AUGgood/bad-GUS mRNA, in which case the ratio of 48S complex formation on the first and second AUG triplets fell to 1:1 from 4:1 in the presence of complete eIF2 (Fig. 5G, lanes 2,6) and became similar to the ratio of 48S complex formation on the first (with an unfavorable [−3] pyrimidine) and second AUGs of (CAA)n-AUGbad/good-GUS mRNA (Fig. 5F, lanes 2,6). This result suggests that in the absence of eIF2α, 43S complexes cannot sense the nature of the [−3] nucleotide. The similar relative efficiencies of 48S complex formation on the two AUG triplets on both mRNAs, despite the first AUG triplet of (CAA)n-AUGbad/good-GUS mRNA having a favorable [+4] purine, suggest that the "+4 nucleotide rule" might be secondary to the "−3 nucleotide rule" and may not function efficiently in the absence of the eIF2α/[−3] nucleotide interaction. The fact that 48S complex formation with eIF2α-deficient eIF2 was strongly reduced on both AUGbad/good and AUGgood/bad suggests that interaction of eIF2α with the [−3] nucleotide is, irrespective of its nature, generally important for resistance of 48S complexes to dissociation by eIF1. Addition of recombinant eIF2α to reaction mixtures containing eIF2βγ restored the efficiency of 48S complex formation on both AUG codons to the level observed with complete eIF2 (Fig. 5F,G, lanes 2,4,6). Consistently, recombinant eIF2α added to reaction mixtures with eIF2βγ was cross-linked to thioU or thioG at [−3] in 48S complexes (Fig. 5D [lanes 3,4], E). 43S complexes became slightly less leaky, and fewer 48S complexes assembled on the GUS AUG codon with eIF2βγ and recombinant eIF2α than with native complete eIF2 (Fig. 5F,G, lanes 4,6). The eIF2α N-terminal domain may interact with the [−3] nucleotide, in which case the N-terminal tag may influence this interaction, but we were reluctant to tag the C terminus of eIF2α because its C-terminal domain interacts with eIF2γ. In conclusion, these data suggest that interaction of eIF2α with the [−3] nucleotide is generally important for 48S complex formation in the presence of eIF1, and that eIF2α likely interacts more strongly with a purine at [−3], protecting 48S complexes more from dissociation by eIF1.
Interaction of eIF2α with the [−3] nucleotide of mRNA in 48S complexes after eIF5-induced hydrolysis of eIF2-bound GTP
Our data suggest that eIF2α's interaction with the [−3] nucleotide stabilizes 48S complexes against dissociation by eIF1. However, it is generally accepted that eIF5-induced hydrolysis of eIF2-bound GTP leads eIF2 to dissociate from 48S complexes. If eIF1 can dissociate aberrant initiation complexes after GTP hydrolysis, then one would expect that if ribosomal subunit joining does not occur immediately after this event, even 48S complexes assembled on AUG triplets with a [−3] purine would be dissociated at this stage. However, 48S complexes formed with eIF1 on the good-context AUG codon of the GUS ORF of (CAA)n-AUGbad/bad-GUS mRNA remained intact after 15 min of incubation with eIF5 (Fig. 6A, lanes 1,2). This result was not due to a hypothetical inability of eIF1 to discriminate the context of the initiation codon after hydrolysis of eIF2-bound GTP, because ∼95% of 48S complexes formed on the first bad-context AUG of the same mRNA without eIF1 could still be dissociated by eIF1 after incubation with eIF5 (Fig. 6B, lanes 2,4). The upstream AUG triplet had poor context at [−3] and [+4], whereas the downstream AUG triplet had good context at both positions. The [+4] purine could conceivably be sufficient to stabilize 48S complexes after hydrolysis of eIF2-bound GTP. However, retaining a purine only at [−3] of the first AUG codon in (CAA)n-AUGgood/bad-GUS mRNA yielded 48S complexes that were as resistant to eIF1-mediated dissociation after GTP hydrolysis as eIF5-untreated 48S complexes (data not shown). To account for this result, we tested whether the interaction of the [−3] purine switches from eIF2α to another component of the 48S complex after GTP hydrolysis, which could render 48S complexes resistant to eIF1-mediated dissociation in the absence of eIF2. Just as for eIF5-untreated 48S complexes, no specific interaction was detected between thioG[−3] and 18S rRNA or any ribosomal protein after eIF5-induced hydrolysis of eIF2-bound GTP (Fig. 6D; data not shown).
Whereas the affinities of elongation factor EF-Tu for aminoacylated tRNA in its GTP- and GDP-bound states differ by a factor of 10^4, the affinity of yeast eIF2–GDP for Met-tRNA_i^Met is only ∼20-fold lower than that of eIF2–GTP: this small difference might lead to incomplete dissociation of eIF2 from 40S subunits upon GTP hydrolysis (Kapp and Lorsch 2004). Although eIF2 dissociated entirely from 40S subunits in our experiments (Fig. 2H, lane 3; Unbehaun et al. 2004), those experiments all included sucrose density gradient centrifugation of 48S complexes after eIF5-induced hydrolysis of GTP prior to analysis of eIF2's association with 40S subunits. To determine whether the stringency of sucrose density gradient centrifugation dissociated eIF2 from 40S subunits in these experiments, we assayed its influence on cross-linking of thioU and thioG at [−3] in mRNA in eIF5-treated 48S complexes. As expected, no cross-linking to eIF2α was observed with either mRNA if eIF5-treated 48S complexes were subjected to sucrose density gradient centrifugation before irradiation (Fig. 6C,D, lanes 1,2). However, if this step was omitted, after GTP hydrolysis 30%–35% and 85%–90% of eIF2 still cross-linked to thioU and to thioG at [−3], respectively (Fig. 6C,D, lane 3). This indicates that hydrolysis of eIF2-bound GTP does not cause complete dissociation of eIF2 from 48S complexes. Moreover, the extent of eIF2 release depends on the nature of the [−3] nucleotide and is much lower when it is a purine. Addition of eIF5B to a reaction mixture with eIF5 almost completely abrogated cross-linking of eIF2α to thioU[−3] and reduced cross-linking to thioG[−3] by 70% (Fig. 6D, lane 4; data not shown). These results suggest that eIF5B promotes dissociation of eIF2 from the 40S subunit after hydrolysis of eIF2-bound GTP, which is nevertheless not complete if the mRNA has a [−3] purine. The absence of UV cross-linking of eIF2α to either thioU or thioG at [−3] after treatment of 48S complexes with eIF5, eIF5B, and 60S subunits (Fig. 6D, lane 5; data not shown) indicated complete conversion of 48S complexes into 80S ribosomes and confirmed that incubation with eIF5 alone or together with eIF5B under identical conditions (Fig. 6D, lanes 3,4) led to complete hydrolysis of eIF2-bound GTP. In case some eIF1 was lost from 48S complexes during their initial purification on sucrose density gradients, we compared the effect of adding eIF5 alone or together with eIF1 to 48S complexes on UV cross-linking of eIF2α to thioG[−3]: no difference was detected, which means that eIF2 release was not affected (data not shown).
eIF5-induced hydrolysis of eIF2-bound GTP, therefore, does not completely dissociate eIF2 from 48S complexes, and the fact that the nature of the [−3] nucleotide influences eIF2 release suggests that mRNA stabilizes binding of eIF2 to 48S complexes after GTP hydrolysis through interaction of eIF2α with the [−3] nucleotide. The fact that only a small fraction of eIF2 was released from 48S complexes assembled on AUG codons with a [−3] purine upon hydrolysis of eIF2-bound GTP could account for the resistance of these complexes to dissociation by eIF1 even after treatment with eIF5. Interaction of eIF2α with the [−3] purine likely also contributes to the stability of 48S complexes after GTP hydrolysis even in the absence of eIF1, because in this case treatment with eIF5 of 48S complexes assembled on a bad-context AUG codon led to 65% dissociation (Fig. 6B, lane 3). Association of eIF2 with 48S complexes after hydrolysis of eIF2-bound GTP might also prevent Met-tRNA_i^Met from dissociating from 40S subunits before ribosomal subunit joining.
(Legend fragment for Fig. 6: UV cross-linking of mRNA with components of 48S complexes before and after incubation with eIF5, eIF5B, and 60S subunits, as indicated; in lanes 2, 48S complexes incubated with eIF5 were subjected to sucrose density gradient centrifugation before UV cross-linking; cross-linked proteins were assayed by SDS-PAGE and autoradiography; eIF2α and rpS5 are indicated on the right.)
Discussion
eIF1's position on the 40S subunit between the platform and initiator tRNA suggests that it acts indirectly to ensure the fidelity of initiation codon selection and, specifically, to enable 43S complexes to discriminate against AUG triplets in suboptimal context (Pestova and Kolupaeva 2002; Lomakin et al. 2003). The finding that the C-terminal domain of prokaryotic IF3 (which is not homologous to eIF1) can bind the same region of the 40S subunit and perform many of eIF1's functions in initiation codon selection, including enabling 43S complexes to recognize initiation codon context, also favors an indirect mode of action for eIF1 (Lomakin et al. 2006). Our hypothesis that eIF1 acts by antagonizing conformational changes in the 48S complex that occur as a result of initiation codon recognition and base-pairing with the anticodon suggests that the role of the key -3 and +4 context nucleotides is to stabilize such changes by interacting with components of the 48S complex. Here, we used UV cross-linking to characterize and compare the specificity of interactions of thioU and thioG at these positions with constituents of this complex. In a separate study, we used mRNAs containing single thioU residues at positions -26 to +11 to map the mRNA path on the 40S subunit in 48S complexes (V.G. Kolupaeva, A.V. Pisarev, C.U.T. Hellen, and T.V. Pestova, in prep.). Its similarity to the mRNA path on the 70S ribosome as determined by crystallography (Yusupova et al. 2001) justifies using the structure of the mRNA/30S subunit complex to model our cross-linking data.
UV cross-linking in 48S complexes formed with and without eIF1
IF3, a functional analog of eIF1, alters the position of mRNA on 30S subunits (La Teana et al. 1995; Shapkina et al. 2000), so we assayed interactions of the -3 and +4 nucleotides in 48S complexes assembled with and without eIF1. The interactions of thioU or thioG at both positions were unaffected by eIF1. Even if eIF1 influences the positions of mRNA or Met-tRNA_i^Met in scanning ribosomal complexes, the final conformation of 48S complexes with established codon-anticodon base-pairing appears not to depend on eIF1's involvement in their assembly. We detected eIF1 in 48S complexes after eIF5-induced hydrolysis of eIF2-bound GTP (Unbehaun et al. 2004), but the observation that eIF1 was released from minimal yeast initiation complexes following codon-anticodon base-pairing suggests that in mammalian 48S complexes, eIF1 might be displaced from its original location on the 40S subunit but be retained in these complexes by interaction with eIF3. If this is so, the apparently identical position of mRNA in 48S complexes assembled with and without eIF1 is not surprising.
UV cross-linking to the [+4] position
Both thioU[+4] and thioG[+4] cross-linked to rpS15. However, whereas thioU cross-linked weakly to AA1818-1819 and strongly to C1696 in h44 of 18S rRNA, thioG cross-linked exclusively to AA1818-1819. Specific mRNA cross-linking to components of the 48S complex has not previously been analyzed, so we compared our data with mRNA cross-linking in eukaryotic 80S complexes phased by cognate tRNA and in prokaryotic 70S complexes. Cross-linking of thioU[+4] to rpS15 in 48S complexes was consistent with the same interaction in 80S complexes (Bulygin et al. 2005). rpS19, the prokaryotic homolog of rpS15, is located in the head of the 30S subunit (Fig. 4; Wimberly et al. 2000). Its C-terminal tail points toward the interface side but does not reach the A-site codon, so cross-linking of rpS15 is likely due to N- or C-terminal extensions relative to prokaryotic rpS19.
Cross-linking of mRNA to AA1818-1819 has been detected with midrange nucleotide derivatives but not with "zero-length" cross-linkers: No cross-linking of AA1818-1819 to thioU[+4] was observed in phased or unphased 80S complexes (Demeshkina et al. 2000; Bulygin et al. 2005). The equivalent prokaryotic nucleotides (AA1492-1493 in T. thermophilus) flip out upon binding of cognate aminoacyl tRNA to the A-site during elongation and interact with the minor groove of the first two base pairs of the base-paired codon-anticodon helix, thereby monitoring the fidelity of elongator tRNA selection (Ogle et al. 2001). Flipping out of AA1492-1493 also occurs during prokaryotic initiation when IF1 binds to the A-site area of the 30S subunit; these bases splay apart, whereas they stack together when cognate tRNA binds to the A-site. Binding of eIF1A, the eukaryotic IF1 homolog (Battiste et al. 2000), or of other factors to the 40S subunit might also alter the conformation of the upper part of h44 and flip out AA1818-1819. Such conformational changes could account for "zero-length" cross-linking of thioU and thioG at [+4] to AA1818-1819 in 48S but not 80S complexes. Cross-linking of thioU[+4] to C1696 in 48S complexes was not consistent with cross-linking of thioU[+4] to the equivalent of rabbit C1691 in H28 of 18S rRNA in human 80S complexes (Bulygin et al. 2005). This discrepancy cannot be explained by a difference in the positions of mRNA in 48S and 80S complexes: In our recent experiments, C1691 cross-linked specifically to thioU[+8] in 48S and 80S complexes (V.G. Kolupaeva, A.V. Pisarev, C.U.T. Hellen, and T.V. Pestova, in prep.), consistent with cross-linking of thioU[+8] to the equivalent nucleotide (C1395) in prokaryotic 70S complexes (Rinke-Appel et al. 1993).
In prokaryotes, C1400 and AA1492-1493 (the equivalents of rabbit C1696 and AA1818-1819) are opposite each other, flanking the mRNA (Fig. 4). Cross-linking of thioU to both sites suggests that structural rearrangements in 48S complexes cause them to be closer to each other than their equivalents in prokaryotic 30S subunit/70S ribosome crystal structures. The inability of thioG to cross-link to C1696 might be due to its specific interaction with A1818 and/or A1819, which could cause further structural adjustments that preclude this cross-link.
The mRNA path in 48S complexes has not previously been studied, so specific cross-linking of eIF2α to [-3] in mRNA has not been reported. eIF2α consists of structured N-terminal and C-terminal domains that are mobile relative to each other; the latter binds eIF2γ (Yatime et al. 2004). eIF2α might thus bind the [-3] nucleotide either through the N-terminal domain or through its unstructured C-terminal tail. The absence of the ~10 C-terminal amino acids of eIF2α and, interestingly, of eIF2β did not influence this interaction. Consistent with the affinities of eIF2-GTP and eIF2-GDP for Met-tRNA_i^Met differing by only one order of magnitude (Kapp and Lorsch 2004), in the absence of sucrose density gradient centrifugation, eIF5-induced hydrolysis of eIF2-bound GTP did not lead to complete dissociation of eIF2 from 48S complexes, so that 30%-35% and 85%-90% of eIF2α could still cross-link to thioU[-3] and thioG[-3], respectively. The fact that the nature of the [-3] nucleotide influenced its cross-linking to eIF2α after eIF5-induced GTP hydrolysis suggests that the eIF2-mRNA interaction influences release of eIF2 during subunit joining. It is possible that without this interaction, GTP hydrolysis would result in greater and even complete eIF2 dissociation.
The finding that eIF5B enhances release of eIF2 from 48S complexes after GTP hydrolysis merits special attention. Although, unlike for its prokaryotic homolog IF2, binding of eIF5B to Met-tRNA_i^Met has not been shown directly, this interaction might occur on the 40S subunit, and after binding to 48S complexes, eIF5B might compete with eIF2 for interaction with Met-tRNA_i^Met. Weakening of eIF2/Met-tRNA_i^Met binding after hydrolysis of bound GTP could permit an interaction between Met-tRNA_i^Met and the C-terminal domain IV of eIF5B to be established, and consequently promote release of eIF2. However, complete release of eIF2 from 48S complexes assembled on mRNA containing thioG[-3] occurred only after ribosomal subunit joining, which suggests that eIF2 is completely released only during the actual ribosomal subunit joining event promoted by eIF5B. mRNA, therefore, influences release of eIF2 as well as of eIF3 from initiation complexes (Unbehaun et al. 2004).
Activities of α-subunit- and β-subunit-deficient eIF2 in 48S complex formation

Specific UV cross-linking to thioG[-3] in mRNA in 48S complexes suggests that eIF2α is involved in recognition of initiation codon context and thus in initiation codon selection. The functionality of this interaction was confirmed in experiments on 48S complex formation in the presence of α-subunit-deficient eIF2 (eIF2βγ) on two mRNAs, both containing two AUG triplets, of which the first had a purine residue either at [-3] or at [+4]. With complete eIF2 but without eIF1, 48S complexes formed almost exclusively on the first AUG triplets of both mRNAs, but in the presence of eIF1, 48S complex formation was more efficient on the AUG triplet with the [-3] purine (80% of total 48S complexes) than with the [+4] purine (50% of total 48S complexes). In the absence of eIF1, the lack of eIF2α had little effect on the efficiency or specificity of 48S complex formation, so that 43S complexes stopped efficiently on the first AUG triplet irrespective of its context. In eIF1's presence, the lack of eIF2α strongly influenced 48S complex formation. First, the combined efficiency of 48S complex formation on the two AUG triplets of both mRNAs was threefold lower than with complete eIF2. Second, whereas the ratio of 48S complexes formed on the first AUG triplet with a [-3] purine and on the second AUG triplet was 4:1 in the presence of complete eIF2, it fell to 1:1 in the absence of eIF2α and became similar to the ratio of 48S complex formation on mRNA with two AUG triplets in which the first was flanked by a [-3] pyrimidine. In the presence of eIF1, 43S complexes assembled without eIF2α therefore could not sense the nature of the [-3] nucleotide, and 48S complexes formed with equal efficiency on AUG triplets whether there was a purine or a pyrimidine at [-3]. This result confirmed the suggested role for eIF2α in discriminating the [-3] context nucleotide. The reduced efficiency of 48S complex formation on the AUG triplet with a [-3] pyrimidine in the absence of eIF2α also suggests that eIF2α's interaction with the [-3] nucleotide, irrespective of its nature, is generally important for 48S complex formation in the presence of eIF1, but that it is the strength of the interaction (which is higher for purines) that is responsible for the [-3] context rule. Our finding that eIF2 is not fully released from 48S complexes upon eIF5-induced GTP hydrolysis and that the extent of its release depends on the nature of the [-3] nucleotide (being only 10%-15% with G at this position) could account for the resistance of 48S complexes to eIF1-mediated dissociation after hydrolysis of eIF2-bound GTP and before the ribosomal subunit joining step.
Although the interaction of the [+4] nucleotide with rpS15 was not base-specific and rpS15 also cross-linked to thioU[+5] (V.G. Kolupaeva, A.V. Pisarev, C.U.T. Hellen, and T.V. Pestova, in prep.), we cannot exclude the possibility that the rpS15-[+4] nucleotide interaction is important for initiation codon selection. We cannot directly test the functional importance of the interaction of the [+4] nucleotide with AA1818-1819, but the base specificity of this interaction indicates that AA1818-1819 are involved not only in monitoring the fidelity of elongator tRNA selection, but also in selection of the initiation codon during initiation. By analogy with eIF2α, the interaction of the [+4] nucleotide with components of the 48S complex (AA1818-1819 and/or rpS15) might also be generally important to stabilize 48S complexes assembled on AUG triplets whether they have a purine or a pyrimidine at [+4].
Purification of factors and ribosomal subunits, and aminoacylation of initiator tRNA
40S and 60S subunits, eIF2, eIF3, eIF4F, and eIF5B were purified from rabbit reticulocyte lysate (RRL), and recombinant eIF1, eIF1A, eIF4A, eIF4B, eIF5, and E. coli methionyl-tRNA synthetase were expressed in E. coli BL21(DE3) and purified as described (Pestova et al. 1996, 1998; Lomakin et al. 2006). α-Subunit-deficient eIF2 was purified as described (Anthony et al. 1990). β-Subunit-deficient eIF2 is always obtained in small amounts during eIF2 purification from RRL as a peak eluted from MonoQ two fractions earlier than complete eIF2. eIF2 with a truncated α-subunit was purified in small quantities from HeLa cells using the purification procedure previously described for eIF2 from RRL, as a peak eluted from MonoQ slightly earlier than complete eIF2. Recombinant eIF2α was expressed in E. coli BL21(DE3) and purified on Ni2+-NTA (Qiagen) and MonoQ. Total native rabbit tRNA (Novagen) was aminoacylated by recombinant methionyl-tRNA synthetase as described (Pestova et al. 1996).
Identification of cross-linked proteins
To identify UV-cross-linked eIFs, ~20 µL of cross-linked ribosomal fractions containing equal amounts of counts were treated with RNase A and subjected to electrophoresis in a NuPAGE 4%-12% Bis-Tris gel (Invitrogen) followed by autoradiography. UV-cross-linked ribosomal proteins were identified by acidic-SDS 2D gel electrophoresis. Complete cross-linked peak fractions (~200,000 c.p.m.) were combined, transferred to buffer B (20 mM Tris-HCl at pH 7.5, 50 mM KCl, 2 mM MgCl2, 2 mM DTT, 0.1 mM EDTA), concentrated on Microcon YM-10 centrifugal filter units (Millipore) to a 100 µL final volume, and treated with RNase A for 30 min at 37°C. These samples were combined with 100 µL of 40S subunits (OD260 = 100 o.u./mL) in buffer B. Proteins were extracted from these mixtures with 100 mM MgCl2 in 67% acetic acid and precipitated with acetone (Hardy et al. 1969). Samples were then resuspended in 8 M urea, 1% 2-mercaptoethanol, 10 mM Bis-Tris acetate (pH 4.2); incubated for 15 min at 37°C; and subjected to first-dimension electrophoresis (Yusupov and Spirin 1988) in 120-mm-long glass tubes with a 2.4-mm inner diameter. First-dimension gels were incubated for 10 min in cathode buffer and combined with second-dimension gels, which had been prepared as described (Schägger and von Jagow 1987). The separating gel (16.5% T and 3% C) contained 13.3% w/v glycerol. Gels were run for 12 h at 40 mA, stained with SimplyBlue SafeStain (Invitrogen), and destained with water for LC-nanospray tandem mass spectrometry of peptides derived by in-gel tryptic digestion at an in-house facility, or fixed with 10% methanol/5% glycerol for drying and autoradiography.
Identification of cross-linked nucleotides in 18S rRNA
After irradiating 48S complexes, rRNA, mRNA, and tRNA were phenol-chloroform extracted and ethanol precipitated. Regions of 18S rRNA cross-linked to 32P-labeled mRNA were first identified by RNase H digestion of 18S rRNA hybridized with a panel of ~20-mer DNA oligonucleotides complementary to different regions of 18S rRNA, essentially as described (Dontsova et al. 1992). 18S rRNA fragments were separated by electrophoresis in 12% denaturing PAGE. Cross-linked and uncross-linked 18S rRNA fragments were visualized by autoradiography and methylene blue staining, respectively. Cross-linked regions were identified and attributed to corresponding uncross-linked fragments of 18S rRNA on stained gels, taking into account the reduced mobility of cross-linked rRNA fragments due to the covalently bound 64-nt mRNA. Precise identification of cross-linked nucleotides in 18S rRNA was done by primer extension inhibition using primers 5'-CAAGTTCGACCGTCTTC-3' and 5'-CCTTCCGCAGGTTCACC-3', complementary to nucleotides 1783-1799 and 1840-1856 of 18S rRNA, respectively, chosen on the basis of RNase H digestion. | 2018-04-03T04:47:30.209Z | 2006-03-01T00:00:00.000 | {
"year": 2006,
"sha1": "e0d36eed94a87260523324083daffa46aba15575",
"oa_license": null,
"oa_url": "http://genesdev.cshlp.org/content/20/5/624.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2ef72f847a54664452c607808c6f990f5da20f24",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
35451606 | pes2o/s2orc | v3-fos-license | Responses of root physiological characteristics and yield of sweet potato to humic acid urea fertilizer
Humic acid (HA) not only promotes the growth of crop roots; it can also be combined with nitrogen (N) to increase fertilizer use efficiency and yield. However, the effects of HA urea fertilizer (HA-N) on root growth and yield of sweet potato have not been widely investigated. Xushu 28 was used as the experimental crop to investigate the effects of HA-N on root morphology, active oxygen metabolism and yield under field conditions. Results showed that nitrogen application alone was not beneficial for root growth and storage root formation during the early growth stage. HA-N significantly increased the dry weight of the root system, promoted differentiation from adventitious root to storage root, and increased overall root activity, total root length, root diameter, root surface area, and root volume. HA-N also increased the activities of superoxide dismutase (SOD), peroxidase (POD), and catalase (CAT), increased the soluble protein content of roots, and decreased the malondialdehyde (MDA) content. HA-N significantly increased both the number of storage roots per plant (by 14.01%) and the average fresh weight per storage root (by 13.7%), while yield increased by 29.56%. In this study, HA-N increased yield through a synergistic increase of biological yield and harvest index.
Introduction
Plant roots are the main organ through which crops absorb nutrients and water, and they are the site where physiologically active substances, such as some amino acids and hormones, are synthesized. The morphology and physiological characteristics of roots affect the growth, development and yield formation of crops [1,2,3,4]. Dysplasia or physiological dysfunction of plant roots will severely affect plant growth and development [5]. One of the main cultivation measures to increase the yield of sweet potato is the application of nitrogen; nitrogen affects root growth and differentiation of sweet potato, and ultimately yield [6]. Within a certain range of nitrogen application, increasing nitrogen in the soil during the early growth stage can increase the total biomass of roots, while the proportion of root biomass differentiating into storage roots gradually decreases [7]. Furthermore, a nitrogen application rate of ≥10 mM nitrate will inhibit root growth [6]. Consequently, it remains problematic to use nitrogen rationally to effectively regulate the balance between absorption and storage functions and to realize a synergistic increase of nutrient absorption and differentiation from adventitious root to storage root.
Previous studies have shown that humic acid (HA) promotes root growth and the formation of lateral roots, and enlarges the root's effective absorption area. HA also improves the biomass, overall activity and absorption ability of the root system [8,9,10], increases crop carbon and nitrogen metabolism capabilities [11], and promotes differentiation from adventitious root to storage root, increasing yield [12]. HA has a strong capability for complexation and adsorption: it easily complexes with urea and shows significant slow-release effects on nitrogen release and utilization [13,14]. Compared with inorganic fertilizers, HA-coated fertilizers convert into plant-available nutrients at a slower rate after application, making them advantageous by reducing fertilizer use and labor requirements. In addition, as weathered coal is a relatively cheap raw material, HA-coated fertilizer can have lower costs. These factors have led HA slow-release fertilizers to gradually become a main research area for new fertilizers in China [15]. Previous studies have shown that HA fertilizer improves fertilizer use efficiency, the growth and development of crops (such as potato, maize, and wheat), root activity, dry matter accumulation, and crop yield and quality [13,16]. However, few studies have specifically focused on the effects of HA-N on root morphology, active oxygen metabolism and yield of sweet potato. Therefore, we investigated the effects of HA-N on root biomass, root activity, physiological characteristics of root senescence, and yield formation under field conditions. The results of this study will provide a basis for technical guidance for the rational application of HA-N in sweet potato cultivation.
Field design
Field experiments were undertaken from June to October 2014 and from June to October 2015 at the Anhui Agricultural University experimental station (33°16′N, 117°45′E), China. Rainfall during the sweet potato growing season was 634 mm in 2014 and 620 mm in 2015. No irrigation was applied in either year. Xushu 28, a white-fleshed and widely cultivated sweet potato cultivar in China, was selected for this experiment. The growing medium was a sandy loam, and the 0-20 cm soil layer contained 1.02% organic matter, 0.54 g kg^-1 total nitrogen, 30.09 mg kg^-1 available nitrogen, 18.07 mg kg^-1 available phosphorus, and 83.26 mg kg^-1 available potassium. The pH of the soil was 7.83.
The experimental field used in this study belongs to Anhui Agricultural University, a comprehensive research institution with a research ethics review committee that ensures experiments do no harm to crops, animals or humans. Our study was approved by this university, so no specific permissions were required for the described field experiments. The sampling locations were not privately owned or protected in any way, and this field study did not involve any endangered or protected species. In addition, no vertebrate species were involved in this study.
Five treatments were designed in this study: control (C); humic acid urea treatment (HA-N; weathered coal, HA activator and nitrogen fertilizer completely mixed and extruded into granules; 16% N; 562.5 kg hm^-2); weathered coal treatment (HA; 135 kg hm^-2; humic acid content equal to that in the HA-N treatment); urea treatment (N; 195.7 kg hm^-2; nitrogen content equal to that in the HA-N treatment); and humic acid plus urea treatment (HA+N; weathered coal and nitrogen fertilizer completely mixed and extruded into granules; 16% N; 562.5 kg hm^-2). Each treatment was replicated three times in a randomized block design. For all treatments, 150 kg hm^-2 of phosphorus fertilizer and 225 kg hm^-2 of potassium fertilizer were also applied. All fertilizers were used as base fertilizers. Other management procedures followed standard agricultural practices. Row spacing was 0.8 m and plant spacing was 0.25 m. Plant density was 50,000 plants hm^-2. The production procedure of the HA slow-release fertilizer relied on two features of HA-coated fertilizer: (1) a specific concentration of sodium hydroxide can increase the activity of weathered coal HA, and (2) activated HA can significantly increase the adsorption of nutrient ions. These features serve as the slow-release mechanisms of HA-coated slow-release fertilizer. Firstly, weathered coal was activated with a specific concentration of sodium hydroxide before being filtered and washed with water; the pH was finally adjusted using ammonia. After pH adjustment, the weathered coal sample was mixed with nitrogen for adsorption. The adsorbed sample was then blended with inorganic fertilizer and granulated via disc granulation, the product being used as HA slow-release fertilizer after drying.
The root observation experiment was carried out within the plot experiment. Before ridges were formed in the field, 10 micro-plots were separated using roofing sheets (50 cm height) per treatment. A 30 µm nylon net was tied horizontally at the base of each micro-plot to allow water infiltration while confining root growth. The volume of fertilizer applied, application time and application methods were identical to those in the experimental plots.
Sampling and measurements
Root morphology was measured at 40 d and 120 d after planting. All root systems in the soil layer of the separated micro-plots were removed and washed slowly. A 100-mesh sieve was placed under the root system during washing to prevent roots from being washed away. Total root weight was recorded after blotting dry with absorbent paper. Roots from three individual plants with consistent growth were selected and scanned using a root scanner (LA1600+ scanner, Canada). The WinRHIZO root analysis software was used to analyze the scanned root system images.
The root physiological indices were measured at 40 d, 80 d and 120 d after planting. Roots (0.5 g) were homogenized in 5 cm^3 of extraction buffer (50 mM phosphate-buffered saline (PBS) + 0.4% polyvinylpyrrolidone (PVP), pH 7.0) with a pre-chilled mortar and pestle on ice. The homogenate was centrifuged at 10,000×g for 30 min at 4°C and the supernatant was collected as a crude enzyme extract.
Superoxide dismutase (SOD) activity was assayed by monitoring the inhibition of the photochemical reduction of nitro blue tetrazolium (NBT). One unit of SOD activity was defined as the amount of enzyme required to cause 50% inhibition of the reduction of NBT, monitored spectrophotometrically (UV-2401, Shimadzu Corp., Japan) at 560 nm. Activity was expressed as units (U) per gram of fresh root mass (FW).
Peroxidase (POD) activity was determined using the guaiacol oxidation method [17]. Guaiacol oxidation was monitored spectrophotometrically for 60 s at 470 nm. Catalase (CAT) activity was measured by monitoring the decrease in absorbance at 240 nm for 60 s as a consequence of H2O2 consumption [18]. Malondialdehyde (MDA) was estimated by measuring the content of 2-thiobarbituric acid-reactive substances in a supernatant prepared in 20% trichloroacetic acid containing 0.5% 2-thiobarbituric acid and heated at 95°C for 25 min. MDA content was then determined spectrophotometrically at 532 nm absorbance and corrected for nonspecific turbidity at 600 nm.
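As a rough illustration of how such readings translate into the reported units, the Python sketch below converts raw absorbances into SOD units and MDA content per gram of fresh weight. The 50%-inhibition definition of one SOD unit follows the text; the MDA extinction coefficient (155 mM^-1 cm^-1) and the extract-volume/fresh-weight scaling are common assumptions not stated in this paper.

def sod_units_per_g(a_blank, a_sample, extract_ml, aliquot_ml, fw_g):
    """One unit = enzyme causing 50% inhibition of NBT photoreduction."""
    inhibition = (a_blank - a_sample) / a_blank        # fractional inhibition
    units_in_aliquot = inhibition / 0.5                # 50% inhibition == 1 U
    return units_in_aliquot * (extract_ml / aliquot_ml) / fw_g

def mda_nmol_per_g(a532, a600, extract_ml, fw_g, path_cm=1.0):
    """MDA via the TBA adduct; epsilon = 155 mM^-1 cm^-1 (assumed)."""
    conc_mM = (a532 - a600) / (155.0 * path_cm)        # mM in the cuvette
    return conc_mM * 1000.0 * extract_ml / fw_g        # nmol per g FW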
Yield
All storage roots were harvested and weighed in the yield measurement area. Storage roots and plants were counted. The number of storage roots per plant and the average fresh weight per storage root were also calculated.
Statistical analysis
The analysis of variance was performed with SPSS 18.0 (SPSS Inc., Chicago, USA). Data from each sampling date were analyzed separately. Means were compared using Fisher's protected least significant difference at P < 0.05 (LSD 0.05).
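For readers reproducing the comparison outside SPSS, Fisher's protected LSD amounts to running the ANOVA first and, only if it is significant, declaring two means different when their gap exceeds LSD = t(1 - alpha/2, df_error) * sqrt(2*MSE/n). A minimal Python sketch under the equal-replication assumption (function and variable names are illustrative):

from itertools import combinations
from scipy import stats

def fisher_lsd(groups, alpha=0.05):
    """groups: dict mapping treatment name -> list of replicate values."""
    f_stat, p = stats.f_oneway(*groups.values())
    if p >= alpha:                   # "protected": stop if the ANOVA F-test is n.s.
        return []
    n = len(next(iter(groups.values())))               # equal replication assumed
    k = len(groups)
    big_n = sum(len(v) for v in groups.values())
    mse = sum((x - sum(v) / len(v)) ** 2
              for v in groups.values() for x in v) / (big_n - k)
    lsd = stats.t.ppf(1 - alpha / 2, big_n - k) * (2 * mse / n) ** 0.5
    means = {t: sum(v) / len(v) for t, v in groups.items()}
    return [(a, b) for a, b in combinations(groups, 2)
            if abs(means[a] - means[b]) > lsd]         # significantly different pairs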
Effects of HA-N on storage root yield and components
Compared with the control (C), all fertilization treatments significantly increased the storage root yield of sweet potato (Table 1). The HA, N, HA+N, and HA-N treatments increased yield by 5.52%, 6.88%, 21.46%, and 29.56%, respectively (mean value of the two years). Compared with HA, HA+N and HA-N increased yield by 15.1% and 22.78%, respectively. The yield-increasing effect of HA-N was significantly better than that of HA+N. Both the number of storage roots per plant and the average fresh weight per storage root increased under the HA treatments; however, only the results for the HA-N treatment attained a significant level. The treatment applying only N resulted in a decrease in the number of storage roots, although it significantly increased the average fresh weight per storage root. These results indicated that nitrogen increased yield by increasing the average fresh weight per storage root, while HA-N increased both the number of storage roots and the average fresh weight per storage root.
Effects of HA-N on root dry weight and the morphology characteristic index
Root dry weight and vine/tuber ratio. The application of fertilizer significantly increased the accumulation of dry matter in sweet potato (Table 2). At the early growth stage, the three HA treatments increased the dry matter accumulation of root tubers and aerial parts, while they decreased the vine/tuber ratio. Compared with HA and HA-N, HA+N significantly increased the dry matter accumulation of root tubers and above-ground plant parts, with the effects on above-ground parts being more apparent. However, N application alone noticeably decreased the dry weight of storage and absorbing roots, while significantly increasing the dry matter accumulation of above-ground plant parts and the vine/tuber ratio. Dry matter accumulation of storage roots and above-ground plant parts showed a trend of rapid growth from the fast thickening period to the harvest period. All fertilizer application treatments significantly increased the dry matter accumulation of absorbing roots, storage roots, and above-ground plant parts, and the vine/tuber ratio. The dry weights of absorbing and storage roots were in the order HA-N > HA+N > HA = N. The vine/tuber ratio was in the order HA-N < HA+N = HA < N.
Root morphology characteristics. At the early growth stage, compared with the control, the N treatment significantly decreased total root length, root diameter and root surface area, while it slightly decreased root tip number and root volume (Table 3). The HA and HA+N treatments increased root diameter but decreased total root length, root tip number, root surface area, and root volume. The HA-N treatment significantly increased root diameter and root surface area. These results indicate that the application of HA-N enlarged roots and promoted the differentiation from adventitious root to storage root, while the application of nitrogen fertilizer alone reduced root thickness and ultimately inhibited root differentiation. At the harvest period, compared with the control, all fertilizer application treatments increased total root length, root diameter, root tip number, root surface area and root volume to different extents; the effects of the HA+N and HA-N treatments were maximal.

Effects of HA-N on root vigor and soluble protein content

Root vigor. Root vigor, an index reflecting the nutrient absorption efficiency of plants, attained its maximum value for each treatment 80 d after transplantation (Table 4). Compared with the control, all four fertilizer application treatments were beneficial for increasing root vigor. At the early growth stage, root vigor was in the order HA+N = HA-N > N > HA. From the fast thickening period to the harvest period, root vigor was in the order HA-N = HA+N > HA > N. These results indicated that HA-N was beneficial for increasing root vigor and maintained higher root physiological activity.
Soluble protein
At the early growth stage, compared with the control, the N, HA, HA+N and HA-N treatments increased the soluble protein content in roots by 25.27%, 16.67%, 23.66% and 8.06%, respectively, with the N treatment showing the largest increase (Table 4). From the fast thickening period to the harvest period, root soluble protein content was in the order HA-N > HA+N > HA > N > C. These results indicated that nitrogen fertilizer was beneficial to the synthesis of soluble proteins at the early growth stage, while HA-N significantly increased soluble protein content at the middle and late thickening stages of storage roots, thus delaying root senescence.
Effects of HA-N on antioxidant enzyme activity and MDA content in root system
From the early period to the fast thickening period of storage roots, POD activity was highest under the N treatment, being significantly higher than under the C and HA treatments; however, there was no significant difference between HA-N and HA+N. At the harvest period, POD activity was highest under the HA-N treatment and significantly higher than under the HA and N treatments; however, there was no significant difference compared with the HA+N treatment (Fig 1).
CAT activity in sweet potato roots initially increased before decreasing, attaining a maximum value 80 d after planting. Compared with the control, CAT activity was increased to different extents under the different fertilizer treatments. CAT activity was significantly higher under the HA-N and HA+N treatments, indicating that HA-N was beneficial for the elimination of H2O2 in roots and thus prevented the root system from aging prematurely. SOD activity results were similar to those of CAT activity under the different treatments. Compared with the control, all four fertilizer treatments increased SOD activity in roots. SOD activities under the HA-N and HA+N treatments were significantly higher than under the HA and N treatments. However, the difference in SOD activity between the HA+N and HA-N treatments was not significant.
MDA content gradually increased after planting, with a slow increase under the HA-N treatment and a fast increase under the control. Compared with the control, all fertilizer treatments decreased MDA content. Among the four fertilizer treatments, MDA content was lowest under the HA-N treatment, followed by the HA+N treatment; MDA contents were highest under the HA and N treatments.
Correlation between root characteristics and yield
Correlation analysis between root morphological indices and yield revealed that root diameter had a significantly positive correlation with yield and storage root number per plant (correlation coefficients were 0.88 and 0.97, respectively) at the early growth stage. Root diameter, root tip number, root surface area, root volume and root activity had either significant or extremely significant positive correlations with yield (correlation coefficients were 0.89, 0.88, 0.96, 0.83 and 0.86, respectively) during the harvest period. As a result, higher root tip number, root surface area, root volume and root activities were maintained at later thickening periods of sweet potato, an occurrence which delayed the rate of root senescence and played an important role in increasing yield.
Discussion
The relationship between root morphological-physiological characteristics, yield and yield components of sweet potato

Plant root morphology and physiological characteristics are closely correlated with the growth and development of above-ground plant parts and with yield formation. Active roots can provide sufficient nutrients, water and plant hormones for the growth of above-ground plant parts and consequently promote biological yield. Conversely, active above-ground plant parts can provide sufficient carbohydrates for transport to the roots and promote root function [19,20]. Previous studies have shown that root biological yield is closely correlated with that of the above-ground plant parts [21,22]. For example, a large root volume, coupled with strong absorption ability in the upper soil layer, had a significantly positive correlation with rice yield [23]. Established mathematical regression models have indicated that yield can be increased by increasing root length and weight while retaining a relatively low number of roots; however, when root length and weight increase beyond a certain extent, yield decreases with a further increase of root biomass [24], indicating that root development affects final yield formation. Root length, volume, surface area and the number of root tips are the main morphological indices reflecting root development [25]. Higher total root length, surface area, volume and activity ensured stronger root absorption ability, promoted the formation of effective ears in rice and positively affected rice yield [22,25,26].
Compared with other crops, the root structure of sweet potato is distinctive in that the root is not only the organ for nutrient absorption but also the storage organ for photoassimilates [7,27,28,29]. Sweet potato roots characteristically elongate first and then expand. When the adventitious roots have elongated to a certain extent, a particular region near the root tip gradually expands. Numerous lateral roots constantly arise during the growth of adventitious roots and develop into absorbing roots, thereby promoting the absorption and utilization of water and soil nutrients [30]. Furthermore, the growth and development of lateral roots determines the ability of adventitious roots to differentiate into storage roots [28,29]. Decreases in total root length, root surface area, and root volume result in a reduction of sweet potato yield. Promoting root development and increasing root surface area and volume can promote nutrient absorption and dry matter accumulation in sweet potato [31]. Increased activity of the absorbing roots ultimately promotes the transfer and accumulation of nutrients and photosynthetic products into storage roots, thus increasing yield [31,32]. The results of this study showed that root diameter had a significantly positive correlation with the number of storage roots at the early stage of storage root formation. At the late growth stage, root tip number, root surface area, root volume and root activity significantly affected yield. Increased root tip numbers, enlarged root surface area and root volume, as well as higher root activity, ensured that the roots had a stronger nutrient absorption ability, which had a further positive effect on crop yield.
Effects of HA-N on root morphological-physiological characteristics and yield
Nitrogen is one of the main factors affecting sweet potato growth; it plays a central role in root growth and construction, and is closely related to the differentiation and formation of storage roots [28,32]. Low amounts of nitrogen application presented inhibiting effects on root growth [6]. Within a certain range of nitrogen application, total root biomass increased with increasing nitrogen application rate at the early growth stage, while the proportion of root biomass differentiating into storage roots gradually decreased [7]. An insufficient nitrogen supply led to small and fine sweet potato roots, which were not conducive to root differentiation and storage root formation, consequently decreasing yield [33,34]. However, excessive nitrogen application also had adverse effects on the differentiation and formation of storage roots, delayed tuberization and was not conducive to yield [32]. The results of this study showed that nitrogen application alone decreased the dry mass of storage roots and absorbing roots, as well as total root length, root diameter, and root surface area at the early stage of storage root thickening, and reduced the number of storage roots. Dry matter accumulation in above-ground plant parts and the vine/tuber ratio were increased, as were total root length, root diameter, root tip number, root surface area, root volume, and root activity at the harvest stage. The HA-N treatment significantly increased root diameter, root surface area, the number of storage roots, and root activity, as well as promoting dry matter accumulation of above-ground plant parts, roots and storage roots, and improved the fresh weight per storage root compared with N application alone. Compared with the N and HA+N treatments, while dry weight, total length, surface area and volume of roots all strongly increased, root activity and yield were much higher under the HA-N treatment. This indicates that HA-N promotes root growth of sweet potato, increases the number of storage roots, maintains root activity at the late growth stage, promotes dry matter accumulation in storage roots, and increases yield.
The effects of HA-N on active oxygen metabolism

Plant senescence, a cumulative process of metabolic disorders involving active oxygen and free radicals [35], is closely related to active oxygen metabolism. The coordinated function of antioxidant enzymes, such as SOD, POD and CAT, effectively eliminates active oxygen free radicals [36,37]. The results of this study revealed that nitrogen application alone and HA-N application could effectively increase the activities of SOD, POD and CAT, decrease MDA content, and significantly increase soluble protein content compared with no fertilizer. However, nitrogen application alone strongly affected the activities of anti-senescence enzymes mainly at the early growth stage, whereas HA-N strongly promoted antioxidant enzyme activities in roots during the whole growth period, with especially significant effects at the late growth stage. This finding suggests that, by retaining higher activities of protective enzymes and thus eliminating active oxygen in time, HA-N decreased peroxide levels, relieved membrane damage, delayed root senescence, and increased the mineral nutrient absorption ability of roots.
Conclusions
HA-N effectively promoted the differentiation from adventitious root to storage root at the early growth stage and increased the number of storage roots per plant. HA-N increased yield through a synergistic increase of biological yield and harvest index. Higher biomass, activity, absorbing area and volume of roots, as well as higher anti-ageing enzyme activities, promoted nutrient absorption and the accumulation of above-ground and underground biomass of sweet potato. This was the physiological basis for the yield increase observed under HA-N.
Supporting information S1 Dataset. S1 Dataset contains data on the activities of SOD, POD and CAT and on MDA content (Fig 1). (XLSX) | 2018-04-03T02:54:07.836Z | 2017-12-18T00:00:00.000 | {
"year": 2017,
"sha1": "982c97fdcecb01556221d13cf75881432ab6ddc8",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0189715&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "982c97fdcecb01556221d13cf75881432ab6ddc8",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
229158313 | pes2o/s2orc | v3-fos-license | C2C-GenDA: Cluster-to-Cluster Generation for Data Augmentation of Slot Filling
Slot filling, a fundamental module of spoken language understanding, often suffers from insufficient quantity and diversity of training data. To remedy this, we propose a novel Cluster-to-Cluster generation framework for Data Augmentation (DA), named C2C-GenDA. It enlarges the training set by reconstructing existing utterances into alternative expressions while keeping the semantics. Different from previous DA works that reconstruct utterances one by one independently, C2C-GenDA jointly encodes multiple existing utterances of the same semantics and simultaneously decodes multiple unseen expressions. Jointly generating multiple new utterances allows the model to consider the relations between generated instances and encourages diversity. Besides, encoding multiple existing utterances endows C2C with a wider view of existing expressions, helping to reduce generation that duplicates existing data. Experiments on ATIS and Snips datasets show that instances augmented by C2C-GenDA improve slot filling by 7.99 (11.9%) and 5.76 (13.6%) F-score points, respectively, when there are only hundreds of training utterances.
Introduction
Slot filling is a fundamental module of Spoken Language Understanding (SLU) in task-oriented dialogue systems (Young et al. 2013). The "inputs" in Figure 1 show examples of slot filling, where key entities within user utterances are tagged with slot labels. Due to the high cost of manual annotation and the rapidly changing nature of dialogue domains, slot filling often faces a lack of quantity and diversity of training data. Such insufficiency in training data poses serious challenges for slot-filling models to handle the myriad ways in which users express their demands.
Data augmentation (DA) technique, which improves the diversity and quantity of training data with synthetic instances, offers an appealing solution to the data scarcity problem of SLU. Success has been achieved with data augmentation on a wide range of problems, including computer vision (Krizhevsky, Sutskever, and Hinton 2012), speech recognition (Hannun et al. 2014), text classification (Zhang, Zhao, and LeCun 2015a), and question answering (Fader, Zettlemoyer, and Etzioni 2013).
Code: https://github.com/Sanyuan-Chen/C2C-DA
Figure 1: Examples of sequence-to-sequence data augmentation versus cluster-to-cluster data augmentation. In the Seq2Seq setting, each input (e.g., "Show me the closest<distance> shop<pos>") is reconstructed into one generation (e.g., "Please list all the <pos> <distance>") independently; in the cluster-to-cluster setting, a cluster of inputs is jointly reconstructed into a cluster of generations (e.g., "Please list all the <pos> <distance>", "I need some <pos> is there any <distance>", "Show me the address of <distance> <pos>"). Symbols in the figure denote novel utterances, duplication of existing utterances, and duplication of other generated utterances.
For slot filling, state-of-the-art data augmentation works focus on generative methods. One of their typical ideas is generating new utterances by reconstructing existing utterances into alternative expressions while keeping the semantics. Previous works learn a Sequence-to-Sequence (Seq2Seq) model to reconstruct each existing utterance one-by-one (Yoo 2020; Hou et al. 2018; Kurata, Xiang, and Zhou 2016). However, these methods tend to generate duplicated utterances, because they can only consider the expression variance between one input-output pair at a time. For example in Figure 1, each new utterance is only generated to be different from the corresponding input utterance, and thus often unconsciously duplicates other generated utterances or other input utterances. Such duplication hinders the effectiveness of data augmentation. We argue that these defects can be easily avoided by breaking the shackles of the current one-by-one augmentation paradigm and considering the extensive instance relations during generation.
In this paper, we propose a novel Cluster-to-Cluster Generation framework for Data Augmentation of slot filling, named C2C-GenDA. As shown in Figure 1, different from previous works that augment each utterance one-by-one independently, we jointly generate multiple new instances by reconstructing a cluster of existing utterances with the same semantics. Such cluster-to-cluster generation allows the model to consider the duplication between generated utterances and makes it aware of more existing expressions in the original data. These advantages of C2C-GenDA remedy the aforementioned defects of Seq2Seq DA and help to improve generation diversity. To encourage the diversity and quality of generation, we propose the Duplication-aware Attention and Diverse-Oriented Regularization mechanisms, both of which promote diverse decoding. To learn to generate diverse new utterances, we train the C2C-GenDA model with cluster-to-cluster 'paraphrasing' pairs, and introduce a Dispersed Cluster Pairing algorithm to extract these cluster pairs from existing data.
Experiments on ATIS and Snips datasets show that the proposed method significantly improves the performance of slot-filling systems. Case studies and analysis of the augmented data also confirm that our method generates diverse utterances. Our contributions can be summarized as follows: (1) We propose a novel Cluster-to-Cluster generation framework for data augmentation of slot filling, which can remedy the duplication problem of existing one-by-one generation methods.
(2) We propose the Duplication-aware Attention and Diverse-Oriented Regularization mechanisms to improve the diversity of the augmented utterances. (3) We introduce a Dispersed Cluster Pairing algorithm to extract cluster-to-cluster 'paraphrasing' pairs for data augmentation model training.
Problem Description
In this paper, we study data augmentation for the slot filling task, which maps utterances into semantic frames (slot type and slot value pairs). Slot filling is commonly treated as a sequence labeling problem, where slot type labels are assigned to contiguous sequences of words, indicating that these sequences are the corresponding slot values.
We specify data augmentation (DA) for slot filling as exploiting existing training instances to generate new expressions for each semantic frame. Suppose the existing slot filling training data is D. Given a semantic frame s_j and the corresponding existing utterances, DA generates a set of new instances D'_{s_j} by associating new utterances with the semantic frame. Finally, DA takes the union of all new instances D' = ∪_{s_j} D'_{s_j} as the additional data to reinforce the model training.
Proposed Framework
In this section, we present an overview of our data augmentation framework and introduce the Cluster2Cluster generation model. Then, we discuss how to extract cluster-to-cluster paraphrasing data for generation model training.
Overview
Here, we introduce the overview of the proposed cluster-to-cluster data augmentation framework for slot filling. For each semantic frame, we use a Cluster2Cluster (C2C) model to generate new expressions from existing utterances. The input of our framework is a cluster of existing instances for a certain semantic frame, and the output is a cluster of generated new instances with unseen expressions.
Following Hou et al. (2018), we perform delexicalized generation. Specifically, both the inputs and outputs of the C2C generation model are delexicalized utterances, where slot value tokens are replaced by slot label tokens. For the example in Figure 1, C2C takes in "show me the <distance> <pos>" and reconstructs the expression as "please list all the <pos> <distance>". The delexicalization focuses the model on generating diverse expressions rather than slot values and reduces the vocabulary size. After generation, we recover the delexicalized utterances by filling the slots with context-suitable slot values. Such delexicalization is important since it allows us to generate both the utterance and accurate slot annotations simultaneously.
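A minimal Python sketch of this delexicalize-and-refill cycle, with toy slot labels and a toy value inventory (the actual preprocessing code of the paper may differ):

import random

SLOT_VALUES = {"<distance>": ["closest", "nearest", "nearby"],
               "<pos>": ["shop", "hospital", "book store"]}

def delexicalize(tokens, bio_labels):
    """Replace each BIO-tagged slot-value span with its slot-label token."""
    out = []
    for tok, lab in zip(tokens, bio_labels):
        if lab.startswith("B-"):
            out.append("<" + lab[2:] + ">")   # span start -> one placeholder
        elif lab == "O":
            out.append(tok)                   # I- tokens inside a span are dropped
    return out

def refill(delex_tokens):
    """Fill placeholders with context-suitable values; labels come for free."""
    tokens, labels = [], []
    for tok in delex_tokens:
        if tok in SLOT_VALUES:
            value = random.choice(SLOT_VALUES[tok]).split()
            slot = tok[1:-1]
            tokens += value
            labels += ["B-" + slot] + ["I-" + slot] * (len(value) - 1)
        else:
            tokens.append(tok)
            labels.append("O")
    return tokens, labels

Because the slot placeholders travel through generation intact, the refilled output comes with exact BIO annotations, which is what makes delexicalized DA attractive for sequence labeling.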
To learn the ability of generating diverse and new expressions, we construct cluster-to-cluster paraphrasing pairs from original training data with the Dispersed Cluster Pairing algorithm, which simulates the data augmentation process of generating novel expressions from existing expressions for a specific semantic frame.
Cluster2Cluster Generation Model
Cluster2Cluster (C2C) model is a generation model that lies at the core of our C2C-GenDA framework and aims to reconstruct input utterances into alternative expressions while keeping the semantics. As Figure 2 shows, the C2C model first encodes the input cluster of utterances C = {u_i}_{i=1}^{M} for a certain semantic frame, then jointly decodes a new cluster of utterances C' = {u'_i}_{i=1}^{M'} with different expressions, where M and M' are the sizes of the input and output clusters, respectively.
To further encourage the diversity of the generated utterances, we propose two novel mechanisms: (1) Duplication-aware Attention (DAA), which attends to the existing expressions to avoid duplicated generation at each decoding step, and (2) Diverse-Oriented Regularization, which guides the synchronized decoding of multiple utterances to improve the internal diversity of the generated cluster.
Cluster Encoder We jointly encode multiple input utterances by concatenating them and representing the whole sequence with an L-layer transformer (Vaswani et al. 2017):

H_0 = [e_{1,1}, ..., e_{1,|u_1|}, ..., e_{M,1}, ..., e_{M,|u_M|}],
A_l = LN(H_{l-1} + MultiHead(H_{l-1}, H_{l-1})),
H_l = LN(A_l + FFN(A_l)), for l = 1, ..., L,
R = H_L,

where R is the final representation of the input tokens, e_{i,j} is the embedding of the j-th token in the i-th input utterance, H_0 is the package of all input token embeddings, and H_l is the output of the l-th layer. LN is layer normalization, MultiHead(Q, V) is the multi-head self-attention function operating on vector packages of queries Q and values V (also used as keys), and FFN is a position-wise feed-forward network.

Step Prediction Intuitively, we decode the r-th target utterance u'_r depending on both the input cluster C and the other output utterances C' \ {u'_r}. We also incorporate the diversity rank token #r (Hou et al. 2018) as a generation condition to encourage diversity and distinguish different output utterances. Details of the diversity rank in C2C will be introduced in a later section. Then the C2C model is formalized as:

p(C' | C) = ∏_{r=1}^{M'} p(u'_r | C, #r, C' \ {u'_r}).

However, it is unrealistic to decode a target utterance depending on all the other target utterances, because we jointly decode all the target utterances and the generation of the other target utterances has not finished. Therefore, we approximate the dependence between target utterances and condition the decoding on the already generated tokens of all the target utterances. For each step, we simultaneously decode one token for each of the target utterances, depending on all the previously decoded tokens {u'_{i,1:t-1}}_{i=1}^{M'}:

p(C' | C) ≈ ∏_{t=1}^{T} ∏_{r=1}^{M'} p(u'_{r,t} | C, #r, {u'_{i,1:t-1}}_{i=1}^{M'}),

where T is the number of decoding steps. We calculate the decoding probability for the t-th step of the r-th utterance as p_{r,t} = Softmax(MLP(h_{r,t})), where h_{r,t} is a hidden state that combines feature representations of C, #r, and {u'_{i,1:t-1}}_{i=1}^{M'}. Here, we obtain the hidden state h_{r,t} with DAA, which contains two terms: \bar{h}_{r,t} and \tilde{h}_{r,t}. The first term \bar{h}_{r,t} mainly records the information of which token should be generated. To achieve this, \bar{h}_{r,t} encodes the previously decoded tokens of the current utterance and the semantic information from the input cluster. Since \bar{h}_{r,t} encodes existing expressions in the input cluster, it also helps reduce generation that duplicates existing expressions. For the r-th target utterance, we compute \bar{h}_{r,t} = \bar{R}_{r,t} with an L-layer transformer as decoder, where \bar{R}_r is a package of hidden states for all t decoding steps, \bar{H}_0 is the package of all decoded token embeddings, \bar{H}_l is the output of the l-th decoding layer, and H_l is the input cluster representation from the l-th encoding layer.
The second term \tilde{h}_{r,t} mainly records the duplicated expressions that should not be generated; it encodes the expressions generated by the other target utterances as \tilde{h}_{r,t} = MultiHead(\bar{h}_{r,t}, {\bar{h}_{i,1:t-1}}_{i≠r}). Finally, the hidden state for decoding is h_{r,t} = \bar{h}_{r,t} − λ·\tilde{h}_{r,t}, where λ is a balance factor. The subtraction makes h_{r,t} different for each target utterance, and −\tilde{h}_{r,t} can implicitly punish the decoding of commonly shared words.
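The combination step can be sketched in PyTorch as follows; the tensor shapes and the single attention module standing in for the decoder-side machinery are simplifying assumptions, not the paper's exact implementation:

import torch.nn as nn

class DuplicationAwareAttention(nn.Module):
    """h = h_bar - lambda * h_tilde, where h_tilde attends over the
    previously decoded hidden states of the other target utterances."""
    def __init__(self, d_model, n_heads=8, lam=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.lam = lam

    def forward(self, h_bar, h_others):
        # h_bar:    (B, 1, d) current utterance's decoder state at step t
        # h_others: (B, S, d) decoded states of the other output utterances
        h_tilde, _ = self.attn(h_bar, h_others, h_others)
        return h_bar - self.lam * h_tilde   # subtraction punishes shared words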
Model Training with Diverse-Oriented Regularization
We train the C2C model with a Diverse-Oriented Regularization (DOR) to encourage internal diversity within the generated utterance cluster.
To achieve this, we propose to enlarge the distance between the distributions of utterances in the output cluster. However, the distribution of an utterance is hard to estimate during the decoding process. Thus, we approximately enlarge the distance between two utterances' distributions by encouraging the divergence of their token distributions. As shown in Figure 2, we train the model to enlarge the Kullback-Leibler divergence (KL) between the decoding distributions of different output utterances at each step. Formally, we define the distance between two output utterances u'_{r_i} and u'_{r_j} as the average per-step divergence:

D(u'_{r_i}, u'_{r_j}) = (1/T) ∑_{t=1}^{T} KL(p_{r_i,t} || p_{r_j,t}),

where p_{r,t} denotes the token distribution of the r-th output utterance at the t-th decoding step. Then we define the Diverse-Oriented Regularization of generation as the negated sum of pairwise distances:

L_DOR = − ∑_{r_i ≠ r_j} D(u'_{r_i}, u'_{r_j}).

Overall, we train the C2C model to minimize:

L = L_MLE + γ · L_DOR,

where γ is a balancing factor and L_MLE is the standard maximum-likelihood generation loss.
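Rendered in PyTorch, the regularizer could look like the sketch below (how the per-step distributions are batched, and whether pairwise distances are summed or averaged, are my assumptions):

import torch.nn.functional as F

def dor_loss(logits):
    """logits: (M', T, V) decoding logits for the M' output utterances.
    Returns the negated mean pairwise per-step KL; adding gamma * dor_loss
    to the MLE loss and minimizing enlarges the divergence."""
    logp = F.log_softmax(logits, dim=-1)
    p = logp.exp()
    m = logits.size(0)
    total, pairs = 0.0, 0
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            # KL(p_i || p_j), averaged over the T decoding steps
            kl = (p[i] * (logp[i] - logp[j])).sum(-1).mean()
            total, pairs = total + kl, pairs + 1
    return -total / max(pairs, 1)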
Generation Pre-training When training data is insufficient, the data augmentation model itself is often poorly trained due to limited expression in the training data. To remedy this, we initialize the transformer encoder/decoder with pre-trained language model GPT-2 (Radford et al. 2019).
Cluster-to-Cluster Data Construction
To learn to generate diverse new utterances, we train the C2C model with cluster-to-cluster 'paraphrasing' pairs extracted from existing training data, and propose a Dispersed Cluster Pairing algorithm to construct these pairs.
We hope the cluster-to-cluster generation pairs simulate the data augmentation process, where we generate diverse new utterances from limited expressions. Therefore, given all utterances with the same semantics, we gather similar utterances as an input cluster and pick the utterances with the most different expressions as the output cluster. For each semantic frame s, we construct the input cluster C with lexical clustering and construct the output cluster C' with a furthest-including mechanism. Figure 3 and Algorithm 1 present the workflow of the cluster-to-cluster data construction. Firstly, we perform lexical clustering on the utterances with the K-Medoids clustering method (Park and Jun 2009). Each lexical cluster contains similar utterances and is used as an input cluster C.
Then, for each source cluster C, we sample target utterances according to a furthest-including principle. Each time, we pick the utterance u' that has the highest diversity score and include it in the target cluster C'.
We compute the diversity score between a candidate utterance $u'$ and the union of the source cluster $C$ and the current target cluster $C'$ as $\mathrm{DS}(u') = \min_{u \in C \cup C'} \mathrm{EditDistance}(u, u')$. Notice that maximizing the diversity score between $u'$ and the source cluster $C$ increases the target cluster's novelty against the source cluster, while the diversity score between $u'$ and the target cluster $C'$ helps to avoid duplication within the target cluster.
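A minimal sketch of this furthest-including selection follows, using a plain word-level Levenshtein distance as EditDistance; the function names and toy utterances are ours, and the K-Medoids step that produces the input clusters is assumed to have run already.

```python
# Greedy "furthest including" construction of an output cluster.
def edit_distance(a, b):
    a, b = a.split(), b.split()
    # Standard Levenshtein DP table over word sequences.
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[-1][-1]

def build_output_cluster(candidates, input_cluster, m):
    output = []
    pool = [u for u in candidates if u not in input_cluster]
    for _ in range(m):
        # DS(u') = min distance to the union of input and current output cluster.
        u_best = max(pool, key=lambda u2: min(
            edit_distance(u, u2) for u in input_cluster + output))
        output.append(u_best)
        pool.remove(u_best)
    return output

inp = ["book a flight to boston", "book a flight to denver"]
cand = ["i want to fly to boston", "find flights to denver please",
        "book a flight to boston"]
print(build_output_cluster(cand, inp, m=2))
```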
Diversity Rank As mentioned in the decoder section, we adopt the diversity rank to encourage diversity and distinguish sentences in the output cluster. Accordingly, we incorporate the diversity rank into the training data of the C2C model by associating each output utterance with a diversity rank token $\#r$ (see the examples in Figure 3). Since the output-cluster utterances are greedily picked by diversity score, we naturally use this greedy picking order as the diversity rank, which models the novelty of the output utterance. When augmenting new data, we generate the new utterances at ranks from 1 to $M$, where $M$ is a preset size of the output cluster.
Cross Expansion After training the C2C model, we generate unseen new utterances from the constructed input clusters. To prevent the new utterances from overfitting to the original output utterances seen during C2C training, we perform data augmentation with a Cross Expansion mechanism: we partition all the cluster-to-cluster pairs $P$ into training pairs $P_{\mathrm{train}}$ and reserved pairs $P_{\mathrm{seed}}$, train the C2C model only on $P_{\mathrm{train}}$, and generate new utterances from the input clusters of the reserved pairs $P_{\mathrm{seed}}$. To make full use of the existing utterances, we repeat this partition in a crossing manner.
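The crossing partition can be sketched as below, with train_fn and generate_fn as hypothetical placeholders for C2C training and generation; the fold-slicing scheme is one reasonable reading of the mechanism described above, not necessarily the authors' exact procedure.

```python
# Cross Expansion sketch: every cluster pair serves once as a generation seed
# while the model is trained on the remaining folds.
def cross_expansion(pairs, k, train_fn, generate_fn):
    augmented = []
    folds = [pairs[i::k] for i in range(k)]
    for i in range(k):
        p_seed = folds[i]
        p_train = [p for j, f in enumerate(folds) if j != i for p in f]
        model = train_fn(p_train)                 # train C2C on P_train
        for input_cluster, _ in p_seed:           # generate from P_seed
            augmented.extend(generate_fn(model, input_cluster))
    return augmented

pairs = [(["u%d" % i], ["v%d" % i]) for i in range(6)]
aug = cross_expansion(pairs, k=3,
                      train_fn=lambda data: None,
                      generate_fn=lambda model, c: [u + " (new)" for u in c])
print(len(aug))  # 6
```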
Experiment
We evaluate the proposed data augmentation method on two slot-filling datasets.

Data We conduct experiments on the ATIS and Snips datasets. ATIS (Hemphill, Godfrey, and Doddington 1990) is extensively used for slot filling and provides a well-founded comparison for data augmentation methods. It contains 4,978 training utterances and 893 testing utterances. To simulate data-insufficient situations, we follow Chen et al. (2016a), Hou et al. (2018), and Shin, Yoo, and Lee (2019) and evaluate our model on two small proportions of the training data: a small proportion (1/40 of the original training set, 129 instances) and a medium proportion (1/10 of the original training set, 515 instances). We use a development set of 500 instances. The Snips dataset (Coucke et al. 2018) is collected from the Snips personal voice assistant; it contains 13,084 training utterances and 700 testing utterances, and we use another 700 utterances as the development set. We also split the Snips training set into a small proportion (1/100 of the original training set, 130 instances) and a medium proportion (1/20 of the original training set, 654 instances).
Evaluation Following previous work (Hou et al. 2018), we compute the F1-score as the evaluation metric with the conlleval script.

Implementation We built our Cluster2Cluster model on the transformer implementation of Wolf et al. (2019). For pre-trained parameters, we used GPT-2, which has 12 layers, 110M parameters, and a hidden-state dimension of 768. We used the AdamW optimizer (Loshchilov and Hutter 2019) with an initial learning rate of 6.25e-5 or 5e-5 for training. We varied λ in {0.1, 0.02, 0.01, 0.002, 0.001} and set γ to 1.0.
Following previous work (Hou et al. 2018), we conduct experiments with a Bi-LSTM slot-filling model and train it both with the original training data and with data augmented by the different data augmentation methods, using the same Bi-LSTM implementation as previous work. The dimensions of the word embeddings and hidden states were set to 300 and 128, respectively. We used GloVe (Pennington, Socher, and Manning 2014) to initialize the word embeddings, varied the training batch size in {16, 128}, set the dropout rate to 0.5, and trained the model with Adam, as suggested by Kingma and Ba (2015).
For all models, the best hyperparameter settings are determined on the development set, and we report the average of 5 differently-seeded runs for each result.

Main Results for Data Augmentation

Table 1 shows the evaluation results of the data augmentation methods on the two slot-filling datasets, ATIS and Snips. To simulate data-insufficient situations, we compare the proposed method with previous data augmentation methods at different proportions, following previous work (Chen et al. 2016a; Hou et al. 2018; Shin, Yoo, and Lee 2019). Baseline results are obtained with a Bi-LSTM slot-filling model trained on the original training data, and the results of each data augmentation method are obtained with Bi-LSTM models that have the same architecture as the baseline but are trained with both the original data and the generated data.
On the ATIS dataset, our model significantly outperforms the baseline model by 5.10 and 7.99 F1-scores on the medium and small proportions, respectively, with similar improvements on the Snips dataset. These improvements show the effectiveness of our augmentation method in data-insufficient scenarios. When tested in data-sufficient scenarios on the full proportions, our model also brings improvements over the baseline models, though the improvements are narrower than those in the data-scarcity settings. We attribute this to the fact that full ATIS and Snips are large enough for slot filling, which limits the effect of additional synthetic data. When we augment new data without generation pre-training, performance drops but still achieves significant improvements in most settings, which demonstrates the effectiveness of pre-training and of the C2C structure, respectively. We discuss pre-training in detail later.

We compare our method to two kinds of popular data augmentation methods for slot filling: rephrasing-based and sampling-based methods. Similar to ours, rephrasing-based data augmentation methods reconstruct existing data into alternative expressions. Among this kind of method, NoiseSeq2Seq (Kurata, Xiang, and Zhou 2016) and Rel-Seq2Seq (Hou et al. 2018) learn seq2seq models to reconstruct the existing utterances; to generate unseen expressions, NoiseSeq2Seq introduces noise into decoding, and Rel-Seq2Seq considers the relation between expression alternatives. Slot Expansion generates new data by randomly replacing the slot values of existing utterances. These methods augment each new utterance independently and thus often generate duplicated expressions that do little to improve slot-filling training. Our C2C model mitigates this by jointly encoding and decoding multiple utterances and considering the extensive relations between instances. These advantages result in higher diversity and help to achieve better performance.
For the second type of data augmentation, we compare with the sampling-based data augmentation method C-VAE (Shin, Yoo, and Lee 2019). C-VAE leverages a conditional VAE model to sample new utterances and generate the corresponding annotations at the same time. It also faces the diversity problem, since it samples each new instance independently. Our method outperforms this strong baseline in all six slot-filling settings; the improvements come from the better diversity and fluency of the proposed Cluster2Cluster generation. Notably, we gain significant improvements of 9.63 and 3.35 F1-scores on Snips-small and ATIS-small, respectively, which shows that our method is more effective in data-scarcity situations.
Analysis
Ablation Test We perform an ablation study to evaluate the importance of each component of the C2C framework; Table 2 shows the results on Snips. For the model without cluster-wise generation, we directly fine-tune GPT-2 to generate new data in a seq-to-seq manner; the drop in F1-score demonstrates the superiority of cluster-wise generation. Removing either Diverse-Oriented Regularization or Duplication-Aware Attention from the model also leads to performance drops, which shows that both mechanisms help to improve slot filling by encouraging diversity.
Effects of Generation Pre-training
We analyze the impact of initializing the C2C model with a pre-trained language model. We randomly initialize the C2C model and vary the model size to avoid overfitting caused by large model sizes. As shown in Table 3, pre-training helps to improve the effect of data augmentation in all settings, which we attribute to the fact that pre-training improves generation fluency. However, as revealed in both Table 3 and Table 1, the drops without pre-training are limited compared to the overall improvements, which shows the inherent effectiveness of the C2C model.
Effects over Deep Pre-trained Embeddings
For the data scarcity problem, deep pre-trained embeddings such as BERT (Devlin et al. 2019) have also been demonstrated to be an effective solution (Wang et al. 2020). To see whether data augmentation is still effective when using deep pre-trained embeddings, we conduct DA experiments on a BERT-based slot-filling model. As shown in Table 4, although BERT greatly improves slot-filling performance, our model still achieves improvements on the medium- and small-proportion data, which shows the effectiveness of our DA method for data scarcity problems. Our augmentation method slightly lags the BERT-only model on the full proportion; we attribute this to the fact that the full data is large enough for slot filling and that BERT can be misled by noise within the generated data.
Evaluation for Generation Diversity
Increasing the diversity of generation is one of the essential goals of data augmentation methods. Following Shin, Yoo, and Lee (2019), we evaluate the diversity of the generated data from two aspects: Inter, the ratio of generated utterances that did not appear in the original training set, and Intra, the ratio of unique utterances among all generated new data. Such metrics measure only whole-sentence-level diversity and fail to measure expression diversity at the token level. To remedy this, we introduce a token-level diversity metric, Minimum Edit Distance (MED). For each generated utterance $u'$, we calculate its MED to a set of utterances $C$ as $\mathrm{MED}(u', C) = \min_{u \in C} \mathrm{EditDistance}(u, u')$. MED measures the novelty of a sentence compared to a set of existing sentences at the token level. We report the average MED of each generated utterance to the original training set (Inter) and to the other generated utterances (Intra).

Table 5 shows the evaluation of generation diversity on ATIS-Full. For Inter diversity, our method significantly outperforms all previous methods on both the Ratio and average MED metrics. Notably, we achieve the best diversity even when evaluating the generated delexicalized utterances, which shows the strong ability of the C2C model to generate unseen expressions. This is mainly because the cluster-wise encoding mechanism allows the model to be aware of more existing expressions during generation.
For Intra diversity, our method also achieves the best performance among all previous works. These improvements show that considering the relations between generated utterances can significantly reduce duplication.
Diversity Analysis To understand how the proposed method enhances expression diversity, we investigate the diversity distribution of the generated delexicalized utterances on ATIS-Full, measuring diversity with Inter MED. As shown in Figure 4, Seq2Seq generation yields more existing expressions, and its MED scores are mostly distributed in low-value areas. Compared to Seq2Seq, the Cluster2Cluster model generally has higher MED scores, which demonstrates the intrinsic advantage of cluster-wise generation in producing new expressions.
When the Cluster2Cluster model is trained with Diverse-Oriented Regularization and Duplication-Aware Attention, there are far fewer existing expressions within the generated utterances, and we can see a continuous drift of the distribution towards higher diversity. This shows that the proposed mechanisms help to generate more diverse utterances.
We also conduct case studies to see how the C2C model generates unseen expressions (see the Appendix).
Related Work
Data augmentation (DA) addresses data scarcity problems by enlarging the size of the training data (Fader, Zettlemoyer, and Li et al. 2019). Previous DA works propose back-translation methods (Sennrich, Haddow, and Birch 2016; Wang et al. 2019) and paraphrasing methods (Zhang, Zhao, and LeCun 2015b; Iyyer et al. 2018; Hu et al. 2019; Gao et al. 2020) to generate semantically similar sentences. However, these DA methods are not applicable to the sequence labeling problem of slot filling, because slot filling requires token-level annotations of the semantic frame, while these methods can only provide sentence-level labels. Spoken language understanding, including the slot filling and intent detection tasks, has drawn much research attention recently (Yao et al. 2013, 2014; Mesnil et al. 2013, 2015; Chen et al. 2016a,b; Goo et al. 2018; Haihong et al. 2019; Liu et al. 2019). In this paper, we focus only on the slot filling task. For data augmentation of slot filling, previous works focus on generation-based methods: Kurata, Xiang, and Zhou (2016), Hou et al. (2018), and Peng et al. (2020) augment the training data with a sequence-to-sequence model; Shin, Yoo, and Lee (2019) and Yoo, Shin, and Lee (2019) introduce a Variational Auto-Encoder (Kingma and Welling 2014) to jointly generate new utterances and predict the labels; and Louvan and Magnini (2020) introduce simple rules to generate new utterances. Different from our C2C framework, these methods augment each instance independently and often unconsciously generate duplicated expressions.
Conclusion
In this paper, we study the data augmentation problem for slot filling and propose a novel data augmentation framework, C2C-GenDA, which generates new instances from existing training data in a cluster-to-cluster manner. C2C-GenDA improves generation diversity by considering the relations between generated utterances and capturing more existing expressions. To further encourage diversity, we propose the Duplication-Aware Attention and Diverse-Oriented Regularization mechanisms, and we introduce a Dispersed Cluster Pairing algorithm to construct cluster-to-cluster paraphrasing pairs for C2C-GenDA training. Experiments show that the proposed framework can improve slot filling by generating diverse new training data, outperforming existing data augmentation systems for slot filling.
"year": 2020,
"sha1": "c326c1d6f154bbc9822f900ddcf42f482ec9c611",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c326c1d6f154bbc9822f900ddcf42f482ec9c611",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Evaluation of Chickpea (Cicer arietinum L.) Germplasm for Yield and Yield Attributing Traits in Eastern Plain Zone of Uttar Pradesh
Introduction
Chickpea, a member of the Fabaceae, is a self-pollinated true diploid (2n = 2x = 16) with a genome size of 738 Mbp. It is an ancient cool-season food legume crop cultivated by man and has been found in Middle Eastern archaeological sites dated 7500-6800 BC. Its cultivation is mainly concentrated in semiarid environments. It is grown in more than 50 countries on an area of 13.2 m ha, producing approximately 11.62 m tonnes annually. India ranks first in the world's production and area, contributing around 70.7% of the world's total production. Chickpea is one of the most important food legumes in sustainable agriculture systems because of its low production cost, wide adaptation, ability to fix atmospheric nitrogen, fit in various crop rotations, and prolific tap root system. It can fix atmospheric nitrogen up to 140 kg/ha through its symbiotic association with Rhizobium, meeting 80% of its requirement, and is a rich source of quality protein.

The present research, comprising 40 genotypes of chickpea (Cicer arietinum L.), was carried out at the Department of Genetics and Plant Breeding, Sam Higginbottom University of Agriculture, Technology and Sciences, Prayagraj, during Rabi 2018-19 in a Randomized Block Design with three replications, with the aim of determining genetic variability, correlations, and the direct and indirect relationships between yield and its component characters. Significant variability existed for all characters. Based on mean performance, high seed yield was found for GPK-1058, followed by BKG-21164, BKG-26212, and IPC-06127. High heritability (>70%) coupled with high genetic advance (>20%) was observed for 100-seed weight. Seed yield per plant exhibited positive and highly significant correlations with biological yield per plant, harvest index, pods per plant, secondary branches per plant, and primary branches per plant at both the genotypic and phenotypic levels. Path analysis at both levels identified biological yield per plant, followed by harvest index, 100-seed weight, secondary branches, and seeds per pod, as important direct components of seed yield per plant. Thus, due consideration should be given to these characters during selection.
The worldwide area of chickpea is 13.9 million hectares, with a production of 13.7 million tonnes. In India, the area under chickpea is 9.93 million ha, production is 9.53 million tonnes, and productivity is 960 kg/ha. In Uttar Pradesh, the area is 5.77 lakh ha, production is 4.75 lakh tonnes, and productivity is 824 kg/ha. The production of chickpea has been in decline due to the non-availability of early-maturing, high-yielding, input-responsive varieties that are resistant or tolerant to various biotic and abiotic stresses and suitable for the prevailing crop rotations. Therefore, there is an urgent need to evolve high-yielding varieties with high protein content and resistance to major biotic and abiotic stresses, suitable for different agro-climatic conditions and cropping systems.
Given the present scenario, in which the population of our country is increasing at an alarming pace, there is an urgent need to release higher-yielding varieties that can offset the declining production of pulses and serve as a source of nutritional security for the people of our country.
Materials and Methods
A germplasm collection of 40 varieties/strains of chickpea (Cicer arietinum L.), comprising indigenous as well as exotic genotypes, constituted the experimental material for this study. These genotypes, exhibiting a wide spectrum of variability for various agronomic and morphological characters, were obtained from the pulse section, Department of Genetics and Plant Breeding, Sam Higginbottom Institute of Agriculture, Technology and Sciences, Prayagraj, 211007. The experiment was carried out in Rabi 2018-19 in a Randomized Block Design with three replications. The net area was around 120 m² with a plot size of 1 × 1 m², a row-to-row spacing of 30 cm, and a plant-to-plant distance of 10 cm. The soil in this region is sandy loam and alkaline in nature.
The technique of random sampling was adopted for the observation of 12 quantitative characters, namely days to 50 percent flowering, plant height, number of primary branches per plant, number of secondary branches per plant, number of pods per plant, pod length, number of seeds per pod, days to maturity, biological yield per plant, 100-seed weight, harvest index, and seed yield per plant. Recommended practices were applied to raise a healthy crop, and metric data on the 12 quantitative characters were recorded at different stages of growth.
The correlation coefficient estimates the degree of association of the different component characters of yield among themselves and with yield. Correlation studies between various yield attributes and yield provide a basis for further breeding programmes.
Path coefficient analysis measures the direct effect of one variable upon another and permits the separation of the correlation coefficient into components of direct and indirect effects. Information on variability and correlations among the economic characters of a crop is of great value to plant breeders: it not only helps in understanding the desirable and undesirable relationships among economic characters but also helps in assessing the scope for the simultaneous improvement of two or more attributes.
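As a worked illustration of this decomposition, the sketch below solves the standard path-analysis system R b = r (the correlation matrix among predictor traits times the direct effects equals their correlations with yield); the three-trait numbers are made up for demonstration and are not computed from the data of this study.

```python
# Direct and indirect path effects from a correlation matrix (illustrative).
import numpy as np

# Correlations among predictor traits (e.g., biological yield, harvest
# index, 100-seed weight) -- made-up values.
R = np.array([[1.00, 0.35, 0.20],
              [0.35, 1.00, 0.15],
              [0.20, 0.15, 1.00]])
# Correlations of each predictor with seed yield per plant (made-up).
r = np.array([0.85, 0.70, 0.40])

b = np.linalg.solve(R, r)          # direct effects (path coefficients)
# Indirect effect of trait i via trait j is r_ij * b_j (diagonal zeroed),
# so each r_iy decomposes as b_i plus the row sum of indirect effects.
indirect = R * b - np.diag(b)
print("direct effects:", b.round(3))
print("indirect effects:\n", indirect.round(3))
```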
Genetic variability
The genotypic and phenotypic variances of days to 50% flowering were 7.028 and 7.82, respectively; the corresponding values for plant height and the remaining characters are given in Tables 1 and 2. These findings are in agreement with the earlier reports of (2001) and Parashuram (2003). Genetic advance (as a percent of mean) varied from 7.53% for days to 50% flowering to 49.46% for 100-seed weight.
In the present investigation, genetic advance (as percent of mean) was highest for 100-seed weight (49.46%), followed by seed yield per plant (44.24%), while number of secondary branches per plant (23.41%) and number of seeds per pod (21.18%) showed moderate genetic advance.
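For reference, the sketch below computes these statistics from variance components using the standard formulas (h² = σ²g/σ²p, GA = k·h²·σp with k = 2.06 at 5% selection intensity); the variances are the days-to-50%-flowering values quoted above, while the trait mean is an assumed placeholder, so the printed numbers are only illustrative.

```python
# Standard variability statistics from variance components (illustrative).
import math

var_g, var_p, mean, k = 7.028, 7.82, 78.0, 2.06  # mean is a placeholder

h2 = var_g / var_p                      # broad-sense heritability
ga = k * h2 * math.sqrt(var_p)          # genetic advance
gam = 100 * ga / mean                   # genetic advance as % of mean
gcv = 100 * math.sqrt(var_g) / mean     # genotypic coefficient of variation
pcv = 100 * math.sqrt(var_p) / mean     # phenotypic coefficient of variation
print(f"h2={h2:.2f}  GA={ga:.2f}  GAM={gam:.2f}%  GCV={gcv:.2f}  PCV={pcv:.2f}")
```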
Correlation coefficient analysis
Tables 3 and 4 present the correlation results. At the genotypic level, seed yield per plant showed highly significant positive correlations with biological yield per plant, harvest index, primary branches per plant, number of pods per plant, and 100-seed weight, while plant height had a positive but non-significant relationship with seed yield per plant. Similar findings were reported by Yadav et al. (1990), Arora et al. (2004), Singh et al. (2008), and Sial et al. (2003). At the phenotypic level, seed yield per plant was likewise highly significantly and positively correlated with biological yield per plant, harvest index, primary branches per plant, number of pods per plant, and 100-seed weight (Dehal et al. 2016; Tiwari et al. 2016; Saroj et al. 2013; Shafique et al. 2016).
Path coefficient analysis
Tables 5 and 6 show that the highest direct positive effect on seed yield was exhibited by biological yield per plant, followed by harvest index, while 100-seed weight and number of seeds per pod exhibited moderate direct positive effects. These characters thus turned out to be the major components of seed yield (Chopdar et al. 2017; Dehal et al. 2016; Shafique et al. 2016; Tiwari et al. 2016).
From the path analysis, it was apparent that the maximum direct effects were exerted by biological yield per plant and harvest index. Both exhibited positive and significant correlations with seed yield; therefore, they may be considered the most important yield-contributing characters. Hence, due emphasis should be placed on these characters when breeding for higher yield in chickpea.
It is concluded from the present study that all 40 genotypes of chickpea showed significant differences among themselves. The GCV and PCV values indicated that the relative amounts of genotypic and phenotypic variation were high for seed yield per plant, meaning that the major portion of the total variation was accounted for by genetic causes; hence, selection based on phenotypic performance would be rewarding for the improvement of these traits. Moderate heritability values were noticed for most of the characters, except days to 50% flowering, days to maturity, and 100-seed weight, which showed high heritability. A low magnitude of genetic advance (<10%) expressed as a percent of mean was observed for days to 50% flowering and pod length. Traits such as 100-seed weight, seed yield per plant, secondary branches per plant, and number of seeds per pod exhibited high genetic advance (>20%), while the remaining traits showed moderate genetic advance.
The mean performance results showed that genotypes GPK-1058 and BKG-21264 performed best for seed yield. Correlation and path analysis revealed that biological yield and harvest index have positive correlations with, and direct effects on, seed yield. Genotypes carrying these characters can be used for the further improvement and development of chickpea.
"year": 2020,
"sha1": "4f4cb75fa731380595eb01c0c8d25cb92f640d45",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/9-10-2020/Shivashish%20Verma,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "3b2486da59c527ac45453b8d1885cb19d01d11e4",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
Study of the Effect of Breast Tissue Density on Detection of Masses in Mammograms
One of the parameters usually stored for mammograms is the BI-RADS density, which gives an idea of the breast tissue composition. In this work, we study the effect of BI-RADS density on our ongoing project to develop an image-based CAD system to detect masses in mammograms. This system consists of two stages. First, a blind feature extraction is performed on regions of interest (ROIs) using Independent Component Analysis (ICA). In the second stage, those features form the input vectors to a classifier, either a neural network or an SVM. To train and test our system, the Digital Database for Screening Mammography (DDSM) was used. The results obtained show that the maximum variation in the performance of our system when considering only prototypes obtained from mammograms with a given density value (for both training and test) is about 7%, with the best values for density equal to 1 and the worst for density equal to 4, for both classifiers. Finally, with the overall results (i.e., using prototypes from mammograms with all possible density values), we obtained a performance that is only 2% lower than the maximum, again for both classifiers.
Introduction
Several factors can affect the composition of breast tissue. The increase or decrease of the breast gland is part of the normal physiological changes that occur in the breast and usually occurs in both breasts simultaneously. These changes may be caused by hormonal fluctuations (natural or synthetic) including menarche, pregnancy, breastfeeding, or menopause. The increase in glandularity also depends on the woman's genetic predisposition. In young women, normally, the breast is composed mostly of glandular tissue and very little fat. And although this composition varies depending on age, it is possible to find older women with extremely dense breasts, that is, consisting mostly of glandular tissue and not fat. Weight gain or loss also increases or decreases the fat content of the breast and therefore also affects the breast glandularity [1].
The composition of breast tissue is defined by the BI-RADS parameter called "density" [2], which can have four possible values (1-4) explained in Table 1.
Table 1: BI-RADS density categories.
1 — Breast tissue mainly fatty
2 — Scattered fibroglandular densities
3 — Breast tissue heterogeneously dense
4 — Breast tissue extremely dense

The degree of difficulty of analyzing a mammogram depends on the nature of the breast tissue, as can be seen in Figure 1. In these two mammograms, the different nature of the predominant tissue in each one is clearly distinguishable. It is very easy to locate the lesion in the figure on the left, which corresponds to a 71-year-old woman and has a density equal to 1, whereas it is much more difficult to analyze and locate the lesion in the mammogram on the right, corresponding to a 41-year-old woman with a density equal to 4. This example suggests that density may be a factor limiting the sensitivity that can be reached when analyzing a mammogram (whether by radiologists or by CAD systems). Several analyses show that the majority of cancer cases missed in screening mammograms correspond to dense mammary glands (density equal to 3 or 4) [3][4][5]. We can also find works in the literature, such as [6], in which the impact of BI-RADS density on CAD systems is studied, in particular on the SecondLook CAD system (version 4.0) developed by the company iCAD. Finally, there are other studies, such as [7], that incorporate the information provided by this parameter in the development of their mass-detection algorithms.
Methods
In this section, we present the techniques used in this study for the generation and selection of prototypes, for feature extraction tasks, and for classification. We are going to review these methods in the following subsections.
Data and Prototype Creation.
In the literature, one can find various proposals focused on the detection and segmentation of masses on mammograms, such as those reviewed in [8], but it is usually difficult to compare the results of different studies addressing both the detection and diagnosis of masses. The main problem is the use of proprietary databases of small size, or, if using a public database, the use of selected, unspecified cases. Horsch [9] analyzes recent studies in mammography CAD and concludes that, in view of the observed variability in the datasets used, currently the only mammography database that is public and sufficiently large to allow a meaningful and reproducible evaluation of a CAD system is the Digital Database for Screening Mammography (DDSM) [10].
The DDSM is a resource available to the mammographic image analysis research community and contains a total of 2,620 cases. Each case provides four screening views: mediolateral oblique (MLO) and craniocaudal (CC) projections of left and right breasts. Therefore, the database has a total of 10,480 images. Cases are categorized in four major groups: normal, cancer, benign, and benign without callback. All cases in the DDSM were reported by experienced radiologists providing various BI-RADS parameters (density, assessment, and subtlety), BI-RADS abnormality description, and proven pathology. For each abnormality identified (within which masses are included), the radiologists draw free form digital curves defining ground truth regions. We consider these regions to define squared "regions of interest" (ROIs) that will be used as prototypes of mass. Apart from the previous data, each DDSM case includes additional information such as patient age, date of study, and digitization or digitizer's brand, though we have not used it in this work.
The DDSM database contains 2,582 images with an abnormality identified as a mass, whether benign or malignant. Some of these were located on the border of the mammograms and could not be used (see the following paragraph on ROIs). Consequently, only 2,324 prototypes could be considered, namely those that could be taken centered in a square without stretching. Some mass prototype examples are shown in Figure 2.
Regions of Interest. Ground truth regions for abnormalities are defined in the database by a chain code which generates a freehand closed curve. We use the chain code to determine the smallest square region of the mammogram that includes the manually defined region. Therefore, if the mass is located near one edge of the mammogram, this procedure may not be able to obtain a squared region from the image, and the mass is discarded as a valid prototype. Figure 3 shows an example of the ground truth region coded by the radiologist (solid line) and the area to be used as the ROI (purple box). On the other hand, the prototypes of normal tissue were selected randomly from the normal mammograms. These normal-tissue prototypes were originally captured with sizes ranging randomly from the smallest to the largest of the mass sizes found in the DDSM.
The generated regions have different sizes, but the selected image feature extractor needs to operate on regions of the same size, so we need to reduce the selected regions to common sizes. The reduction of ROIs to a common size has been demonstrated to preserve mass malignancy information [11][12][13]. To determine the optimum region size, we considered two sizes for the experiments: 32 × 32 and 64 × 64 pixels. The resizing was carried out using the bilinear interpolation algorithm provided by the OpenCV library [14].
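A minimal sketch of this cropping-and-resizing step follows, assuming the ground-truth boundary has already been decoded from the DDSM chain code into an array of pixel coordinates; the function and variable names are ours.

```python
# Extract the smallest centered square around a ground-truth boundary and
# resize it with bilinear interpolation (illustrative sketch).
import cv2
import numpy as np

def extract_roi(mammogram, boundary, out_size=64):
    ys, xs = boundary[:, 0], boundary[:, 1]
    cy, cx = (ys.min() + ys.max()) // 2, (xs.min() + xs.max()) // 2
    half = max(ys.max() - ys.min(), xs.max() - xs.min()) // 2
    y0, y1, x0, x1 = cy - half, cy + half, cx - half, cx + half
    # Masses too close to the border cannot yield a full square and are dropped.
    if y0 < 0 or x0 < 0 or y1 > mammogram.shape[0] or x1 > mammogram.shape[1]:
        return None
    roi = mammogram[y0:y1, x0:x1]
    return cv2.resize(roi, (out_size, out_size), interpolation=cv2.INTER_LINEAR)

image = np.random.randint(0, 4096, (1000, 800), dtype=np.uint16)
contour = np.array([[400, 300], [450, 380], [500, 320], [430, 260]])
print(extract_roi(image, contour).shape)  # (64, 64)
```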
Feature Extraction.
As we commented above, we used Independent Component Analysis (ICA) [15] as the blind feature extraction method. The objective of the method is to obtain an appropriate basis of functions, derived from prototype ROIs (including masses and normal tissue), so that we can represent the texture and characteristics of each ROI from the breast images as an expansion in this basis (Figure 4), where the coefficients of this expansion ($s_i$) are the input vectors to the classifiers (i.e., the "features" describing the ROIs).
The added value of our approach, compared to other methods that use generic functions, is that our basis should be more specific to our problem, since it is obtained using a selection of the images to be classified.
ICA is a statistical method for analyzing multivariate data, typically given as a sample database. In this model, it is assumed that the data are linear combinations of some unknown latent variables, and that the system by which they are combined is also unknown. It is assumed that the latent variables are non-Gaussian and mutually independent; they are called the independent components of the observed data. These independent components, also called sources or factors, can be determined by ICA. ICA is related to Principal Component Analysis (PCA) [16] since, before applying the ICA method itself, it is advisable to perform a dimension reduction or feature extraction of the original input vectors, which can be done using PCA. The data analyzed by ICA can come from many different fields, including digital images. In many cases, the data come from a set of parallel signals or time series, in which case the term "Blind Source Separation" (BSS) is used to define these problems.
In that sense, if we suppose that we have $n$ signals, the objective is to express the signals registered by the sensors ($x_i$) as a linear combination of $n$ sources ($s_j$), in principle unknown, as follows:

$$x_i = \sum_{j=1}^{n} a_{ij}\, s_j, \qquad i = 1, \dots, n, \qquad \text{or, in matrix form,} \quad x = A s.$$

The goal of ICA is to estimate the mixing matrix $A = (a_{ij})$ in addition to the sources $s_j$. One can use this technique for feature extraction, since the estimated sources can be regarded as the characteristics representing the objects (patterns) [15].
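The feature-extraction stage can be sketched with scikit-learn's FastICA, using the log cosh contrast the authors mention later; the random matrix below is only a stand-in for the flattened 64 × 64 ROIs, so just the shapes are meaningful.

```python
# Learn an ICA basis from ROI patches and extract expansion coefficients.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
patches = rng.uniform(size=(500, 64 * 64))   # stand-in for 500 flattened ROIs

ica = FastICA(n_components=15, fun="logcosh", random_state=0)
ica.fit(patches)

# Each ROI is now described by the 15 expansion coefficients s_i in the
# learned basis; these vectors feed the NN / SVM classifiers.
features = ica.transform(patches)
print(features.shape)  # (500, 15)
```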
Classification Algorithm. In our system, the classification algorithm has the task of learning from data. An excessively complex model will usually lead to poorly generalizable results, so it is advisable to use at least two independent sets of patterns in the learning process: one for training and another for testing. In the present work, we use three independent sets of patterns: one for training, one to avoid overtraining (validation set), and another for testing [17]. For the classification, we have used Multilayer Perceptron (MLP) [18] and SVM [19] classifiers. We chose these two techniques because they are widely used in the classification and detection of breast cancer, as can be seen in the works listed in several reviews such as [9] and in [20]. Also, for a more rigorous study, as shown in [21], other techniques and other quality metrics widely used in classification and regression problems could have been tested, although they may not be as common in works on the detection and classification of breast cancer.
Neural Networks.
We implemented an MLP with a single hidden layer and a variant of the back-propagation algorithm termed Resilient Back-Propagation (Rprop) [22] to adjust the weights. The latter is a local adaptive learning scheme performing supervised batch learning in a multilayer perceptron, and it converges faster than the standard BP algorithm. The basic principle of Rprop is to eliminate the negative effect of the size of the partial derivative on the update process; as a consequence, only the sign of the derivative is considered in indicating the direction of the weight update [22]. The function library of the Stuttgart Neural Network Simulator environment [23] was used to generate and train the NN classifiers. To avoid local minima during the training process, each setting was repeated four times, changing the initial weights of the net at random. Furthermore, the number of neurons in the hidden layer was allowed to vary between 50 and 650 in steps of 50.
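A minimal sketch of such a classifier follows, assuming PyTorch's torch.optim.Rprop as a stand-in for the SNNS Rprop implementation used by the authors; the data, layer sizes, and epoch count are illustrative.

```python
# Single-hidden-layer MLP trained with resilient backpropagation (sketch).
import torch
import torch.nn as nn

x = torch.randn(256, 10)                    # 10 ICA features per ROI
y = torch.randint(0, 2, (256, 1)).float()   # 1 = mass, 0 = normal tissue

model = nn.Sequential(nn.Linear(10, 50), nn.Sigmoid(),
                      nn.Linear(50, 1), nn.Sigmoid())
opt = torch.optim.Rprop(model.parameters())
loss_fn = nn.BCELoss()

for epoch in range(100):                    # batch learning, as Rprop requires
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```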
Support Vector Machines.
As with the MLP, the goal of using an SVM is to find a model (based on the training prototypes) that is able to predict the class membership of the test subset's prototypes based on the values of their characteristics. Given a labeled training set of the form $(x_i, y_i)$, $i = 1, \dots, l$, where $x_i \in \mathbb{R}^n$ and $y_i \in \{1, -1\}$, the SVM algorithm involves solving the following optimization problem:

$$\min_{w,\, b,\, \xi} \; \frac{1}{2} w^T w + C \sum_{i=1}^{l} \xi_i \quad \text{subject to} \quad y_i \big( w^T \phi(x_i) + b \big) \ge 1 - \xi_i, \quad \xi_i \ge 0.$$
In the model, ( > 0) is a regularization or penalty parameter to control the error, is the final dimension of the projection space, w is the normal to the hyperplane (also known as the weights vector), and is the bias. The parameter is introduced to allow the algorithm a degree of flexibility in fitting the data, and (x , x ) ≡ (x ) (x ) is a kernel function to project the input data onto to a higher dimensional space. We used the LibSVM [24] library with a radial basis function (RBF: ( , ) = exp(− ‖ − ‖ 2 ), > 0) as kernel function. To find the optimal configuration of the parameters in the algorithm, was allowed to vary like 2 −5 < < 2 3 in steps of 0.5 for the exponent, and the penalty parameter between 2 −5 and 2 10 also in steps of 0.5 for the exponent.
Outline of the Process
In this section, we provide an overview of the structure of our system, describing the main steps required to configure the system to discriminate prototypes of masses from prototypes of normal breast tissue.

Figure 4: Decomposition of the image using an ICA basis. (Each ROI is expanded as a weighted sum of the n basis functions obtained with FastICA, with the expansion coefficients s_1, ..., s_n serving as its features.)
System Description.
We provide an overview of our system's structure, describing the main steps required to configure the system to discriminate ROIs corresponding to masses from ROIs corresponding to normal tissue. In addition, we present the experiments devised to determine how the performance of these classifiers is affected by the breast density associated with each mammogram (and, therefore, with each ROI).
The main scheme that summarizes in a more graphical form all phases of this work is represented in Figure 5. In the first stage, the prototypes of masses are obtained as was explained in Section 2.1. Then the FastICA algorithm [25,26] is applied to obtain the ICA basis (the ICA-based feature extractor), with the log cosh function being used to approximate the neg-entropy. These bases are generated with different configurations, different numbers of components, and using prototypes of different sizes. The second stage uses this generated basis to obtain the training sets and to train and test the classifiers. Finally, in the third stage, the test subset, which contains input vectors not used in the optimization of the classifiers, is used to provide performance results of our system.
System Optimization.
To determine the optimal configuration of the system, various ICA bases were generated to extract different numbers of features (from 10 to 65 in steps of 5) from the original patches, operating on patches of the different sizes noted above (32 × 32 and 64 × 64 pixels).
The training process consisted of two stages-first training the NN classifiers, and then the SVM classifiers. The results thus obtained on the test subsets in a 10-fold cross validation scheme are shown in Figure 6. This allowed us to find the optimal configuration of the feature extractor.
The study was done with a total of 5052 prototypes: 1197 of malignant masses, 1133 of benign masses, and 2722 of normal tissue.
We found that the optimal ICA-based feature extractor configuration for an NN classifier was a feature extractor that operated on prototypes of 64 × 64 pixels, extracting 10 components (average success rate 86.33%), and for an SVM classifier was a feature extractor that also operated on prototypes of 64 × 64 pixels, extracting 15 components (average success rate 88.41%). The results to be presented in the following section were obtained using these optimal configurations.
Experiments.
To determine how the density associated with each mammogram (and, therefore, with each ROI) could affect the performance of our system, we carried out five experiments. In each experiment we performed the same tests but with different sets of prototypes: first with all the available prototypes (one experiment), and then with prototypes obtained from mammograms with a given value of density (four experiments).
For each of the experiments, a 30-fold cross validation scheme was used. In this process, 30 partitions of the data set are generated randomly, and, iteratively, one partition is reserved for test, and the remaining 29 are used for training and validation (80% of the prototypes for training and 20% for validation). As a result we have 30 performance values that can be studied statistically.
Finally, to analyze the performance and compare results, ROC curves [27] were generated for each experiment. To this end, the threshold applied to the output neuron of the classifier (used to decide whether the prototype being classified is mass or normal tissue) is swept, and the true-positive and false-positive rates are calculated. As a performance parameter, the area under the curve (AUC) was used. Regarding the prototypes, Table 2 shows the average numbers of "normal breast tissue," "benign mass," and "malignant mass" prototypes for each of the subsets (training, validation, and test), calculated over the 30 "trainings of the classifier" performed in the 30-fold cross-validation scheme. These average values are shown for the overall experiment and for the experiments with a given value of density. The pathology of the prototypes was not taken into account in the selection process, but, as can be seen, this process always yields a balanced distribution of the mean number of prototypes in each subset: on average, about 73% of the malignant, benign, and normal prototypes were included in the training sets, 23% in the validation sets, and 3% in the test sets. Therefore, if we consider only the overall data, there seems to be no clear trend suggesting that the prototypes selected in any of the density ranges have a greater or lesser likelihood of being mass or normal tissue. However, when we analyze particular density values, differences in the number of prototypes of each class are observed that may be significant.
In Figure 7, it can be seen that in some cases the combined number of malignant and benign mass prototypes is quite different from the number of normal-tissue prototypes. For a density value equal to 3, this sum is always significantly lower than the number of normal-tissue prototypes: for example, in the training subset the sum is 475.2 while the number of normal-tissue prototypes is 559.6, a difference of 15%. This difference is much more pronounced for a density value equal to 4, where, for the training subset, the sum of malignant and benign masses is 187.2 and the number of normal-tissue prototypes is 432.9, a difference of 57%. In contrast, for density values equal to 1 and 2, these differences are only 3% and 4%, respectively, in favor of the number of mass prototypes.
Results
As we stated above, our main interest in this paper is to evaluate the dependence of our system on the composition of breast tissue, as described by the BI-RADS density parameter. For this study, we considered all those mass prototypes in the DDSM for which a square shape could be obtained by determining the smallest squared region that includes the complete area marked by the radiologist, always without stretching. As commented before, the distribution of prototypes is shown in Table 2 and in Figure 7. We must point out that the relative number of prototypes of each class is very different depending on the density value; in particular, for a density value of 4, the difference between mass (malignant and benign) prototypes and normal-tissue prototypes is as high as 57%. This is a big handicap for the training of the classifiers, as we explain below.
To determine the influence of the density parameter on the performance of our system, we first applied a 30-fold cross-validation scheme to train and test the system with the whole set of 5,052 prototypes. Next, a ROC analysis was performed on each of the 30 test results, calculating the area under the curve (AUC) as a parameter describing the performance on each test set. Finally, the mean value of the 30 AUCs was determined to give a parameter describing the overall performance of the system with those prototypes. This scheme was later repeated considering sets of prototypes containing only a given value of the density parameter, in order to compare the results. These results are presented in Table 3; the overall results are presented in Figure 8 for both classifiers, and the cases with densities equal to 1 and 4 in Figure 9 for the NN classifier and Figure 10 for the SVM classifier.
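A sketch of this per-fold ROC/AUC computation with scikit-learn follows, using synthetic scores and labels in place of the classifier outputs; only the mechanics of the threshold sweep and the averaging over the 30 folds are meant to be illustrative.

```python
# Threshold-sweep ROC per test fold, averaged over the 30 folds (sketch).
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
fold_aucs = []
for fold in range(30):
    labels = rng.integers(0, 2, size=60)
    # Output-neuron activations: higher on average for masses.
    scores = labels * 0.5 + rng.normal(0.3, 0.25, size=60)
    fpr, tpr, _ = roc_curve(labels, scores)   # sweeps the decision threshold
    fold_aucs.append(auc(fpr, tpr))

print(f"mean AUC over 30 folds: {np.mean(fold_aucs):.3f}")
```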
As we expected, the best results were obtained for a density value equal to 1 (virtually fatty breasts with very little breast tissue, usually corresponding to older women), and the worst results for a density of 4 (very dense breasts with much breast tissue, usually corresponding to younger women). These results are consistent with other studies on the nature of the cancer cases most often missed by radiologists [3][4][5].
Besides, it is important to remark that the distributions of prototypes are very different for the different density values. While for a density of 1 the numbers of mass and normal-tissue prototypes are almost the same (a 3% difference in favor of the mass prototypes), for a density of 4 the difference is very important (57% in favor of the normal-tissue prototypes). This difference in the number of prototypes of each class introduces a statistical bias that could affect the training of the classifiers.
Conclusions
In this work, we have studied the influence of the BI-RADS density parameter assigned to a mammogram on the performance of our system. We conclude that the performance is affected by that parameter, since the AUC of the ROC curves decreases from 0.965 to 0.892 (−7.56%) for the NN classifiers and from 0.964 to 0.897 (−6.95%) for the SVM classifiers when we move from density 1 to density 4. However, taking into account that mammograms with density 4 are more difficult to analyze than those with density 1 (density 4 means very dense breasts with much breast tissue, making masses difficult to find, while density 1 means that very little breast tissue is present), and considering also the difficulties during training due to the different numbers of prototypes of the two classes, we can conclude that our system is rather robust and performs very well even in the worst conditions.
Besides, it is important to note that the AUC for the global set of prototypes is only 2.28% and 2.07% lower (for the NN and SVM classifiers, respectively) than the performance achieved for density 1, which is the most favourable case, so the performance of the system on the overall set is acceptable.
Finally, as the number of samples in the subsets of prototypes with densities equal to 2 and 3 is significantly higher than in the subsets with densities equal to 1 and 4, we conclude that the variation in the performance of our system due to the BI-RADS density is limited to about 7% for both classifiers.
On the other hand, it is worth remarking on the near-identical performance obtained with the two types of classifiers tested.
"year": 2013,
"sha1": "af0ee28605ce3f1e30d12adf232f717dbd544fba",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/cmmm/2013/213794.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2428f621dda65c14dcc7920e8db4c3fc67c15131",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine",
"Mathematics"
]
} |
Numerical and experimental evaluation of ultrasound-assisted convection enhanced delivery to transfer drugs into brain tumors
Central Nervous System (CNS) malignant tumors are a leading cause of death worldwide, with a high mortality rate. While numerous strategies have been proposed to treat CNS tumors, treatment efficacy is still low, mainly due to the existence of the Blood-Brain Barrier (BBB). The BBB is a natural cellular layer between the circulatory system and the brain extracellular fluid, limiting the transfer of drug particles and constraining routine treatment strategies in which drugs are released into the blood. Consequently, direct drug delivery methods have been devised to bypass the BBB; however, the efficiency of these methods is not sufficient to treat deep and large brain tumors. In the study at hand, the effect of focused ultrasound (FUS) waves on enhancing drug delivery to brain tumors, through ultrasound-assisted convection-enhanced delivery (UCED), has been investigated. First, brain-mimicking gels were synthesized to mimic the CNS microenvironment, and a drug solution was injected into them. Second, FUS waves with a resonance frequency of 1.1 MHz were applied to the drug-injected zone. Next, a finite element (FE) model was developed to evaluate the pre-existing equation in the literature for describing drug delivery via acoustic streaming in brain tissue. Experimental results showed that the FUS transducer was able to enhance the drug volume distribution up to 500% relative to convection-enhanced delivery (CED) alone. Numerical analysis showed that the FE model could replicate the experimental penetration depths with a mean difference of less than 21%, and that acoustic streaming plays a significant role in UCED. Therefore, the results of this study could open a new way to develop FE models of the brain to better evaluate UCED and reduce the costs of conducting clinical and animal studies.
A Central Nervous System (CNS) tumor begins when healthy cells within the brain or the spinal cord change and grow out of control, forming a mass. As reported by 1 , approximately 308,102 new cases of brain and other CNS tumors were diagnosed worldwide in the year 2020, with an estimated 251,329 deaths. Various treatment methods have been implemented to cure CNS cancers; among them, there are three major clinical approaches: chemotherapy, radiation therapy, and surgery (Fig. 1). Chemotherapy drugs can be administered orally or intravenously. Of the two, the latter delivers drugs immediately via the bloodstream to the tumor site (Fig. 1a). However, the Blood-Brain Barrier (BBB) is a major obstacle to delivering therapeutics for CNS disease, as it restricts many chemical compounds from penetrating the brain 2 . The BBB possesses several layers composed of endothelial cells that separate the systemic circulation from the brain extracellular fluid 3 . These layers prevent chemical drugs from entering the target site and cause low treatment efficiency 4,5 .
Another treatment strategy is radiation therapy, which uses X-rays or gamma rays to destroy tumor cells (Fig. 1b) 6 . This process takes weeks to shrink the tumor and is usually implemented in combination with chemotherapy. The main problem associated with this method is the danger of damaging the surrounding healthy tissue, which may cause severe problems for patients; moreover, these rays can damage DNA and cause cancer themselves [7][8][9]. Another invasive method is surgery, a process in which the patient's skull is opened to remove the entire tumor or part of it (Fig. 1c). Surgery is not an appropriate treatment method for all types of tumors, and it may lead to brain damage 10 . These limitations motivate efforts to overcome the shortcomings of the mentioned routine treatment strategies and to use engineering methods to enhance the results of treatment 11,12 .
More novel methods concentrate on bypassing the BBB. Convection-enhanced delivery (CED) has been introduced for the treatment of CNS cancer. In this method, a thin cannula is inserted directly into the brain tissue to cross the BBB; the cannula's tip is located at the tumor site, and then a pressure gradient generated with a syringe pump is applied to transfer drugs to the tumor (Fig. 1d). By creating a bulk flow inside the extracellular space, CED can deliver drugs to depths on the order of several centimeters into the brain tissue. However, this method can cause edema due to the concentration of drugs at the tip, and in some cases it requires longer infusions or the use of more than one cannula 13 .
In recent years, the use of ultrasound waves in biomedical engineering applications has drawn scientists' attention. Such waves are compatible with the human body, and various effects can be obtained from them depending on the application and the targeted area of the body. Exploiting these different effects, diverse techniques have used ultrasound waves to improve drug delivery to the brain or to treat brain tumors, including thermal ablation, sonoporation, BBB disruption, and ultrasound-assisted drug delivery 11 . Among these techniques, ultrasound-assisted convection-enhanced delivery (UCED) has shown promising results in preclinical studies to improve drug delivery for treating brain tumors [14][15][16][17][18][19]. UCED is a method that overcomes the shortcomings of CED while keeping the advantage of bypassing the BBB. This treatment method was developed to improve the drug penetration depth, avoid edema, and improve the drug distribution over the tumor tissue. In this method, the cannula's tip is first located at the tumor area, and a syringe pump injects the drug solution (refer to Subsect. "Drug injection system" for details) gently into that area. After the drug is loaded, an ultrasound transducer generates waves to help the drug solution penetrate deep into the tumor tissue.
Lewis, G. et al. used a focused ultrasound (FUS) transducer with a resonance frequency of 1.58 MHz to increase the penetration of Evans blue dye (EBD) into equine brain, avian muscle, and agarose brain-mimicking gels. They reported that FUS waves could enhance drug penetration by 590% relative to diffusion; moreover, applying FUS in combination with CED could enhance drug penetration by 880% relative to diffusion 15 . Liu, Y. et al. investigated the effect of a disk ultrasound transducer on drug penetration into porcine brain tissue in vitro and into cynomolgus monkey in vivo. Different resonance frequencies were studied (85 kHz, 175 kHz, and 1 MHz), and they reported that an 85-kHz transducer could enhance the tissue's permeability 24-fold at an energy density of 1200 J/cm² 17 .
In another study, Lewis, G.K. et al. investigated the effect of UCED on delivering drug solutions into rodent brain in an in-vivo setting. They fabricated a portable ultrasound system called the transducer cannula assembly (TCA). The TCA was made up of an ultrasound transducer with a resonance frequency of 1.34 MHz and three 2400-mAh batteries. Four different experiments were conducted to investigate the effect of CED, CED with microbubbles, UCED and UCED with microbubbles. The results indicated that UCED could enhance the drug volume distribution up to 3.25 times relative to CED, and this factor was 1.7 when UCED was used with microbubbles 16 . Mano, Y., et al. designed and fabricated a new device called the ultrasound-facilitated delivery (UFD) system. A vast range of frequencies and voltages were studied to investigate the ability of UFD in delivering EBD to rodent brain. They reported that UFD was able to enhance the drug volume distribution up to 2 times relative to CED 14 .

Figure 1. Different treatment strategies for brain tumors. In chemotherapy, the drug solution is injected into the blood circulatory system (a). In radiation therapy, X-rays or gamma-rays are directed at the tumor cells to destroy them (b). Using surgical approaches, an expert opens the skull to remove the tumor (c). As for conventional drug delivery methods, the drug solution is injected directly to the tumor site via a thin catheter in CED (d).

In later studies on the concept of drug penetration into soft tissue (especially in the brain), Raghavan conducted a theoretical study of the effect of acoustic streaming on drug delivery in soft tissue 20 . To the best of our knowledge, this was the first theoretical study to propose an equation for drug penetration into the porous microenvironment of soft tissue, and it claimed that the most important effect of acoustic waves on drug penetration is acoustic streaming. In order to derive the equations governing acoustic streaming in a porous medium, acoustic streaming in a free fluid was evaluated first, and then the acoustic streaming equations in the porous medium were derived. To investigate the ability of these equations to predict experimental results, Raghavan compared exposure times from past studies with the results of analytical solutions of his equations. He reported that the results were promising and that the equation was able to predict the experimental results. However, the acoustic sources were selected in a way that made an analytical solution easy to obtain.
Since the results of UCED are promising and most of the efforts in UCED have been preclinical studies, developing a model to simulate this process would be beneficial. This is particularly important given that experimental efforts are costly, and some crucial parameters may differ when an animal model is used instead of a human model. This point is further pronounced considering that above 90% of the results obtained from animal experiments fail to predict the outcome in humans 21 , and the figures are worse, up to 99.6% failure, in brain studies 22 . Moreover, because of the inherent differences between the human body and animal models, numerical models are helpful, since human-specific parameters can be applied to them directly. A numerical model that can also predict the effect of ultrasound waves on drug delivery would significantly decrease the cost of studies. Thus, it is necessary to investigate the ability of the available equations to simulate the UCED process.
In this study, an experimental setup was developed to mimic the ultrasound-assisted drug delivery procedure. An array of ultrasound transducers with a resonance frequency of 1.1 MHz was used to focus the ultrasound waves. Agarose gels were synthesized to mimic the porous microenvironment of the brain, and drug was injected into the gels by a syringe pump. Then, the effect of FUS waves on driving the drug through the brain-mimicking gel was studied for different exposure times and acoustic intensities. Along with the experiments, a finite element (FE) model was developed to simulate the only available equation in the literature and to evaluate its ability to predict the experimental results.
Materials and methods
Since the aim of this study is to validate the results of the equation for acoustic streaming in the brain against experimental data, we split this investigation into experimental and numerical parts, discussed in turn below.
Experimental setup. Phased array transducers. Six disk transducers (Siansonic, Beijing, China) with a resonance frequency of 1 MHz were arranged annularly in a hemispherical shell to produce FUS waves. The hemispherical shell radius was 10 cm, and the disk radius of each transducer was 10 mm (focal length = 10 cm, axial FWHM = 3 mm, lateral FWHM = 0.6 mm). These transducers were 2 mm thick, with a maximum power of 30 W per transducer. The physical properties of these transducers are listed in Supplementary Table 1.
Electrical generator. An electrical generator (EMTco; Exon Electro-medical Technologies, Tehran, Iran) was utilized to produce 1-MHz sinusoidal ultrasound waves with up to 300 W of electrical power. This generator allows the user to adjust the frequency and acoustic intensity while eliminating noise. Moreover, it uses a ramp function to ensure the transducers' safety.
Brain-mimicking gel preparation. One of the most critical steps in UCED is the selection of a background tissue for drug delivery. As mentioned, various tissues, including equine brain, avian muscle, porcine brain tissue, rodent brain, and agarose gels, have been used in the literature as background tissue to mimic the properties of the human brain. Among these, 0.6-weight-percent agarose brain phantoms have an important advantage: they are easy to make, without costing a life. They are also transparent, which allows us to trace the drug distribution. Furthermore, it has been reported that they mimic the neurological properties of human brain tissue well, particularly in CED 15,18 .
The gels were prepared based on the recipe given in 15,18 . For the preparation of 1X TBE buffer, first, 108 g of Tris was mixed with 55 g of boric acid. Next, 750 ml of dH2O was added and all three ingredients were mixed completely. Then, 40 ml of 0.5-M (7.5 g) EDTA disodium was added. The flask containing the solution was filled to a final volume of 1 L by adding dH2O and was stored at room temperature. Finally, a 1:10 dilution of the prepared TBE with dH2O created a 1X working solution. The buffer had a final pH of ~8.3 and did not require adjustment.
Agarose gel was prepared using a weight/volume ratio of 0.6-0.7%. This optimal percentage of agarose mimics the brain microstructure efficiently 15 . Thus, 0.6 g of agarose powder was added to a flask containing 100 ml of TBE buffer. The agarose was allowed to sit in the solution for a few minutes before stirring. A stir bar and stirring plate were used to rapidly mix the solution. The flask was covered with plastic wrap, and a small hole was made in it to allow the solution to vent. The flask was heated in a household microwave for 30 s and the solution was then stirred again, gently. This process was repeated until a homogeneous agarose solution was obtained. Then, the solution was cooled down to 55-60 °C and poured into a plastic container. The gel was then refrigerated for a few hours to solidify.

Drug injection system. The drug injection system included a syringe pump, a peripheral venous catheter, a connector tube, and a drug solution. Since the aim of this study is to investigate the effect of FUS waves on improving drug penetration in CED, the amount of drug solution and the volumetric flow rate injected into the background tissue must be specified so that the experiment can be reproduced in later studies. A syringe pump was utilized to inject 100 µL of drug solution at a volumetric flow rate of 60 µL/min via a peripheral venous catheter. Red food coloring was used to mimic the drug solution, allowing the drug distribution to be tracked inside the agarose phantom 18 .
As the name of the connector tube suggests, it connects the syringe pump to the peripheral venous catheter.
Ultrasound-assisted convection-enhanced delivery. After preparing the brain-mimicking gels and injecting the drug solutions into the samples, they were exposed to FUS waves. Figure 2 shows a schematic of the UCED process. Four different experiments were conducted to better understand the effect of exposure time and intensity (spatial-peak temporal-peak intensity; I SPTP ) of the FUS waves on drug penetration: one CED and three different UCEDs. Table 1 lists the acoustic intensities and exposure times used in these experiments.
Acoustic streaming and drug penetration depth estimation. In this section, the ultrasound field generated by an FUS transducer is obtained first. Then, treating the brain tissue as a porous medium (with low permeability), the acoustic streaming equations are solved. An FE model was developed to evaluate the ability of the acoustic-streaming equation in porous media to predict the experimental results. To develop this FE model, we used COMSOL Multiphysics, which is well suited to coupling and solving problems governed by different physical equations and is widely used in biomedical applications of ultrasonic waves [23][24][25][26] .
Ultrasound wave propagation calculation. In this study, a spherical 2D-axisymmetric transducer with a focal length of 50 mm and a resonance frequency of 1.1 MHz was used as the FUS source. The frequency was set at 1.1 MHz so that the model could be validated against the data in the literature while remaining close to the experimental value (1 MHz) with adequate precision. In order to obtain the ultrasound field variables, such as pressure and velocity, the Helmholtz equation was solved:

∇ · (−(1/ρ_c) ∇p) − (k_eq² / ρ_c) p = 0. (1)

In the above equation, p is the ultrasound pressure. Furthermore, the equivalent wave number (k_eq) and complex density (ρ_c) are defined as 27 :

k_eq = ω/c − iα, (2)

ρ_c = ρ c² k_eq² / ω², (3)

in which α is the attenuation coefficient and ρ, ω and c denote the density, angular frequency, and sound speed, respectively.
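As a quick numerical illustration of Eqs. (2) and (3), the snippet below evaluates the equivalent wavenumber and complex density; the density, sound speed, and attenuation coefficient used here are generic placeholder values, not those of Supplementary Table 2.

```python
import math

rho, c, alpha = 1000.0, 1500.0, 1.0   # kg/m^3, m/s, Np/m (assumed values)
f = 1.1e6                             # driving frequency, Hz
omega = 2.0 * math.pi * f

k_eq = omega / c - 1j * alpha             # Eq. (2): equivalent wavenumber
rho_c = rho * c**2 * k_eq**2 / omega**2   # Eq. (3): complex density

print(f"k_eq = {k_eq:.3f} 1/m")           # ~ (4607.7 - 1.0j) 1/m
print(f"rho_c = {rho_c:.3f} kg/m^3")
```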
The acoustic properties of brain-mimicking gels are presented in Supplementary Table 2. Given the fact that brain-mimicking gels are symmetrical about the propagation line of the ultrasound waves, a 2D-axisymmetric FE model was developed to simulate UCED.
In the model, in order to generate ultrasound waves, a normal displacement was applied at the transducer boundary. The other boundaries were assumed to be plane wave radiation boundaries, which allow sound beams to pass through without any reflection. Figure 3a shows the geometry and boundary conditions of the FUS transducer, which can be compared with the transducers used in the experiments, shown in Fig. 3b. It is worth mentioning that, after revolution around the axis of symmetry, the ultrasound domain becomes a cylinder, which is a good representation of the experimental environment.
Acoustic streaming in porous media. While various studies have investigated the acoustic streaming effect, only one study has proposed acoustic streaming equations in a porous medium. The equations proposed by Raghavan for the streaming velocity and pressure (Eqs. (4) and (5)) take the form of an incompressible Darcy-type flow driven by the time-averaged acoustic forcing 20 . Here, p, v, and ρ_1 are, respectively, the pressure, velocity, and density of the sound field. The porosity of the tissue is defined by the unitless parameter ε, and the mobility γ is expressed in terms of the hydraulic conductivity K.
Safety considerations. While ultrasound waves are a promising tool for improving drug delivery efficiency and can significantly reduce drug side effects, the safety of ultrasound as an imaging or therapeutic tool must be considered. One of the simplest ways to define safety guidelines for an ultrasound system is the mechanical index (MI). This unitless number draws a dividing line between the circumstances in which cavitation initiates and those in which no, or only weak, steady vibrations of bubbles are present. These bubbles either already exist in the medium or can be generated when ultrasound waves are applied to the human body. The MI can be determined with the formulation below 17,28 :

MI = P_n / √f,

where P_n denotes the peak negative pressure in MPa and f is the frequency of the waves in MHz. In this study, the maximum pressure corresponds to Experiment III. From the simulations, and under the conditions of Experiment III, the peak negative pressure is 0.8 MPa and the resonance frequency of the source is 1.1 MHz. Therefore, MI = 0.76 is the maximum value that occurs in our experiments. The safety threshold is usually taken to be MI = 1.9 28 . Therefore, in view of the MI, the parameters of our setup are safe even for use in the human brain and have pre-clinical value.
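The MI arithmetic can be checked in one line; the sketch below simply encodes the formula above with the simulated values quoted for Experiment III.

```python
import math

def mechanical_index(p_neg_mpa: float, f_mhz: float) -> float:
    """Mechanical index MI = P_n / sqrt(f), with P_n in MPa and f in MHz."""
    return p_neg_mpa / math.sqrt(f_mhz)

# Values from the simulation of Experiment III
mi = mechanical_index(p_neg_mpa=0.8, f_mhz=1.1)
print(f"MI = {mi:.2f}")   # ~0.76, below the usual 1.9 safety threshold
assert mi < 1.9
```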
Validation
Since Eqs. (4) and (5) give the acoustic streaming velocity in a porous medium, and the experimental studies reported drug penetration depths before these equations were published, Raghavan validated his equation using exposure times. He extracted the penetration depths from figures or tables in past studies and calculated the exposure time by the following integration 20 :

T = ∫ φ / v(r) dr, (7)

where φ is the porosity, v(r) is the velocity of the liquid, T is the exposure time, and r is the radial distance (in spherical coordinates). Finally, he compared the calculated exposure times with the prior experiments. This method of validation, however, gave diverse results: some exposure times were close to those in the experiments, while others did not match the experimental data. Given that El-Ghamrawy et al. 29 reported average streaming velocities at different acoustic intensities, a different validation approach was adopted in the study at hand. In this approach, first, the acoustic pressure and velocity were extracted by solving Eq. (1) with the maximum acoustic intensities and acoustic parameters set to those reported by El-Ghamrawy et al. (Supplementary Fig. 1) 29 . Second, the streaming velocity profiles were calculated from Eqs. (4) and (5). Eventually, the average streaming velocities were obtained. As Fig. 4 illustrates, good agreement exists between the findings of this study and the reported data: relative to the data from the literature, the differences in average streaming velocities between the two methods are 44.33%, 17.33% and 8.18% for the 159 W/cm², 646 W/cm² and 1317 W/cm² ultrasound intensities, respectively.
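To make the exposure-time integration of Eq. (7) concrete, the sketch below evaluates it numerically for a hypothetical velocity profile; the porosity, profile shape, and integration limits are illustrative assumptions, not values fitted to our data.

```python
import numpy as np
from scipy.integrate import quad

phi = 0.2                      # porosity (assumed)

def v(r):
    """Hypothetical streaming velocity profile, decaying as 1/r^2 (m/s)."""
    v0, r_ref = 1.0e-6, 1.0e-3  # 1 um/s at r_ref = 1 mm (assumed)
    return v0 * (r_ref / r) ** 2

# Exposure time for the drug front to travel from r0 to r1, Eq. (7)
r0, r1 = 1.0e-3, 5.0e-3        # meters
T, _ = quad(lambda r: phi / v(r), r0, r1)
print(f"T = {T / 60:.1f} min")
```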
Results and discussion
In this section, the results of the experimental and numerical parts are reported. First, the effect of ultrasound waves on enhancing the penetration of the drug solution through the gels is reported; these results are then quantified, and finally the drug volume distribution of each sample is presented. Second, the ultrasound intensities are set to the values of the experiments, and the streaming velocities are calculated. Finally, the drug penetration depths from the experiments and the simulations are compared, and the ability of the FE model to predict the empirical results is discussed.

(Figure caption: the experimental conditions are listed in Table 1. Subfigure (e) shows how the volume of the penetrated drug was calculated as a cylinder from the parameters D and H.)

The drug volume distribution grew with ultrasound intensity, by up to 459% relative to CED, as the intensity was increased from 6.72 to 9.67, 13.20 and then 17.20 W/cm² (Experiment I) within the exposure time of 30 min. In order to assess the impact of the exposure time of the ultrasound waves in UCED, Experiments II and III were designed, in which the ultrasound intensity is fixed (at the two values of 13.20 W/cm² and 17.20 W/cm²) while the exposure time is varied. In contrast, in Experiment I the exposure time was constant, and consequently the CED data remained almost constant. As shown in Fig. 6b,c, the change in the CED results (gray) with time is very slow. In other words, time had a relatively small effect on improving the outcomes of CED. However, time had a significant effect on the UCED drug volume distributions. That is, in Experiment II, by increasing the time from 40 to 50 min, the drug volume distribution grew by 48%, whereas this number was about 12% for CED, which shows the effectiveness of the UCED method.
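For reference, the cylindrical volume estimate from the figure caption can be written as a small helper; the diameters and heights below are made-up example numbers, not measured values.

```python
import math

def drug_volume_ul(d_mm: float, h_mm: float) -> float:
    """Cylindrical estimate of the penetrated drug volume,
    V = pi * (D/2)^2 * H, returned in microliters (1 mm^3 = 1 uL)."""
    return math.pi * (d_mm / 2.0) ** 2 * h_mm

# Hypothetical CED vs. UCED measurements (illustrative values only)
v_ced = drug_volume_ul(d_mm=4.0, h_mm=8.0)
v_uced = drug_volume_ul(d_mm=6.0, h_mm=16.0)
print(f"enhancement = {100.0 * (v_uced - v_ced) / v_ced:.0f}% relative to CED")
```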
Since the only force propagating the drug solution into the brain tumor in CED after the syringe pump (i.e., the driving pressure) is switched off is diffusion, it is no surprise that the drug volume distribution of the CED samples grew slowly over the specified time. Moreover, it is worth mentioning that in Experiment II, enhancements of 180%, 308%, 318% and 421% were achieved by using UCED, when compared to CED, for exposure times of 20, 30, 40 and 50 min, respectively. These values were 260%, 459%, 503% and 500% for Experiment III. In clinical practice, this improvement in drug volume distribution could translate into better drug delivery to brain tumors and, hopefully, effective treatment of these tumors.
While it has been reported that increases in time and ultrasound intensity lead to better drug delivery and greater drug volume distribution through the brain-mimicking gels, it must be noted that there is always an upper limit for the exposure time and ultrasound intensity, since as time and intensity grow, the thermal effects of the FUS waves become stronger and may lead to necrosis of healthy brain tissue. Furthermore, one could claim that it simply takes time for the CED results to reach the outcomes of UCED, and that given enough time, on the scale of days, CED could replicate the results of UCED. This claim is valid for a passive drug delivery background; but in a real brain, with its circulatory system and other complex structures, the major portion of the drug solution dissipates through clearance into the capillaries and brain tissue, and thus CED cannot replicate the efficiency of UCED 16,30 .
Another possible scenario is extending the time of applying pressure and continuing the infusion until the distribution of the drug reaches a desired value. While prolonging the infusion can greatly improve the efficiency of drug delivery, this process can sometimes take hours, which is infeasible 17 . In any case, the main aim of this paper is not to contradict CED but to make it more feasible and quicker. Returning to the numerical model, Raghavan's equation describes the multi-dimensional acoustic streaming for soft, porous materials and does not contain any terms for the circulatory system or other features of a real brain. However, since we used brain-mimicking gels, which are inherently passive, this equation is appropriate for developing a numerical model of this drug delivery method. To achieve this goal, first, the ultrasound intensities were set to the values used in the experiments (Supplementary Fig. 2), and the ultrasound pressures were calculated (Eq. (1)). Then, these pressures were used as the inputs of Eqs. (4) and (5). As there is no pre-defined physics interface for Eqs. (4) and (5), two Coefficient Form PDE interfaces were set up in COMSOL to obtain the streaming velocity and pressure. Since the effect of the waves in producing streaming velocities is much more significant along the axis of symmetry of the FUS transducer, the other directions are assumed to make zero contribution to the streaming 31 . By solving Eqs. (4) and (5), the streaming velocity profiles were obtained, as displayed in Supplementary Fig. 3. Next, the average streaming velocities were calculated. These average velocities were multiplied by the exposure time of each experiment to obtain the drug penetration depths. To compare the drug penetration depths of the numerical model and the experimental setup, these values are shown in Fig. 7a,b,c for Experiments I, II and III, respectively. These data show that Eqs. (4) and (5) produce results that match the experimental results in all three experiments with only a slight difference. While the difference between the numerical and experimental results varies between experiments, for most of them the mean error remained under 23% (Table 2), which demonstrates the ability of this numerical model to replicate the experimental data. When outlier data (> 30%) are removed, the mean values of the simulation-experiment difference are 20.11 ± 8.92%, 7.01 ± 4.52% and 10.69 ± 6.84% for Experiments I to III, respectively, where the difference percentage is calculated relative to the experimental data as reference. Figure 7d reports the mean value of the simulation-experiment difference, alongside the standard error bar, showing good agreement between the numerical and experimental results in a collective sense. This validation method fills the gap between experimental efforts and the results from solving the equation proposed by Raghavan 20 . Further, the results are consistent for all the cases, in contrast to the diversity reported in 20 for validating the results of UCED using Eq. (7). It is worth mentioning that most of the simulated penetration depths are smaller than the experimental results. This difference may be attributed to the thermal effects of the ultrasound waves, which can slightly alter the microstructure of the brain-mimicking gels. Evidence for this is the increase in the simulation-experiment difference at the highest intensity, i.e. 17.20 W/cm² in Experiment I (Fig. 7a), where the heating is expected to be greatest.
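The comparison step itself reduces to a product and a relative difference; a minimal sketch with illustrative numbers in place of the measured ones follows.

```python
# Penetration depth from the model: average streaming velocity x exposure time
v_avg = 2.0e-6          # average streaming velocity, m/s (illustrative)
t_exp = 30 * 60         # exposure time, s
depth_sim = v_avg * t_exp

depth_exp = 4.0e-3      # measured penetration depth, m (illustrative)

# Relative difference with the experimental value as the reference
diff = 100.0 * abs(depth_sim - depth_exp) / depth_exp
print(f"simulation: {depth_sim * 1e3:.1f} mm, experiment: {depth_exp * 1e3:.1f} mm, "
      f"difference: {diff:.1f}%")
```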
Conclusion
This paper consists of two parts. First, the phased-array transducer of one of our previous studies was modified and redesigned to produce ultrasound waves for UCED. Then, the CED process was conducted, and after loading a drug solution into several brain-mimicking gels, the samples were exposed to ultrasound waves with discrete increases in intensity and exposure time. The outcomes of the experimental part showed that ultrasound waves could enhance the drug volume distribution by up to 500% relative to the CED results. Second, a numerical FE model of the UCED process was developed to assess the performance of the only existing equation for describing acoustic streaming (the primary effect of ultrasound waves in UCED) in a porous medium. For this purpose, we validated our method in two different ways: first, we validated the average streaming velocities against the data of El-Ghamrawy et al., and then we validated our obtained penetration depths against those obtained from the experiments. Our results indicated that the results from Raghavan's equation are in good agreement with the experimental outcomes. Moreover, there was good agreement between the simulation and experimental results, with less than 21% mean difference when the outlier data were excluded.
In recent years, targeted drug delivery methods have significantly improved drug delivery efficiency, drawing on various disciplines and chemical, electrical, and mechanical tools. Among these tools, ultrasound waves have shown promising outcomes in enhancing treatment efficiency. However, most of these efforts are at an early stage and require many investigations to uncover and quantify every parameter that affects such methods. While clinical studies can be a significant step toward understanding these methods, they are high-cost, cannot control some parameters, and raise ethical issues. Given these disadvantages, developing finite element models is valuable because they are low-cost, can be patient-specific, and allow easy control of the geometry and physical parameters. Also, the mechanical properties of human tissue can be used in them for more realistic models. The aim of this study was to validate the experimental outcomes directly with an FE model. This approach can be used to build more complex models of the human brain with a tumor and other live organs to better understand the effect of ultrasound waves on drug delivery to tumors.
Data availability
All data generated or analyzed during this study are included in this published article (and its Supplementary Information files).

Table 2. This table shows the errors of the numerical penetration depths compared to each experiment. The first three rows are data related to Experiment I, and the others are error data for Experiments II and III. The relative differences, with the experimental data as the reference, are listed as well.
"year": 2022,
"sha1": "2bb9a0c3bc39212e183088789e4c3ce1fea2f209",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "2bb9a0c3bc39212e183088789e4c3ce1fea2f209",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Ab initio Calculations of the Linear and Nonlinear Optical Properties of Amino Acids
A number of proteins can assemble into chiral structures that display strong nonlinear optical activity. For instance, proteins such as myosin and collagen exhibit intense second harmonic generation (SHG). A large number of experimental studies on the SHG of proteins have been conducted; however, few predictive models have been proposed that reliably relate the macroscopic SHG properties to the amino acids present in the peptidic chain. In this study, the linear polarizability (α) and first (β) and second (γ) hyperpolarizabilities of all twenty amino acids were investigated by time-dependent Hartree-Fock calculations under physiological conditions. Ab initio calculations were performed using the GAMESS-US computational chemistry package. We have found that the aromatic amino acids give rise to the largest mean α, β and γ values. With this finding, we hope to apply this method to protein structures in order to understand how the second harmonic signal is generated from individual amino acids, as well as to recognize how manipulation of the secondary structure of proteins might enhance SHG and third harmonic generation (THG).
Introduction
Nonlinear optical microscopy based on second harmonic generation (SHG) and third harmonic generation (THG) is currently an emerging technique used in studying biological structures (see review [1]). Harmonic generation microscopy is used to achieve distinct contrast in living tissue [2] [3]. Although both SHG and THG are coherent light processes, SHG only occurs in media with non-central symmetry. THG can be generated in homogeneous media, but when focusing under tight conditions with a microscope, THG is generated at interfaces [4]. For example, THG has been found to originate from multilayer structures such as biological membranes and cell walls [5] [6], while SHG has previously been found to originate from proteins such as myosin and collagen, which consist of ordered helical structures [7] [8] [9]. Due to symmetry restrictions, media exhibit SHG signals at well-defined laser polarizations [10]. This information can be used to determine the orientation of proteins in tissues [10]. Therefore, information related to the organization of the secondary structures of proteins can be extracted from SHG data [10]. A large number of experimental studies on the SHG of proteins have been conducted; however, few predictive models have been proposed that reliably relate the macroscopic SHG measurements to the structure of proteins. For a further understanding of the origin of SHG and THG from various biological structures, modeling of the nonlinear response from molecular structures with various structural organizations is required. In this study, we investigated a relatively simple model for interpreting the nonlinear optical properties of proteins. This was done by examining the influence of the primary structure of proteins on generating both second harmonic and third harmonic signal. Ab initio computational studies were performed using the GAMESS-US computational chemistry package (Gordon Group, Ames Laboratory, Iowa State University) [11]. The time-dependent coupled perturbed Hartree-Fock (TDHF) approach was used to calculate the frequency-dependent linear and nonlinear optical properties of individual amino acids [12] [13]. Specifically, an average for the linear polarizability, α, was expressed, as well as values for the average first hyperpolarizability tensor, β, descriptive of SHG, and the average second hyperpolarizability tensor, γ, descriptive of THG.
Theory
The dipole moment of a molecule induced by an electric field can be expressed as a Taylor series [14]:

μ_i = μ_i⁰ + α_ij E_j + (1/2!) β_ijk E_j E_k + (1/3!) γ_ijkl E_j E_k E_l + ..., (1)

where μ_i⁰ is the permanent dipole moment of the molecule in the absence of an electric field, E is the electric field, α is the linear polarizability, and the coefficients β and γ are the first and second hyperpolarizabilities, respectively. In equation (1) the repeated indices imply summation. In general, α, β, and γ are dependent on the frequency, ω, of the incident electric field. The induced dipole yields linear polarization as well as second harmonic generation at 2ω and third harmonic generation at 3ω.
The mean linear polarizability can be defined as

⟨α⟩ = (α_xx + α_yy + α_zz) / 3. (2)

The mean second hyperpolarizability can be expressed as

⟨γ⟩ = (1/5) [γ_xxxx + γ_yyyy + γ_zzzz + 2(γ_xxyy + γ_xxzz + γ_yyzz)]. (3)

The first hyperpolarizability is zero for all molecules with inversion symmetry. The first hyperpolarizability is a third-rank tensor described by a 3 × 3 × 3 array. All 27 components of the three-dimensional array can be used to find the mean first hyperpolarizability by calculating an average related to the modulus,

⟨β⟩ = (β_x² + β_y² + β_z²)^(1/2), (4)

where

β_i = (1/3) Σ_j (β_ijj + β_jij + β_jji), j = x, y, z. (5)

The expressions (2)-(5) were used to express the linear and nonlinear optical properties of all amino acids.
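As a sanity check on Eqs. (2)-(5), the orientational averages can be computed directly from the full tensors; below is a minimal NumPy sketch, with random tensors standing in for actual TDHF output.

```python
import numpy as np

def mean_alpha(a):
    """Eq. (2): <alpha> = (a_xx + a_yy + a_zz) / 3 for a 3x3 tensor."""
    return np.trace(a) / 3.0

def mean_beta(b):
    """Eqs. (4)-(5): <beta> = |(b_x, b_y, b_z)| with
    b_i = (1/3) * sum_j (b_ijj + b_jij + b_jji)."""
    vec = [sum(b[i, j, j] + b[j, i, j] + b[j, j, i] for j in range(3)) / 3.0
           for i in range(3)]
    return float(np.linalg.norm(vec))

def mean_gamma(g):
    """Eq. (3): <gamma> = (1/5)[g_xxxx + g_yyyy + g_zzzz
    + 2(g_xxyy + g_xxzz + g_yyzz)]."""
    return (g[0, 0, 0, 0] + g[1, 1, 1, 1] + g[2, 2, 2, 2]
            + 2.0 * (g[0, 0, 1, 1] + g[0, 0, 2, 2] + g[1, 1, 2, 2])) / 5.0

rng = np.random.default_rng(0)
alpha = rng.normal(size=(3, 3))        # placeholders for computed tensors
beta = rng.normal(size=(3, 3, 3))
gamma = rng.normal(size=(3, 3, 3, 3))
print(mean_alpha(alpha), mean_beta(beta), mean_gamma(gamma))
```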
Method
Calculations of the linear polarizability and the second- and third-order hyperpolarizabilities at the fundamental wavelength (1028 nm) of our home-built Yb:KGd(WO 4 ) 2 laser [15] were performed by applying the ab initio time-dependent coupled perturbed Hartree-Fock (TDHF) method at the restricted Hartree-Fock (RHF) level using the GAMESS-US program [12] [13], running in parallel on the GPC supercomputer of the SciNet GPC Consortium at the University of Toronto. Calculations were performed on a single node comprising 8 processors. Geometry optimization of each amino acid took no longer than a single day. Similarly, the corresponding polarizability and hyperpolarizabilities of each amino acid were calculated within one day. Output geometry optimization files were as large as 430 000 kB, whereas the hyperpolarizability calculations generated files no larger than 2300 kB. The ground state geometries of the amino acids were fully optimized without any symmetry restrictions using the 6-311++G** basis set at the DFT/B3LYP level at pH 7 [16]. These molecular structures were then used for the nonlinear optical property calculations. As suggested in the literature, reliable calculation of the nonlinear optical properties requires tightened convergence parameters, which included the following GAMESS options: ITOL=30 (products of primitives whose exponential factor is less than 10^-30 are omitted), ICUT=20 (integrals less than 10^-20 are neglected), INTTYP=HONDO (HONDO/Rys integrals are evaluated), ATOL=1.0D-07 and BTOL=1.0D-07 (the tolerance for convergence of first-order results is 10^-7), and NCONV=10 (convergence is achieved when the absolute value of the density change between two consecutive self-consistent field cycles is less than 10^-10) [17]. The nonlinear optical calculations were also done using the 6-311++G** basis set. It has been shown previously that a basis set augmented with diffuse functions is necessary when computing the nonlinear optical properties of molecules [18].
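The frequency supplied to a frequency-dependent TDHF calculation corresponds to the photon energy of the 1028 nm fundamental; a quick, GAMESS-independent conversion to atomic units using standard constants is sketched below.

```python
# Convert the laser wavelength to a photon energy in atomic units (Hartree),
# as needed for a frequency-dependent TDHF calculation.
HC_EV_NM = 1239.841984   # h*c in eV*nm
HARTREE_EV = 27.211386   # 1 Hartree in eV

wavelength_nm = 1028.0
energy_ev = HC_EV_NM / wavelength_nm
energy_au = energy_ev / HARTREE_EV
print(f"E = {energy_ev:.4f} eV = {energy_au:.5f} a.u.")  # ~1.206 eV ~ 0.04432 a.u.
```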
Results
The mean linear polarizability, first hyperpolarizability, and second hyperpolarizability of all twenty amino acids at pH 7 were found by applying equations (2)-(5); the values are listed in Table 1. From Table 1, larger amino acid structures give rise to increased linear polarizabilities due to increased electron density and delocalization. Of all the different chemical derivatives, the aromatic amino acids phenylalanine, tryptophan, and tyrosine have the largest linear polarizability values. Aromatic amino acids contain conjugated residues, which contribute to the increased polarizability through the delocalization of π-electrons. Aromatic amino acids may, for this reason, play a significant role in the nonlinear response of proteins [19]. In particular, tryptophan has the largest mean linear polarizability, as well as the largest mean first and second hyperpolarizability values. In general, a large linear polarizability is necessary to obtain substantial hyperpolarizabilities, though there are additional factors associated with increased hyperpolarizability values [20].
Due to symmetry restrictions, the SHG response of a molecule is highly direction-dependent, with each individual β tensor component varying. Eighteen of the twenty-seven unique components of the first hyperpolarizability tensor were calculated, and Kleinman symmetry was assumed for the other nine β tensor elements. An overall orientational average ⟨β⟩ was calculated using equation (4). In particular, phenylalanine, tryptophan, and histidine demonstrate the largest ⟨β⟩ values. These molecules are polarizable in multiple directions, and therefore the overall average of β is large. We express an orientational average for the first hyperpolarizability; however, when amino acids are aligned, such as in proteins, β is best expressed by its individual tensor components.
From the calculations of the second hyperpolarizability, the aromatic amino acids demonstrate the largest γ values due to increased π-electron delocalization. However, the amino acids with a second carboxylic acid group, aspartic acid and glutamic acid, also give rise to large γ values. Although these two amino acids are not conjugated, both contain two carboxylic acid groups, which are deprotonated at pH 7. Electron density is delocalized over the deprotonated carboxylic acid groups at both ends of the amino acid, which therefore likely enhances the polarizability of aspartic acid and glutamic acid.
Conclusions
We have demonstrated that ab initio calculations can help in interpreting the origin of SHG and THG from the primary structure of proteins. Each amino acid alone does not give rise to a high nonlinear optical signal; however, increased SHG and THG can be achieved by coherently summing the nonlinear response of many amino acids. In particular, the aromatic amino acids are highly polarizable due to π-electron delocalization and may be desirable candidates for increasing SHG and THG when forming secondary structures. We hope to use this fundamental understanding of the origin of SHG and THG from amino acids to manipulate the secondary structure of proteins in order to enhance the response of SHG and THG.
"year": 2010,
"sha1": "76c141c20060a26d5162489f3b1823590572c5a8",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/256/1/012015",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "c41d85d2263d801621812934dcd884b02dc2abfc",
"s2fieldsofstudy": [
"Chemistry",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Chemistry"
]
} |
Revisiting scalar leptoquark at the LHC
We investigate the Standard Model (SM) extended with a colored charged scalar, a leptoquark, having fractional electromagnetic charge −1/3. We mostly focus on the decays of the leptoquark into the second and third generations via the c μ and t τ decay modes. We perform a PYTHIA-based simulation considering all the dominant SM backgrounds at the LHC with 14 TeV center of mass energy. Limits have been calculated for the leptoquark mass that can be probed at the LHC with an integrated luminosity of 3000 fb⁻¹. The leptoquark mass, reconstructed from its decay products into the third generation, has the maximum reach. However, the μ + c channel, comprising a very hard muon and a c-jet, produces a much cleaner mass peak. Single leptoquark production in association with a μ or ν provides some unique signatures that can also be probed at the LHC.
Introduction
Leptoquarks, arising in several extensions of the standard model (SM), are particles which can turn a lepton into a quark and vice versa. Beyond standard model (BSM) theories which treat the leptons and quarks on the same footing, like SU(5) [1], SU(4)_C × SU(2)_L × SU(2)_R [2], or SO(10) [3,4], contain such particles. Theories with composite models [5] and technicolor models [6] can also have such particles. Leptoquarks carry both baryon and lepton numbers simultaneously.
The discovery of leptoquarks would be an unambiguous signal of physics beyond the SM, and hence searches for such particles were conducted in past experiments, and the hunt is still going on at the present collider. Unfortunately, so far, all searches have led to a negative result. However, these searches have received further attention in view of the possibility that leptoquarks explain certain striking discrepancies observed in the flavor sector. The discrepancies are observed mostly in rare decay modes of B mesons by various experimental collaborations, like LHCb, Belle and BaBar, hinting towards lepton non-universality. Previous collider studies on leptoquark searches can be found in Refs. [7][8][9][10][11][12][13][14][15][16][17][18][19][20].
In this article we consider the LHC phenomenology of a scalar leptoquark which has the quantum numbers (3, 1, −1/3) under the SM gauge group. As mentioned above, the leptoquark can explain some of the observed anomalies [21,22]; however, in this article we mainly focus on the collider perspective. The presence of the leptoquark also improves the stability of the electroweak vacuum significantly [23]. A study at ATLAS [24] with 13 TeV data puts bounds on the scalar leptoquark mass of ≳ 1 and 1.2 TeV when such a leptoquark decays to u e and c μ with 100% branching fraction, respectively. Another very recent study with 13 TeV data from the CMS collaboration [25] imposes the most stringent bound on the leptoquark mass, ≥ 900 GeV, in the search through t τ final states with 100% branching fraction. The previous results, with 8 TeV data, from the search for single leptoquark production are much weaker, ≥ 660 GeV [26], for its decay to c μ.
As mentioned above, a leptoquark with a hypercharge of −1/3 has been looked for in CMS experiments via its third generation decay mode, i.e., t τ [25]. However, no searches have been performed for final states comprising the decays of the leptoquark involving both the second and third generations. In this article we focus mainly on the third generation, and also on controlled second generation, decay phenomenology for such leptoquarks, which can probe the most favored region of the parameter space required by the other studies.
A preference for the third generation promotes the decays of the leptoquark into the t τ mode over other decay modes. This changes the search phenomenology drastically, which is the topic of this article. Apart from the decay, such a parameter space also allows single leptoquark production in association with a ν via b-gluon fusion and in association with a μ via c-gluon fusion. In this respect we focus on leptoquark pair production as well as single leptoquark production at the LHC.
The paper is organized as follows. In Sect. 2 we briefly describe the model. The parameter space that is allowed when a leptoquark dominantly decays into the second and/or third generations is studied in Sect. 3. The benchmark points and collider phenomenology are discussed in Sect. 4. The LHC simulation results for the final states coming from leptoquark pair production are presented in Sect. 5. In Sect. 6 we discuss the leptoquark mass reconstruction and the reach at the current and future LHC. The last two discussions are repeated for single leptoquark production in Sect. 7. Finally, in Sect. 8 we discuss the prospects of the leptoquark at future colliders and summarize the results.
The leptoquark model
We consider the SM extended with a colored, SU(2) singlet charged scalar φ, i.e., the leptoquark with the SM gauge quantum numbers (3, 1, −1/3). The relevant interaction terms are

L ⊃ Y_L Q̄^c iτ₂ L φ* + Y_R ū^c_R ℓ_R φ* + h.c. (2.1)

Here Q and L are the SU(2)_L quark and lepton doublets, given by Q = (u_L, d_L)^T and L = (ν_L, ℓ_L)^T, and u^c_R and ℓ_R are the right-handed SU(2)_L singlet up-type quark and the right-handed charged lepton, respectively. The generation and color indices are suppressed here.
The leptoquark also interacts with the SM Higgs doublet H via the scalar potential term

V ⊃ g_hφ (φ†φ)(H†H). (2.2)

It is shown in Ref. [23] that the coupling g_hφ plays an important role in improving the stability of the electroweak vacuum.
A moderate value of g_hφ (≥ 0.3) can make the vacuum (meta-)stable up to the Planck scale for the top quark mass measured at the Tevatron [27]. The leptoquark φ has an electric charge of −1/3 unit and is also charged under SU(3)_c. A similar state can also arise from a leptoquark triplet with gauge quantum numbers (3, 3, −1/3), which comprises three states with electric charges −4/3, −1/3 and 2/3; however, the interactions are different in this case.
The Lagrangian in Eq. (2.1) is written in the flavor basis, and the rotation of the fermion fields should be included in the definitions of the Y_L,R matrices when performing the phenomenology in their mass basis. Thus, in general, the matrices Y_L and Y_R have off-diagonal terms, leading to lepton-quark flavor as well as generation violating couplings. The off-diagonal couplings are strongly constrained by various meson decay modes [28][29][30][31][32][33][34], and hence, for the analysis in our paper, we assume Y_L,R to be diagonal. For simplicity, after performing the rotations via the CKM (PMNS) matrix for the down-type quarks (neutral leptons) to move to the mass basis, we introduce a compact notation for the resulting couplings.
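Schematically, with a diagonal Y_L in the flavor basis, the coupling entering the down-quark-neutrino vertex picks up CKM and PMNS factors, while the up-quark-charged-lepton coupling is unchanged in an up-aligned basis. The toy sketch below illustrates this rotation; the numerical Yukawa entries and the precise transpose/conjugation conventions are assumptions for illustration only.

```python
import numpy as np

Y_L = np.diag([0.0, 0.1, 0.5])   # diagonal leptoquark Yukawas (illustrative)

# Rough CKM magnitudes (leading-order Wolfenstein); PMNS set to identity here
lam = 0.225
V_ckm = np.array([[1 - lam**2 / 2, lam, 0.004],
                  [-lam, 1 - lam**2 / 2, 0.04],
                  [0.008, -0.04, 1.0]])
U_pmns = np.eye(3)

# Up-quark/charged-lepton coupling stays Y_L in the up-aligned basis;
# the down-quark/neutrino coupling is rotated (convention-dependent sketch)
Y_dnu = V_ckm.T @ Y_L @ U_pmns
print(np.round(Y_dnu, 3))
```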
Revisiting leptoquark parameter space
The search for leptoquarks at colliders, especially at the LHC, has drawn a lot of interest over the last few years. The subject has recently received further impetus from the possibility of explaining the lepton non-universal anomalies seen in B decays with leptoquarks. From the experimental point of view, it is much simpler to look for final states involving a first or second generation lepton. Unfortunately, no sign of an excess has been seen in such searches, which eventually puts bounds on the leptoquark mass as follows: a scalar leptoquark with a mass of ∼ 1 TeV is excluded at 95% confidence level assuming a 100% branching ratio into a charged lepton (first or second generation) and a quark [24]. Depending upon the gauge quantum numbers, the leptoquark can also decay to b τ final states. Searches for this type of leptoquark have also been performed in Ref. [35], which excludes leptoquark masses up to 740 GeV under the assumption of a 100% branching fraction. In this work we focus on the parameter space of a scalar leptoquark which decays predominantly to the t τ and b ν final states. Both CMS [25,36] and ATLAS [37] have performed searches at 7-8 TeV and also at 13 TeV center of mass energy, where the lower bounds on the leptoquark mass are found to be 900 GeV and 625 GeV, respectively, for the final states mentioned.
In Fig. 1 we illustrate that a leptoquark mass > 600 GeV is still allowed, within 95% confidence level, for comparatively lower branching fractions to the second and third generations. The allowed regions are shown as yellow bands for the leptoquark decaying to the b ν, c μ and t τ final states, respectively. The blue solid and dashed curves denote the next-to-leading order (NLO) pair-production cross-sections for the choices of scale μ = m_φ and μ = √ŝ, respectively. We use the notation β = B(φ → b ν) = 0.39 (Fig. 1a), β = B(φ → c μ) = 0.1 (Fig. 1b) and β = B(φ → t τ) = 0.61 (Fig. 1c). Later we shall discuss the collider phenomenology for three specific choices of benchmark points.
Benchmark points and distributions
It is apparent from the previous section that a mass range below a TeV is still allowed for the leptoquark for relatively lower branching fractions to second and third generation leptons and quarks. In this article we focus on the searches for the final states that arise from combinations of the leptoquark decays into the second (c μ) and third (t τ) generations. Motivated by such decays, we select the three benchmark points presented in Table 1.
We consider two benchmark points with a relatively light leptoquark mass of 650 GeV and a third one with 1.2 TeV, in BP1, BP2 and BP3, respectively, for a collider study at the LHC with 14 TeV center of mass energy. We have implemented the model in SARAH [38] and generated the model files for CalcHEP [39], which is then used for calculating the decay branching ratios and tree-level cross-sections and for event generation. Table 2 shows the decay branching fractions of the leptoquark, φ. For BP1 and BP3, the leptoquark dominantly decays into the third generation: 60.8% and 63.2% to t τ and 39.2% and 36.8% to b ν, respectively. However, in the chosen BP2 the leptoquark also decays into the second generation, i.e., 10.4% into c μ and 10.4% into s ν.
Table 2. Branching fractions of the leptoquark φ to different decay modes for the benchmark points defined in Table 1.

Table 3 shows the leptoquark pair-production cross-sections for the benchmark points, where CTEQ6L [40] is used as the PDF and √ŝ is chosen as the renormalization/factorization scale. Suitable k-factors for the NLO cross-sections are implemented [9,10,15,16]. The choice of √ŝ as the scale gives a conservative estimate, which can be enhanced by ∼ 40% for the choice of m_φ as the renormalization/factorization scale.
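Given the branching fractions in Table 2, the relative population of each pair-production topology follows from simple combinatorics; the sketch below uses the BP1 numbers together with an illustrative cross-section, not the values of Table 3.

```python
# BP1 branching fractions (Table 2)
br_ttau, br_bnu = 0.608, 0.392

sigma_pair_fb = 15.0   # illustrative pair-production cross-section in fb
lumi_fb = 100.0        # integrated luminosity in fb^-1

# Probability that a leptoquark pair decays to a given combination
p_ttau_ttau = br_ttau**2               # both legs -> t tau
p_ttau_bnu = 2.0 * br_ttau * br_bnu    # mixed (factor 2 for the two orderings)

n_pairs = sigma_pair_fb * lumi_fb
print(f"t tau / t tau: {n_pairs * p_ttau_ttau:.0f} events")
print(f"t tau / b nu : {n_pairs * p_ttau_bnu:.0f} events")
```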
Before going into the details of the collider simulation, let us look at the different differential distributions to motivate the advanced cuts which will be used later to reduce the SM backgrounds. Figure 2a shows the p_T of the leptons arising from the W± in the cases of BP1 and BP3. For BP2, however, an additional source of muons is possible from the decay of the leptoquark, and these can be very hard. The charged leptons coming from W± decay in the case of BP3 are also relatively hard due to the higher mass of the leptoquark (m_φ = 1.2 TeV). Hence, eventually, we expect much harder charged leptons compared to the SM processes. Figure 2b shows the charged lepton (e, μ) multiplicity distribution for the three benchmark points, where the third and fourth charged leptons come from the semileptonic decays of b quarks or the decays of τs, which can be hard enough to be detected as charged leptons in the electromagnetic calorimeter (ECAL) of the detector at the LHC. Figure 3a shows the p_T of the first two p_T-ordered jets for BP1 and BP3, respectively. The respective leptoquark masses of 650 and 1200 GeV for BP1 and BP3 result in relatively soft and hard jets, respectively. The p_T distributions of BP2 are very similar to those of BP1 due to the same choice of leptoquark mass. Nevertheless, irrespective of the benchmark point, the requirement of a very hard leading jet is critical in reducing the SM backgrounds, including tt, which can still give a high-p_T tail. Figure 3b shows the jet multiplicity distribution for BP1 and BP3; the peak values for both are at five.
A leptoquark decaying to t τ gives rise to many hard τ-jets, which can easily be distinguished from the relatively soft τ-jets coming from W± decays. Figure 4a describes this feature: the τ-jets coming from the decay of the leptoquark in BP3 are the hardest, for BP1 they are softer, and for the tt background the p_T of such τ-jets is really low compared to the signal. A cut on such τ-jets can be decisive in killing the dominant SM backgrounds. Figure 4b shows the τ-jet multiplicity in the final states; a maximum of four τ-jets can be achieved when both W±s decay in the τν mode. All these distributions will be crucial in the next section, where we apply additional cuts to decide on the final state topologies.
Collider phenomenology
We focus on the phenomenology arising from the decays of the leptoquark into the second and third generations. The first part of the study concentrates on the final states arising from leptoquark pair production, but the contributions from single leptoquark production are also taken into account whenever such contributions are non-negligible. For the simulation at the LHC with a center of mass energy of 14 TeV, we generate the events with CalcHEP [39].
The generated events are then mixed according to the decay branching fractions, written in the decay file in SLHA format, by the event_mixer routine [39] and converted into 'lhe' format. The 'lhe' events for all benchmark points are then simulated with PYTHIA [41] via the lhe interface [42]. The simulation at the hadronic level has been performed using Fastjet-3.0.3 [43] with the CAMBRIDGE-AACHEN algorithm. We have selected a jet size R = 0.5 for the jet formation. The following basic cuts have been implemented:

• the calorimeter coverage is |η| < 4.5;
• the minimum transverse momentum of a jet is p_T,min^jet = 20 GeV, and jets are ordered in p_T;
• leptons (ℓ = e, μ) are selected with p_T ≥ 20 GeV and |η| ≤ 2.5;
• no jet should be accompanied by a hard lepton in the event;
• ΔR_ℓj ≥ 0.4 and ΔR_ℓℓ ≥ 0.2;
• since an efficient identification of the leptons is crucial for our study, we additionally require the hadronic activity within a cone of ΔR = 0.3 around each isolated lepton to be ≤ 0.15 p_T^ℓ, with p_T^ℓ the transverse momentum of the lepton in the specified cone (see the sketch after this list).

In the following subsections, we discuss the phenomenology coming from leptoquark pair production at the LHC as we describe the different final state topologies. For notational simplicity we refer to 'b', 'c' and 'τ' as b-jet, c-jet and τ-jet, respectively. As mentioned above, we include the single leptoquark contribution whenever it is necessary. Later we shall also investigate how single leptoquark production can generate different final state topologies.
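For concreteness, the lepton isolation requirement can be phrased as a small helper; the event representation here (plain tuples of p_T, η, φ) is a toy stand-in, not the PYTHIA/Fastjet data format used in the actual analysis.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation DeltaR = sqrt(deta^2 + dphi^2), with phi wrapped."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(lepton, hadrons, cone=0.3, frac=0.15):
    """Hadronic activity within DeltaR < cone must not exceed frac * pT(lepton)."""
    pt_l, eta_l, phi_l = lepton
    activity = sum(pt for pt, eta, phi in hadrons
                   if delta_r(eta_l, phi_l, eta, phi) < cone)
    return activity <= frac * pt_l

lepton = (120.0, 0.5, 1.0)                       # (pT [GeV], eta, phi)
hadrons = [(3.0, 0.6, 1.1), (40.0, 2.0, -2.0)]   # toy hadron list
print(is_isolated(lepton, hadrons))              # True: only 3 GeV in the cone
```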
2b + 2τ + 2ℓ

This final state occurs when both pair-produced leptoquarks decay into a third generation lepton and quark, i.e., t τ. The top pair then further decays into 2 b quarks and 2 W± bosons. This gives rise to the final state 2b + 2τ + 2ℓ listed in Table 4, where the event numbers are given for the three benchmark points and the dominant SM backgrounds, with the cumulative cuts, at the 14 TeV LHC with an integrated luminosity of 100 fb⁻¹. Here we collect both lepton flavors (e, μ) coming from the W± decays. The τ-jets are reconstructed from hadronic decays of the τ with at least one charged track within ΔR ≤ 0.1 of the candidate τ-jet [44]. The b-jets are tagged via secondary vertex reconstruction, and we take a single b-jet tagging efficiency of 0.5 [45]. The requirement of two b-jets, two τ-jets and two opposite-sign charged leptons, along with the invariant mass veto around the Z mass for the di-lepton and di-τ-jet systems, makes the most dominant SM backgrounds, such as tt, ZZZ, ttbb and gauge boson pairs, reducible. Some contributions coming from ttZ and tZW also fade away after the invariant mass veto on the di-τ-jets. It is evident that BP1, having a leptoquark of mass 650 GeV, can be probed with very early data of ∼ 100 fb⁻¹ luminosity, and for BP2 we need ∼ 150 fb⁻¹. However, in the case of BP3, the required luminosity is beyond the reach of the LHC in its current design.
2b + 2τ + 4j
In the scenario when both W±s, coming from the decays of the top pair produced from the leptoquarks, decay hadronically, additional jets arise besides the two b-jets. Here the signal event numbers increase a lot due to the larger hadronic decay branching fraction of the W± (∼ 68%). Table 5 describes the event numbers for the benchmark points and the dominant SM backgrounds for the 2b + 2τ + 4j final state at an integrated luminosity of 100 fb⁻¹. The τ-jet invariant mass veto around the Z mass, i.e., |m_ττ − m_Z| ≥ 10 GeV, reduces the background contributions significantly. The significance of this final state is naturally enhanced compared to the leptonic final state (see Table 4), and it can be probed with very early data of a few fb⁻¹ at the 14 TeV LHC. It seems that this particular final state can give the very first hint towards the discovery of the leptoquark if it dominantly decays into the third generation, i.e., t τ. Even for BP3, which has a leptoquark of mass 1.2 TeV, it can be probed at an integrated luminosity of ∼ 342 fb⁻¹.

Table 6. The number of events for the 1b + 1j + 1τ + 1ℓ + 1μ final state for the benchmark points and the dominant SM backgrounds at the LHC with 14 TeV of center of mass energy and at an integrated luminosity of 100 fb⁻¹. S_sig denotes the signal significance at 100 fb⁻¹ of integrated luminosity and L_5 depicts the required integrated luminosity for a 5σ confidence level for the signal. The '†' denotes the contribution from the c g → φ μ production process.

In Tables 4 and 5, the single leptoquark production via c g → μ φ does not contribute, and thus these final states can probe leptoquarks via pair production only.
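The S_sig and L_5 columns quoted in the tables follow from simple Gaussian counting; the sketch below assumes the S/√(S+B) definition of significance and linear scaling of both signal and background with luminosity, and the event counts are illustrative.

```python
import math

def significance(s: float, b: float) -> float:
    """Gaussian signal significance S / sqrt(S + B)."""
    return s / math.sqrt(s + b)

def lumi_for_5sigma(s: float, b: float, lumi0: float = 100.0) -> float:
    """Luminosity (fb^-1) at which the significance reaches 5, assuming
    both S and B scale linearly with the luminosity."""
    return lumi0 * (5.0 / significance(s, b)) ** 2

# Illustrative event counts at 100 fb^-1 (not the actual table entries)
s, b = 40.0, 10.0
print(f"S_sig = {significance(s, b):.1f}, L_5 = {lumi_for_5sigma(s, b):.0f} fb^-1")
```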
1b + 1j + 1τ + 1ℓ + 1μ
Now we focus on a scenario where both the second and the third generation decays contribute to the final state, i.e., one of the pair-produced leptoquarks decays into t τ and the other into c μ. The c-jet coming from the leptoquark is tagged as a normal jet so that we do not lose events to its tagging efficiency [46]. We also require that the W±, arising from the top decay, decays leptonically. Selecting this kind of decay chain boils down to a final state composed of 1b + 1j + 1τ + 1ℓ + 1μ. The event numbers for the 1b + 1j + 1τ + 1ℓ + 1μ final state for the benchmark points and backgrounds are given in Table 6 at an integrated luminosity of 100 fb⁻¹ at the 14 TeV LHC. This combination is rich in charged leptons of all three flavors, i.e., e, μ, τ, where the τ is tagged as a jet, making it a very unique signal. In the case of BP2, we get an additional contribution from single leptoquark production via c g → μ φ. Both BP1 and BP2 can be explored with very early data of the 14 TeV LHC. However, for BP3, this final state has less to offer.
1b + 3j + 1τ + 1μ
Next we consider a case similar to the previous one, except that one of the W± bosons coming from the leptoquark decays hadronically, giving rise to two additional jets. One muon can come either from the decay of the leptoquark to c μ or from the W± boson when both leptoquarks decay into t τ. Such a scenario creates a 1b + 3j + 1τ + 1μ final state, and the number of events is given in Table 7 at an integrated luminosity of 100 fb⁻¹ at the 14 TeV LHC. Here the potential muon comes either from the decay of one leptoquark in the pair production or from the production of a single leptoquark in association with a muon. For this reason, in the given parameter space, single leptoquark production contributes only for BP2, where such a coupling is non-vanishing. However, due to the reduction of the number of tagged charged leptons in the final state from three to one, we have a sizable background coming from tt, tZW, ttZ and ttbb, even with the requirement that the di-jet invariant mass reproduces the W± mass. If we exploit the fact that the muons coming directly from the decay of the leptoquark are hard, i.e., p_T^μ ≳ 100 GeV (see Fig. 2a), then such an additional cut reduces the potential tt background by a factor of ∼ 7. In contrast, the signal numbers suffer only a minimal reduction. After all the cuts, both BP1 and BP2 can be probed at the 14 TeV LHC with integrated luminosities of ∼ 175 fb⁻¹ and ∼ 54 fb⁻¹, respectively.
1b + 1τ + 2μ
Motivated by the fact that multileptonic final states have smaller SM backgrounds, we tag a 2μ final state, where one muon is very hard, coming from the direct decay of the leptoquark to c μ, and the other can come from the W± boson decay. Here, to keep the final state robust for all the BPs, we do not tag the c-jet. This choice corresponds to a final state of 1b + 1τ + 2μ, where we only tag one b-jet and one τ-jet coming from the decay of the leptoquark into the third generation, and no additional jets are required. Table 8 gives the number of events for the benchmark points and the dominant SM backgrounds at the LHC with 14 TeV center of mass energy and at an integrated luminosity of 100 fb⁻¹. The requirement of an additional muon reduces the dominant tt background to a negligible level. Additional cuts, namely a veto on the di-muon invariant mass around the Z mass and the requirement of at least one muon with p_T ≥ 100 GeV, are applied to reduce the backgrounds further. In this case, for BP2, both the pair and the single leptoquark production processes contribute; the single leptoquark production contribution for BP2 is denoted by '†'. We see that both BP1 and BP2 can be probed within ∼41 fb⁻¹ and ∼30 fb⁻¹ of integrated luminosity, respectively, at the 14 TeV LHC. However, BP3 remains elusive in this final state.
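The paper does not spell out the significance definition behind S_sig and L_5; the sketch below assumes the common Gaussian estimate S_sig = S/√(S+B), in which case L_5 follows from S_sig growing like √L. The yields are illustrative placeholders, not table entries:

```python
import math

def significance(s: float, b: float) -> float:
    """Gaussian significance estimate, S_sig = S / sqrt(S + B)."""
    return s / math.sqrt(s + b)

def luminosity_for_5sigma(s: float, b: float, lumi: float = 100.0) -> float:
    """S and B both scale linearly with luminosity, so S_sig grows
    like sqrt(L); invert that scaling to find the 5-sigma point."""
    return lumi * (5.0 / significance(s, b)) ** 2

# Illustrative placeholder yields at 100 fb^-1 (not table entries):
s, b = 25.0, 12.0
print(f"S_sig = {significance(s, b):.1f} sigma at 100 fb^-1")
print(f"L_5   = {luminosity_for_5sigma(s, b):.0f} fb^-1")
```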
Leptoquark mass reconstruction and reach at the LHC
Ensuring the final states with excess events, we now look for various invariant mass distributions for the resonance discovery of the leptoquark. In this section, we explore both the third and the second generation decay modes to reconstruct the leptoquark mass. Leptoquarks decay into the third generation via t τ or b ν. In order to reconstruct the leptoquark mass we focus on the t τ mode and require that at least one leg of the leptoquark pair production be tagged. In this process we also require that both the t and the τ be tagged via their hadronic decays, because the leptonic decay of the W± would produce a neutrino as missing energy and spoil the mass reconstruction. Hence, for that leg we reconstruct the W± via its hadronic decay mode with the criterion |m_2j − m_W| ≤ 10 GeV, while the W∓ from the other leg can decay hadronically or leptonically, depending on the additional tagging required for the final states. We also tag the τ coming from the leptoquark decay as a hadronic τ-jet [44]. In this case the only missing energy arises from neutrinos originating from the τ decay, which has much less effect on the leptoquark mass reconstruction. After reconstructing the W± mass, the top mass is reconstructed via the 2jb invariant mass distribution, where the di-jets come from the W± mass window and the b-jet originates from the top decay. Next we take the events from the top mass window, i.e., |m_2jb − m_t| ≤ 10 GeV, for the reconstruction of m_2jbτ. These choices are sufficient to reconstruct the leptoquark mass peak via the m_2jbτ distribution. However, some of the SM backgrounds, especially tt, overshadow the distribution. To reduce this most dominant SM background we invoke additional tagging by requiring the 2b + 2τ + 2j + 1ℓ and 1b + 2τ + 2j + 1ℓ final states, where the extra b-jet, τ-jet, and charged lepton come from the other leg of the leptoquark pair production. The result is depicted in Fig. 5a, b: somewhat smeared mass edges form around 650 GeV for BP1 and BP2, while the SM backgrounds populate only the lower mass end. The situation improves in terms of statistics if we require both W±'s to decay hadronically, giving rise to the final state 2b + 2τ + 4j; the corresponding m_2jbτ mass distribution is shown in Fig. 5c. We can clearly see that the dominant SM backgrounds peak at the lower mass end while the signal mass peaks for BP1 and BP2 are prominent. A suitable mass cut, i.e., a mass window around 650 GeV for BP1 and BP2, will give us an accurate estimate of the discovery reach. In Table 9, we provide the number of events around the leptoquark mass peaks, i.e., |m_2jbτ − m_φ| ≤ 10 GeV, for the three final states. The mass reconstruction significance at 100 fb⁻¹ is highest for the 2b + 2τ + 4j final state, i.e., 5.0σ and 4.0σ for BP1 and BP2, respectively, while for the other two final states we need more luminosity to achieve 5σ significance. A mass scale of ∼1.3 TeV can be probed at an integrated luminosity of 3000 fb⁻¹ via this mass reconstruction.

Table 7 The number of events for the 1b + 3j + 1τ + 1μ final state for the benchmark points and the dominant SM backgrounds at the LHC with 14 TeV center of mass energy and at an integrated luminosity of 100 fb⁻¹. S_sig denotes the signal significance at 100 fb⁻¹ of integrated luminosity and L_5 depicts the integrated luminosity required for a 5σ confidence level for the signal. A cumulative cut of p_T^μ ≥ 100 GeV is applied to reduce the SM backgrounds further. The '†' denotes the contribution from the c g → φ μ production process.

We have seen that the dominant decay modes of the leptoquark are into the third generation, especially t τ. This gives rise to a very rich final state; however, in the presence of a large number of jets, and especially the missing momentum from the neutrino, the peaks are smeared and we often encounter a mass edge of the distribution instead of a proper peak. A much cleaner mass peak reconstruction is possible via the invariant mass of the c-jet and the muon coming from the single leptoquark vertex, because of the smaller number of jets and the absence of potential missing momentum. This can happen in the case of BP2, where such a coupling has been introduced. However, due to the constraints from flavor observables [28-34], we choose the branching fraction of the leptoquark to c μ to be only 11%, which reduces the signal events. We improve the signal statistics by requiring one of the pair-produced leptoquarks to decay into c μ and the other into t τ. To reduce the SM backgrounds, we tag the third generation decay chain by requiring one b-jet and at least one τ-jet. To further enhance the signal, we require the W± from this chain to decay hadronically, giving rise to two jets tagged with their invariant mass within ±10 GeV of the W± mass, i.e., |m_jj − m_W| ≤ 10 GeV. In addition, we insist on having one c-jet with p_T ≥ 200 GeV and one muon with p_T ≥ 100 GeV, and also no spurious dilepton coming from the Z boson, i.e., |m_ℓℓ − m_Z| ≥ 5 GeV.
After having considered the above-mentioned criteria, we plot the invariant mass distribution of the c-jet and the muon in Fig. 6 for BP2 and the dominant SM backgrounds, namely tt, ttZ, and tZW. The detection efficiency of such a c-jet is, however, not very high; for our simulation we take the c-jet tagging efficiency to be 50% [46]. The SM processes contribute as backgrounds mainly through a b-jet faking a c-jet, which we have taken as 25% per jet [46]; there is also the possibility of light jets faking c-jets [46]. Table 10 shows the number of such events around the peak, i.e., |m_μc − m_φ| ≤ 10 GeV, for the BP2 signal and for the SM backgrounds. It is evident that an integrated luminosity of ∼100 fb⁻¹ at the LHC with 14 TeV center of mass energy can probe the peak in this mode at the 3σ level. Naively, one can also look for the final state consisting of 1c + 2μ, requiring the second muon to have p_T ≥ 100 GeV, i.e., expecting it to come from the decay of the other leptoquark to the c μ state. For BP2, as the branching fraction of the leptoquark to c μ is only 11%, the requirement that both pair-produced leptoquarks decay into c μ further reduces the effective branching fraction. To avoid further reduction from the c-jet tagging efficiency [46], we tag only one of the two c-jets as a c-jet. A cumulative requirement of 2 ≤ n_j ≤ 4 and missing E_T ≤ 30 GeV is also imposed to reduce the SM di-muon backgrounds coming from gauge boson decays, as can be seen in the second final state of Table 10. Though this reduces the contribution from leptoquark pair production, it enhances the single leptoquark contribution via c g → φ μ. The signal reach for BP2 in this case is 1.5σ at 100 fb⁻¹ of integrated luminosity at the LHC with 14 TeV center of mass energy. If we proceed to tag the second c-jet, the signal clearly reduces further, but the final state comprising 2c + 2μ with missing E_T ≤ 30 GeV does not have any noticeable backgrounds, as can be read from the third final state in Table 10. However, such a choice of final state yields only a reach of ∼1.4σ at 100 fb⁻¹ of integrated luminosity at the 14 TeV LHC.

Table 9 The number of events around the leptoquark mass peak, i.e., |m_2jbτ − m_φ| ≤ 10 GeV, for the benchmark points and the dominant SM backgrounds at the LHC with center of mass energy of 14 TeV and at an integrated luminosity of 100 fb⁻¹ for three final states: (a) 2b + 2τ + 2j + 1ℓ, (b) 1b + 2τ + 2j + 1ℓ, and (c) 2b + 2τ + 4j. The '†' contributions are from the cg → φμ process and the '∗' contributions are from leptoquark pair production. The criteria |m_2j − m_W| ≤ 10 GeV and |m_2jb − m_t| ≤ 10 GeV are also required in order to obtain the leptoquark mass peak.
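A toy sketch of the m_2jbτ chain used above (di-jet W window, then the top window, then adding the hadronic τ-jet); the four-vector tuple format and the selection helper are illustrative assumptions, not the actual detector-level analysis code:

```python
from itertools import combinations
from math import sqrt

M_W, M_T = 80.4, 173.0  # GeV

def inv_mass(parts):
    """Invariant mass of a list of (E, px, py, pz) four-vectors (GeV)."""
    E, px, py, pz = (sum(p[i] for p in parts) for i in range(4))
    return sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def m_2jbtau(jets, bjets, taujets, window=10.0):
    """Toy version of the chain in the text:
    (1) di-jet pair with |m_2j - m_W| <= 10 GeV,
    (2) add a b-jet and require |m_2jb - m_t| <= 10 GeV,
    (3) add the hadronic tau-jet -> leptoquark candidate mass."""
    for j1, j2 in combinations(jets, 2):
        if abs(inv_mass([j1, j2]) - M_W) > window:
            continue
        for b in bjets:
            if abs(inv_mass([j1, j2, b]) - M_T) > window:
                continue
            for tau in taujets:
                return inv_mass([j1, j2, b, tau])
    return None  # no candidate passing the windows
```

In the real analysis the inputs would be detector-level jets; here any (E, px, py, pz) tuples work.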
It is apparent from the discussions in the preceding sections that the final state defined in Table 5, which probes the third generation decay mode, has the highest reach. Figure 7a, b presents the reach for the scalar leptoquark mass as a function of integrated luminosity at the 14 TeV LHC, corresponding to the final states given in Tables 5 and 6, respectively. It can be seen that, for BP1, where the leptoquark branching fraction to t τ is 61%, a leptoquark mass of 1.6 TeV can be probed at the LHC with 3000 fb⁻¹ of integrated luminosity. If such a branching ratio were 100%, the reach would be enhanced to 1.8 TeV.
Similarly, we can look into the final state defined in Table 6, where for BP2 both single and pair production of the leptoquark contribute, and the final state comprises both the second and the third generation decay modes of the leptoquark. Here we define β_1 = B(φ → t τ) = 0.50 and β_2 = B(φ → c μ) = 0.1. We find a leptoquark mass scale reach of ∼920 GeV at an integrated luminosity of 3000 fb⁻¹. However, if we take β_1 = β_2 = 0.5, the reach increases to 1.2 TeV. These calculations are done with the renormalization/factorization scale μ = √ŝ, which gives a conservative estimate; a scale variation would enhance such a reach by 10-20%.

Table 10 The number of events for the benchmark points and the dominant SM backgrounds at the LHC with center of mass energy of 14 TeV and at an integrated luminosity of 100 fb⁻¹. Here the c-jet has p_T ≥ 200 GeV and the μ has p_T ≥ 100 GeV. The '†' contributions are from the cg → φμ process and the '∗' contributions are from leptoquark pair production.
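The gain from raising β_2 can be seen from branching-fraction combinatorics alone. The sketch below assumes the pair-production signal yield factorizes as σ_pair × L × 2β_1β_2 × ε, where the factor 2 counts the two assignments of the decays to the two leptoquarks and the single-production contribution is ignored; the cross-section and efficiency values are placeholders:

```python
def mixed_decay_yield(sigma_pair_fb: float, lumi_fb: float,
                      beta1: float, beta2: float, eff: float) -> float:
    """Expected pair-production events with one leptoquark decaying
    to t tau (beta1) and the other to c mu (beta2); the factor 2
    counts the two possible assignments of the decays."""
    return sigma_pair_fb * lumi_fb * 2.0 * beta1 * beta2 * eff

# Placeholder cross-section (10 fb) and efficiency (5%):
flavor_limited = mixed_decay_yield(10.0, 3000.0, 0.50, 0.10, 0.05)
symmetric      = mixed_decay_yield(10.0, 3000.0, 0.50, 0.50, 0.05)
print(symmetric / flavor_limited)  # -> 5.0, independent of sigma and eff
```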
Single leptoquark production and discovery reach
It is well known that the leptoquark pair-production cross-section is almost independent of the Yukawa-type couplings Y_L,R^ii except for very high values [47] and is essentially determined by the leptoquark mass and the strong coupling at a given scale. Due to the strong interaction, the pair-production cross-section for a leptoquark is larger than that of a weakly charged scalar in the same mass range. Unlike a weakly charged scalar, there exists an additional mechanism that can produce a single leptoquark in association with a lepton of a given flavor via the Yukawa-type couplings Y_L,R^ii: quark fusion with a gluon can give rise to final states consisting of either φℓ or φν.
In Fig. 8 we show the production cross-section of such a single leptoquark, in fb, as a function of the leptoquark mass at the 14 TeV LHC. The cross-sections are calculated using CalcHEP [12], where we choose CTEQ6L [40] as the PDF; the variations for three different scale choices, i.e., μ = √ŝ, m_φ/2, and 2m_φ, are shown. The results for three different production cross-sections are shown: q g → φ ℓ + X in green, b g → φ ν in red, and c g → φ μ in blue. A k-factor of 1.5 has been taken into account [48]. The leptoquark will decay into combinations of a quark and a lepton. However, among the chosen benchmark points only the couplings of BP2 allow single leptoquark production via c g → φ μ, while both BP2 and BP3 contribute via the b g → φ ν production channel. In the case of BP2, the leptoquark still dominantly decays to t τ with a branching fraction of 50% and to c μ with only 10%. From a collider viewpoint, we also show the estimate of the inclusive single leptoquark production cross-section obtained by considering universal Yukawa-type couplings in all generations, namely Y_L,R^ii = 0.5 for i ∈ {1, 2, 3}. In Table 11 we look for the final states coming from both decay modes. The first final state deals with 1b + 1τ arising from the decay of the leptoquark into t τ. We also tag the charged lepton (e, μ) coming from the W± decay, along with a muon, supposedly originating from the leptoquark decay, with p_T ≥ 100 GeV. A requirement of p_T ≥ 100 GeV on the first p_T-ordered jet, which mostly comes from the leptoquark decay, is also made to diminish the SM backgrounds further. For the first final state, the BP2 signal significance reaches 3.9σ at the LHC with 14 TeV center of mass energy and 100 fb⁻¹ of integrated luminosity. If we instead tag both muons, coming from the leptoquark decays via c μ, with p_T ≥ 100 GeV and the first p_T-ordered jet with p_T ≥ 200 GeV, the corresponding signal is given in the second row as 1 ≤ n_j ≤ 2 + …, where we do not tag any c-jet. However, because the branching ratio to c μ for BP2 is only 10%, the signal significance reaches only 1.2σ at 100 fb⁻¹ of integrated luminosity. If we further tag one of the two c-jets, which comes from a leptoquark decay, as a c-jet, the signal significance for BP2 reaches only 0.6σ at 100 fb⁻¹ of integrated luminosity. The c-jet tagging efficiency [46] also significantly affects the event numbers.
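A minimal sketch of how the quoted tagging rates enter the expected yields after one required c-tag, combining the 50% c-tag efficiency with the 25% per-jet b → c fake rate [46]; the pre-tag yields are hypothetical and light-jet fakes are neglected:

```python
# Quoted tagging rates from the text (ref. [46]):
EFF_C_TAG   = 0.50  # probability a true c-jet is tagged as a c-jet
FAKE_B_AS_C = 0.25  # probability (per jet) a b-jet fakes a c-jet

def yield_after_one_ctag(n_true_c: float, n_b: float) -> float:
    """Events surviving one required c-tag: real c-jets plus the
    dominant b -> c fake contribution (light-jet fakes neglected)."""
    return n_true_c * EFF_C_TAG + n_b * FAKE_B_AS_C

# Hypothetical pre-tag yields at 100 fb^-1:
print(yield_after_one_ctag(n_true_c=40.0, n_b=12.0))  # -> 23.0
```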
An excess of events over the SM prediction provides a hint of some BSM physics; however, the conclusive discovery of a new particle can only happen via the reconstruction of its mass through possible invariant mass distributions. Figure 9 shows the reach of the leptoquark mass reconstructed via c μ for the final states given in Table 10 (panel (a)) and Table 11 (panel (b)). The requirement of such final states involves decay modes in both the second and the third generations. As in the previous reach plots (Fig. 7), here β_1 = B(φ → t τ) and β_2 = B(φ → c μ). The choice β_1 = β_2 = 0.5 results in a leptoquark mass reach of ∼1.2 TeV (Fig. 9a) and 1 TeV (Fig. 9b) at the 14 TeV LHC with 3000 fb⁻¹ of integrated luminosity. It should be noted that, though the final reach is almost the same in the two cases, Fig. 9a, which corresponds to the final state given in Table 10, mostly depends on leptoquark pair production dominated by gluon and quark fusion and is thus independent of Y_L,R^ii. On the other hand, Fig. 9b, which corresponds to the final state given in Table 11, depends on both single and pair production of the leptoquark. As a consequence, this mode can be a good probe of the leptoquark Yukawa couplings Y_L,R^ii. A comparative study of both reconstructions would certainly provide a better understanding of the model parameters.
Summary
In this article we study the phenomenology of a scalar leptoquark via its dominant decay into third generation leptons and quarks, and also via combined decays into second and third generation channels. The leptoquark considered here has a hypercharge of −1/3 units. Choosing suitable benchmark points, we list in Tables 4 and 5 the final states, with well-defined cumulative cuts, arising from leptoquark pair production at the 14 TeV LHC with 100 fb⁻¹ of integrated luminosity. These searches show that b- and τ-jet tagging, along with the associated invariant mass veto cuts, helps to reduce the SM backgrounds immensely.
Next we discuss the phenomenology when one of the leptoquarks decays into the third generation and the other decays into the second generation. Due to the constraints from flavor data, we conservatively allow, in BP2, the leptoquark to decay to c μ with a branching fraction of only 10%. Nevertheless, from a collider perspective one can tune such a branching fraction while looking into a certain final state and obtain independent limits. In Tables 6 and 7 we have analyzed the final states where both decay modes are reflected. For Table 6 the reach is comparable for BP1 and BP2, where only for BP2 does single leptoquark production contribute. In Table 7 the significance drops due to the lower branching fraction of the W± into leptons. Our study shows that a scalar leptoquark with hypercharge −1/3 can be probed up to ∼2 TeV at the LHC with 14 TeV center of mass energy and 3000 fb⁻¹ of integrated luminosity.

Table 11 The number of events for the benchmark points and the dominant SM backgrounds at the LHC with 14 TeV center of mass energy and at an integrated luminosity of 100 fb⁻¹. Here the c-jet has p_T ≥ 200 GeV and the μ has p_T ≥ 100 GeV. The '∗' contributions are from leptoquark pair production and the '†' contributions are from the cg → φμ process. Here VVV and VV are contributions from the SM gauge bosons, where V = W±, Z.
The leptoquark mass has been reconstructed via its decays into the third and the second generation. For the decay into third generation states, we reconstruct m_2jbτ, and for BP1 it has a reach of ∼1.3 TeV that can be probed with 3000 fb⁻¹ of data. Next we reconstructed the leptoquark mass via the c μ invariant mass. However, we require an environment with additional tagging of the b-jet and τ-jet coming from the third generation decays; this choice makes the final state almost background free and also increases the signal strength due to the higher branching fraction into the third generation.
We also study single leptoquark production via b-gluon and c-gluon fusion in Fig. 8. The production cross-section improves significantly in the case of inclusive single leptoquark production when equal Yukawa-type couplings are considered for all generations. We highlight the reach of the leptoquark mass reconstruction from single production in Fig. 9. For coupling choices as in BP1 and BP2, we find that the reach is ∼1.2 TeV at the 14 TeV LHC with 3000 fb⁻¹ of integrated luminosity. As the limits obtained in this work are well within the current and future reach of the LHC, dedicated searches for the proposed final states will be important to confirm or falsify the existence of such a BSM particle.
"year": 2018,
"sha1": "75dbe26e4c9f44c971a7c7bc89dd66c0b4e0b84d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-018-5959-x.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "75dbe26e4c9f44c971a7c7bc89dd66c0b4e0b84d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
Comprehensive Succinylome Profiling Reveals the Pivotal Role of Lysine Succinylation in Energy Metabolism and Quorum Sensing of Staphylococcus epidermidis
Background: Lysine succinylation is a newly identified posttranslational modification (PTM), which exists widely from prokaryotes to eukaryotes and participates in various cellular processes, especially metabolic processes. Staphylococcus epidermidis is a commensal bacterium of the skin that is attracting increasing attention as a pathogen, especially in immunocompromised patients and neonates, by attaching to medical devices and forming biofilms. However, the significance of lysine succinylation in S. epidermidis proteins has not been investigated.

Objectives: The purpose of this study was to investigate the physiological and pathological processes of S. epidermidis at the level of PTM. Moreover, by analyzing previous succinylome datasets in various organisms, we aimed to provide an in-depth understanding of lysine succinylation.

Methods: Using antibody affinity enrichment followed by LC-MS/MS analysis, we examined the succinylome of S. epidermidis (ATCC 12228). Then, bioinformatics analysis was performed, including Gene Ontology (GO) annotation, KEGG enrichment, motif characterization, secondary structure analysis, protein–protein interaction analysis, and BLAST analysis.

Results: A total of 1557 succinylated lysine sites in 649 proteins were identified in S. epidermidis (ATCC 12228). Among these succinylated proteins, GO annotation showed that proteins related to metabolic processes accounted for the most. KEGG pathway characterization indicated that proteins associated with the glycolysis/gluconeogenesis and citrate cycle (TCA cycle) pathways were more likely to be succinylated. Moreover, 13 conserved motifs were identified. The specific motif KsuD was conserved in model prokaryotes and eukaryotes, and succinylated proteins with this motif were highly enriched in the glycolysis/gluconeogenesis pathway. One succinylation site (K144) was identified in S-ribosylhomocysteine lyase, a key enzyme in the quorum sensing (QS) system, indicating the regulatory role succinylation may play in bacterial processes. Furthermore, 15 succinyltransferases and 18 desuccinylases (erasers) were predicted in S. epidermidis by BLAST analysis.

Conclusion: We performed the first comprehensive profile of succinylation in S. epidermidis and illustrated the significant role succinylation may play in energy metabolism, the QS system, and other bacterial behaviors. This study may serve as a fundamental basis for investigating the underlying mechanisms of colonization, virulence, and infection of S. epidermidis, as well as provide new insight into the regulatory effects succinylation may have on metabolic processes (data are available via ProteomeXchange with identifier PXD022866).
INTRODUCTION
Posttranslational modification (PTM) is a key mechanism for effectively enlarging the function and diversity of proteins. Among the 20 amino acids that are the fundamental components of proteins, PTMs mainly occur on lysine. Because of the nature of lysine, changes in its charge and structure will affect the composition, activity, and interactions of proteins (Azevedo and Saiardi, 2016; Stram and Payne, 2016; Gao J. et al., 2019). More than 620 types of PTMs have been discovered (Xu et al., 2018), including methylation (Unnikrishnan et al., 2019), acetylation (Barnes et al., 2019), phosphorylation (Pawson and Scott, 2005), ubiquitination (Ronau et al., 2016), crotonylation, malonylation (Peng et al., 2011), and succinylation (Zhang et al., 2011).
Among all these PTMs, succinylation is a new type of PTM, identified by Zhang et al. (2011) using high-throughput HPLC-MS/MS and antibody-based affinity enrichment. Succinylation transfers a succinyl group from succinyl-CoA to a lysine residue of a protein, introducing a mass shift of 100.0186 Da and a −2 charge change at the same time, finally turning the +1 lysine into −1 succinyllysine. Compared with acetylation or methylation, which add groups of 42 Da or 14 Da, respectively, succinylation brings more remarkable changes in structure and charge, and it is believed to play a more critical role in physiological and pathological regulation (Sabari et al., 2017). With the rapid development of MS technology, lysine succinylation has been revealed to exist widely in prokaryotic and eukaryotic organisms, such as Escherichia coli, Vibrio parahaemolyticus, Saccharomyces cerevisiae, Trichophyton rubrum, Aspergillus flavus, Pseudomonas aeruginosa, Toxoplasma gondii, Dendrobium officinale, Mus musculus, and Homo sapiens (Weinert et al., 2013; Li et al., 2014; Pan et al., 2015; Feng et al., 2017; Xu et al., 2017; Gaviard et al., 2018; Ren et al., 2018). It is highly conserved, reversible, and dynamic, and plays a vital role in metabolism in multiple species (Alleyn et al., 2018). In bacteria including E. coli, V. parahaemolyticus, Mycobacterium tuberculosis, Bacillus subtilis, Corynebacterium glutamicum, Aeromonas hydrophila, P. aeruginosa, and Porphyromonas gingivalis, lysine succinylation has emerged as a regulator of growth, virulence, antibiotic resistance, and, especially, metabolic processes. Moreover, a plethora of enzymes that play significant roles in multiple pathways are subject to succinylation and are conserved among different species, for example, isocitrate dehydrogenase (IDH), carbamoyl phosphate synthase 1 (CPS1), 3-hydroxy-3-methylglutaryl-CoA synthase 2 (HMGCS2), the molecular chaperone DnaK, and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) (Weinert et al., 2013; Kosono et al., 2015; Pan et al., 2015; Yang et al., 2015; Mizuno et al., 2016; Gaviard et al., 2018; Wu et al., 2019; Yao et al., 2019). However, the in-depth mechanism and physiological significance of lysine succinylation remain elusive. More research on succinylation should be carried out to elucidate its underlying mechanisms and to provide new insight into possible targets for treating diseases, especially those caused by microorganisms.
Staphylococcus epidermidis is a coagulase-negative staphylococcus, producing little or no coagulase. Unlike coagulase-positive Staphylococcus aureus, which can cause multiple host infections, S. epidermidis usually behaves as a commensal bacterium of the skin and mucous membranes without progressive pathogenicity (Heilmann et al., 2019). However, in recent years, increasing evidence shows that it is an opportunistic bacterium that can cause infections of the skin, soft tissues, endocardium, and other tissues in patients who are elderly, newborn, or immunocompromised. It is the most frequent cause of nosocomial infection, dwelling on surgically implanted medical devices (Grice and Segre, 2011; Christensen and Brüggemann, 2014; Argemi et al., 2019). Several cases of patients infected with S. epidermidis have been reported, and their number is gradually increasing, placing a great burden on the public health system (Schoenfelder et al., 2010). This bacterium can form biofilms as a virulence factor to infect the host and escape immune elimination. Besides, it is widely present in the skin and mucous membranes; therefore, avoiding infection with this bacterium during surgery or device insertion is dramatically complicated (Paharik and Horswill, 2016).
Beyond its pathogenicity, it plays various roles in skin homeostasis by interacting with host cells or other skin bacteria. With regard to wound healing, a study revealed that N-formyl methionine peptides secreted by S. epidermidis are recognized by dendritic cells and presented to CD8+ T cells, subsequently promoting the accumulation of CD8+ T cells and benefitting wound healing (Linehan et al., 2018). For skin tumors, S. epidermidis can partially suppress the growth of melanoma without systemic toxicity by producing 6-N-hydroxyaminopurine, an inhibitor of DNA polymerase activity (Nakatsuji et al., 2018). Additionally, Wang et al. (2017b) reported that S. epidermidis generates butyric acid, which promotes the differentiation of adipose-derived stem cells into adipocytes and increases the accumulation of cytoplasmic lipid, leading to an increased dermal layer. S. epidermidis can also release succinic acid that acts on TLR2 of host keratinocytes, promoting their production of antimicrobial peptides (AMPs) and thereby competing with C. acnes (Claudel et al., 2019). Therefore, the abundance of S. epidermidis may be a regulatory factor for the colonization of C. acnes in the skin, and regulating the balance between S. epidermidis and C. acnes may be a new therapy for treating acne. Furthermore, Iwase et al. revealed that S. epidermidis secretes a serine protease, Esp, which can hinder the biofilm formation and nasal colonization of Staphylococcus aureus, a severe infectious pathogen causing pneumonia, endocarditis, and septicemia; the novel, still unknown mechanism by which Esp inhibits S. aureus may be a therapeutic target to prevent S. aureus infection (Iwase et al., 2010). All the above-mentioned studies show that metabolic processes and metabolites may act as essential regulators in the physiological and pathological processes of S. epidermidis. Still, the interaction between S. epidermidis and other skin bacteria, as well as its function in skin health, remains unclear and deserves more investigation.
Until now, the underlying mechanisms transforming commensal S. epidermidis into an infectious pathogen have not been investigated at the level of PTM. In this study, we conducted the first profile of the lysine succinylome in S. epidermidis and found that 1557 lysine sites in 649 proteins were subject to succinylation, highly enriched in significant cellular processes including glycolysis/gluconeogenesis, the citrate cycle (TCA cycle), pyruvate metabolism, and binding activity. The results point to an essential role for succinylation in S. epidermidis. This study may promote our understanding of the bacterial behaviors of S. epidermidis at the level of lysine succinylation and provide new insight for the development of effective drugs to treat infections caused by S. epidermidis.
Bacterial Culture
The strain used in this study is S. epidermidis (ATCC 12228), purchased from the American Type Culture Collection (ATCC). After overnight culture, the S. epidermidis (ATCC 12228) cells were inoculated into beef extract-sodium chloride-peptone medium (BSCP) at a ratio of 1:200, with the inoculum adjusted so that the OD600 of a 90 µL aliquot of the S. epidermidis suspension was 0.027. Then, 50 ml of bacterial suspension at this concentration was mixed with 50 ml of BSCP medium and gently shaken at 37 °C for 12 h. The suspension was centrifuged at 4000g and 4 °C for 3 min, the supernatant was discarded, and the pellet was washed with PBS three times.
Protein Extraction, Labeling, Fractionation, and Affinity Enrichment

Samples were ground in liquid nitrogen and sonicated on ice in lysis buffer. The remaining debris was discarded after centrifugation at 20,000g and 4 °C for 10 min. The protein was then precipitated with cold 15% TCA for 2 h at −20 °C. After centrifugation at 4 °C for 10 min, the supernatant was removed. The precipitate was washed with precooled acetone three times, dissolved in buffer (8 M urea, 100 mM TEAB, pH 8.0), and quantified with a 2-D Quant kit. For digestion, the protein solution was reduced with 10 mM DTT for 1 h at 37 °C and alkylated with 25 mM IAA for 45 min at room temperature in the dark, then diluted by adding 100 mM TEAB until the urea concentration was less than 2 M. Finally, trypsin was added at a 1:50 trypsin-to-protein mass ratio for the first overnight digestion and at 1:100 for a second 4-h digestion.
After trypsin digestion, the peptides were desalted and vacuum-dried. Peptides were resuspended in 0.5 M TEAB, and subsequent procedures were conducted according to the manufacturer's protocol for the 6-plex TMT kit. The samples were fractionated by high-pH reversed-phase HPLC using an Agilent 300Extend C18 column (5 µm particles, 4.6 mm ID, 250 mm length), and the peptides were then dried by vacuum centrifugation. To enrich Ksu peptides, tryptic peptides dissolved in NETN buffer (100 mM NaCl, 1 mM EDTA, 50 mM Tris-HCl, 0.5% NP-40, pH 8.0) were incubated with prewashed antibody beads (PTM Biolabs) at 4 °C overnight. The beads were washed, and the bound peptides were eluted. The fractions were combined, vacuum-dried, and cleaned with C18 ZipTips (Millipore) for further LC-MS/MS analysis.
LC-MS/MS Analysis
The peptides were dissolved in 0.1% FA and loaded onto a reversed-phase analytical column (Acclaim PepMap RSLC, Thermo Fisher Scientific). The gradient comprised an increase from 7 to 25% solvent B (0.1% FA in 98% ACN) over 24 min, 25 to 40% over 8 min, climbing to 80% in 5 min, and holding at 80% for the last 3 min, all at a constant flow rate of 400 nl/min on an EASY-nLC 1000 UPLC system. The peptides were subjected to an NSI source followed by tandem mass spectrometry (MS/MS) in a Q Exactive Plus (Thermo Fisher Scientific) coupled online to the UPLC. The applied electrospray voltage was 2.0 kV. For full scans, the m/z scan range was 350 to 1800, and intact peptides were detected in the Orbitrap at a resolution of 70,000. Peptides were selected for MS/MS with the NCE set to 28 and 31; ion fragments were detected in the Orbitrap at a resolution of 17,500. A data-dependent procedure alternated between one MS scan and 20 MS/MS scans with a 15.0-s dynamic exclusion. The automatic gain control (AGC) was set at 5E4, and the fixed first mass was set at 100 m/z.
Database Search
The resulting MS/MS data were processed using MaxQuant with its integrated Andromeda search engine (Cox and Mann, 2008). Tandem mass spectra were searched against the S. epidermidis ATCC 12228 database concatenated with a reverse decoy database. Trypsin/P was specified as the cleavage enzyme, allowing up to 4 missed cleavages and 5 modifications per peptide. The mass tolerance for precursor ions was set to 20 ppm in the first search and 5 ppm in the main search, and the mass tolerance for fragment ions was set to 0.02 Da. Carbamidomethyl on Cys was specified as a fixed modification; oxidation on Met, succinylation on Lys, and acetylation on the protein N-terminus were specified as variable modifications. False discovery rate (FDR) thresholds for protein and peptide were specified at 1%, and the minimum score for modified peptides was set to >40. The minimum peptide length was set at 7. For the quantification method, TMT 6-plex was selected. The site localization probability was set as >0.75.
Bioinformatics Analysis
The UniProt-GOA, KEGG, and InterPro databases were used to annotate the GO, pathway, and domain information of proteins, respectively (Kanehisa et al., 2019; Mitchell et al., 2019; UniProt Consortium, 2019). The web-server software CELLO (v 2.5) was used to predict subcellular localization based on protein sequences. A two-tailed Fisher's exact test was performed to assess the functional enrichment of succinylated proteins.
Secondary structure analysis was performed using NetSurfP (v1.0). Only predictions with a minimum probability of 0.5 for one type of secondary structure (coil, α-helix, β-strand) were retained for analysis. The mean secondary structure probabilities of the modified lysine residues were compared with those of a control dataset containing all the lysine residues of all the succinylated proteins identified in this study, with p-values calculated by the Wilcoxon test. The protein–protein interaction (PPI) network was constructed with the STRING database (version 11.0) and visualized with Cytoscape software (version 3.8.0) (Doncheva et al., 2019; Szklarczyk et al., 2019).
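A minimal sketch of the Wilcoxon comparison described above, using the rank-sum variant from SciPy; the probability arrays are randomly generated placeholders standing in for the NetSurfP per-residue predictions:

```python
import numpy as np
from scipy.stats import ranksums  # Wilcoxon rank-sum test

rng = np.random.default_rng(0)
# Placeholder per-residue alpha-helix probabilities; the real inputs
# are the NetSurfP predictions for modified vs. all lysine residues.
p_helix_ksu = rng.beta(2, 3, size=1557)   # succinylated lysines
p_helix_all = rng.beta(2, 3, size=20000)  # control: all lysines

stat, pval = ranksums(p_helix_ksu, p_helix_all)
print(f"rank-sum statistic = {stat:.2f}, p = {pval:.3g}")
```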
To predict Ksu motifs, motif-x was used to analyze peptide sequences consisting of 10 amino acids upstream and downstream of each succinylated lysine site (Chou and Schwartz, 2011). For the motif-x analysis parameters, the significance threshold was set to 1e-6, minimum occurrences were set to 20, and the background database was set to all protein sequences of the species. All statistical tests and calculations were performed using R (version 3.6.1) (Chan, 2018).
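A sketch of how the ±10-residue windows fed to motif-x can be extracted; the terminal padding character and the 1-based position convention are assumptions, and the toy sequence is hypothetical:

```python
def ksu_windows(sequence: str, ksu_positions, flank: int = 10, pad: str = "_"):
    """Extract the 21-mer (±10 residues) around each succinylated
    lysine; positions are 1-based, and termini are padded so every
    window has the same length (a common motif-x input convention)."""
    windows = []
    for pos in ksu_positions:
        i = pos - 1
        assert sequence[i] == "K", f"position {pos} is not a lysine"
        left = sequence[max(0, i - flank):i].rjust(flank, pad)
        right = sequence[i + 1:i + 1 + flank].ljust(flank, pad)
        windows.append(left + "K" + right)
    return windows

# Hypothetical toy protein with a succinylated K at position 6:
print(ksu_windows("MADEEKLPPGW", [6]))  # ['_____MADEEKLPPGW_____']
```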
Data Availability
The raw succinylome datasets generated for this study have been deposited in the PRIDE Archive (https://www.ebi.ac.uk/pride) (Perez-Riverol et al., 2019) with identifier PXD022866.
Identification of 1557 Succinylated Sites in 649 Proteins of Staphylococcus epidermidis
We identified a total of 1557 succinylation sites in 649 proteins, with an FDR lower than 1%. The distribution of mass errors was centered near zero, with most errors <0.02 (see Supplementary Table 1). Additionally, the length of most succinylated peptides was in the range of 7 to 20 residues. Taken together, both the mass accuracy of the MS data and the properties of the tryptic peptides meet the established standards.
Among the 649 proteins, the most extensively succinylated was the chaperone protein DnaK, with 23 modified sites. Foldase protein PrsA, the 2-oxoglutarate dehydrogenase E1 component (OGDH), and elongation factor Ts possessed 20, 17, and 17 sites, respectively. DnaK, which is associated with heat shock processes, has been reported in XDR M. tuberculosis with 25 succinylated sites (Xie et al., 2015). It is speculated that succinylation may have a significant effect on DnaK activity in response to environmental stimuli.
Conservation Analysis of Succinylated Proteins in S. epidermidis Revealed Coexisting Similarity and Discrepancy Among Multiple Organisms
A plethora of reports have revealed the high abundance of succinylated proteins from prokaryotes to eukaryotes (Weinert et al., 2013). Investigating the orthologous proteins in various organisms may elucidate the evolutionary conservation of succinylation. To uncover more features of succinylation events, we compared succinylprotein homologs against 8 species with identified succinylomes: V. parahaemolyticus, E. coli, Trypanosoma brucei, S. cerevisiae, Magnaporthe oryzae, M. musculus, H. sapiens, and Oryza sativa (Weinert et al., 2013; Pan et al., 2015; Zhen et al., 2016; Wang et al., 2019; Zhang et al., 2020). A total of 297 succinylated proteins had orthologs in at least one of these species (see Supplementary Table 2). All the above suggested that succinylated proteins are conserved from prokaryotes to eukaryotes. Furthermore, orthologous proteins were more abundant in the prokaryotes than in the eukaryotes, demonstrating that homologous succinylation events are more numerous between two prokaryotes than between prokaryotes and eukaryotes, consistent with evolutionary expectations. We then classified the orthologous proteins into four groups according to the number of organisms in which they were homologous. Succinylproteins with orthologs in 6-8 species were grouped into the "highly conserved" category, which accounted for 7.1% (46/649) of all succinylated proteins in S. epidermidis. About 14.2% (92/649) of proteins were homologous in 3-5 species and belonged to the "conserved" category. The "poorly conserved" category included 24.5% (159/649) of total succinylation events, in which homologous succinylproteins were found in 1-2 species. Finally, succinylated proteins with no orthologs in the other 8 organisms were clustered into the "novel" category, accounting for 54.2% (352/649) of all succinylated proteins in S. epidermidis (see Figure 2C). These results elucidated that, despite the conservation of succinylated proteins among species, unique succinylation events still exist in distinct organisms, providing new insight for investigating specific functions of succinylproteins in particular organisms.
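The four conservation bins map directly onto the per-protein ortholog counts; a minimal sketch (the counts assigned to the named proteins are hypothetical):

```python
def conservation_category(n_species: int) -> str:
    """Bin a succinylprotein by how many of the 8 reference
    succinylomes contain an ortholog, as defined in the text."""
    if n_species >= 6:
        return "highly conserved"  # orthologs in 6-8 species
    if n_species >= 3:
        return "conserved"         # 3-5 species
    if n_species >= 1:
        return "poorly conserved"  # 1-2 species
    return "novel"                 # no ortholog found

# Hypothetical ortholog counts for a handful of proteins:
for protein, n in {"DnaK": 8, "PrsA": 4, "SE_0967": 1, "SE_2160": 0}.items():
    print(protein, "->", conservation_category(n))
```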
Functional Annotation and Subcellular Distribution Analysis Showed the Broad Existence of Succinylated Proteins in S. epidermidis
To figure out the role succinylation plays in cellular processes, we performed Gene Ontology (GO) functional classification of succinylated proteins in S. epidermidis from the perspectives of biological process (BP), molecular function (MF), and cellular component (CC) (see Figure 3A and Supplementary Table 3). For biological process, the four largest groups of proteins were all involved in metabolic processes: the cellular metabolic process (16%), organic substance metabolic process (15%), primary metabolic process (14%), and nitrogen compound metabolic process (13%). This result indicates that succinylation may play a significant role in metabolic regulation in S. epidermidis. Additionally, 3% and 2% of succinylated proteins were involved in the response to stress and the cellular response to stimulus, respectively, suggesting a pivotal mechanism for the survival and adaptation of S. epidermidis. With regard to molecular function, proteins involved in organic cyclic compound binding (13%), heterocyclic compound binding (13%), ion binding (8%), and the structural constituent of the ribosome (7%) were more likely to be succinylated. Moreover, the cellular component analysis showed that succinylated proteins were preferentially distributed in the cytoplasm (27%), cytosol (19%), cell periphery (13%), and membrane (12%), consistent with the subcellular distribution analysis, in which cytoplasmic (62%), unknown (25%), and cytoplasmic membrane (10%) proteins accounted for the most among all identified proteins (see Figure 3B). Interestingly, in the cellular component category, nearly 5% of succinylated proteins were related to the external encapsulating structure, which may be an important clue to the virulence of S. epidermidis. Together, the GO annotation and subcellular distribution results show that succinylation exists broadly in nearly all cellular components and plays a significant role in multiple processes in S. epidermidis.
Enrichment Analysis of Succinylated Proteins and Peptides
To explore the relationship between succinylated proteins and cellular function, we performed enrichment analysis based on GO terms, KEGG pathways, and Pfam domains (see Figure 4 and Supplementary Table 4). In terms of the cellular component in the GO enrichment, proteins related to the cytosolic ribosome, ribosome, ribosomal subunit, ribonucleoprotein complex, and intracellular non-membrane-bounded organelle were significantly enriched, indicating a high involvement of succinylation in ribosomal events. Furthermore, in light of the biological process, succinylated proteins were enriched in processes including ribonucleoprotein complex assembly, cellular protein-containing complex assembly, ribosome assembly, posttranscriptional regulation of gene expression, and regulation of translation. On the basis of molecular function enrichment, proteins associated with the structural constituent of the ribosome, RNA binding, nucleic acid binding, and rRNA binding were highly subject to succinylation. Moreover, KEGG enrichment indicated that succinylated proteins were more likely to be enriched in pathways related to the ribosome, glycolysis/gluconeogenesis, citrate cycle (TCA cycle), pyruvate metabolism, aminoacyl-tRNA biosynthesis, and glycerolipid metabolism, consistent with the GO enrichment results and with previously reported KEGG pathway analyses of succinylation in other species such as E. coli, V. parahaemolyticus, and H. sapiens, confirming the important role of succinylated proteins in metabolic processes. We then used Pfam to analyze the domain features of proteins subject to succinylation; the biotin-requiring enzyme, S1 RNA-binding domain, AAA domain (Cdc48 subfamily), and anticodon-binding domain of tRNA were significantly enriched.
Characterization of 13 Conserved Motifs of Succinylated Proteins and Investigation of the Relationship Between Motifs and Functional Processes
To investigate the nature of succinylated sites in S. epidermidis, we used motif-x to characterize the flanking protein sequences (10 amino acids upstream and downstream of each succinylated lysine site) based on the 1557 succinylated sites identified in this study. Consequently, 13 conserved motifs were identified, with detailed information shown below (see Figures 5A,B and Supplementary Table 5). The five most abundant motifs are E**KsuK, Ksu*E, Ksu*D, KsuR, and KsuP (Ksu indicates the succinylated lysine site and * represents a random amino acid residue). At the +1 position around the succinylated lysine, K, R, D, Y, and P were comparably preferred, while D, E, and R were most frequent at the −1 position. These results demonstrate that succinylation has a higher tendency to occur around polar (basic or acidic) residues than non-polar ones. At the +2 position, the polar acidic residues D and E were most frequent, consistent with the above findings. K was more abundant at the −7 and +1 positions in the identified proteins of S. epidermidis. Among the amino acid residues flanking lysine succinylation sites, R was the most frequent, showing a high preference for the −4, +1, +5, +6, and +7 positions around succinyllysine.
To explore the relationship between motifs and cellular functions, we extracted succinylated proteins carrying these five motifs, clustered them into five groups, and performed GO, KEGG, and domain enrichment analysis of the five clusters (see Figures 6A-E and Supplementary Tables 6-8). The results showed that proteins with specific motifs were associated with corresponding functions, pathways, and domains. For example, proteins enriched in the citrate cycle (TCA cycle), carbon fixation pathways in prokaryotes, and butanoate metabolism frequently carried the KsuR motif. A large portion of proteins with the Ksu*D motif were located in the cell wall, cell division site, and external encapsulating structure according to the cellular component enrichment analysis.

On the basis of molecular function, they were highly associated with binding events such as protein-containing complex binding, glycosylation-dependent protein binding, modification-dependent protein binding, and ribonucleoprotein complex binding. Proteins with domains such as the periplasmic binding protein and the transketolase pyrimidine-binding domains were also enriched. These findings together elucidate the consistency among motif, domain, protein localization, and molecular function. We speculate that there is an underlying relationship between motifs and protein function, which may provide novel insight for the future prediction of succinylated sites and their roles in cellular processes.
Furthermore, we compared the 13 motifs identified in S. epidermidis with previously reported succinylomes in E. coli, V. parahaemolyticus, T. brucei, S. cerevisiae, M. oryzae, M. musculus, H. sapiens, and O. sativa (see Supplementary Table 9; Weinert et al., 2013; Pan et al., 2015; Zhen et al., 2016; Wang et al., 2019; Zhang et al., 2020). The pattern of K at the +1 position around succinyllysine was identified in E. coli, HeLa cells, and yeast. Compared with other motifs reported in previous succinylome studies in various organisms, the Ksu*D motif identified in this investigation was conserved in E. coli, HeLa cells, and yeast, while KsuD was found in E. coli, yeast, HeLa cells, and the mouse liver. These results indicate that the nature of succinylated sites is conserved from prokaryotes to eukaryotes. The highly conserved motifs may be unknown targets for the "readers" and "erasers" of succinylation events, providing new insight for investigating the underlying mechanism of succinylation in cellular processes.
According to other reported results, several motifs are conserved in prokaryotes and eukaryotes, raising the question of whether these motifs correlate with specific functions of the proteins carrying them in different species.
We performed GO, domain, and KEGG enrichment analysis of proteins with the KsuD motif selected from E. coli, S. cerevisiae, M. musculus, H. sapiens, and S. epidermidis. Based on the cellular component enrichment analysis, proteins located in the cytosol, ribosome, and ribonucleoprotein complex were enriched in four species (E. coli, H. sapiens, M. musculus, and S. cerevisiae). Moreover, cytoplasmic proteins of E. coli, H. sapiens, and M. musculus had a higher tendency to carry the KsuD motif (see Figure 8). According to the molecular function enrichment, succinylated proteins associated with binding activities, including adenyl nucleotide binding, ATP binding, small molecule binding, nucleoside phosphate binding, and carbohydrate derivative binding, were highly enriched in S. cerevisiae, H. sapiens, and M. musculus (see Figure 9). Additionally, succinylated proteins with the KsuD motif showed higher consistency of molecular function among the eukaryotes (S. cerevisiae, H. sapiens, and M. musculus) than among the prokaryotes (E. coli and S. epidermidis), consistent with evolutionary relationships. Based on the KEGG enrichment analysis, proteins in the glycolysis/gluconeogenesis pathway were highly enriched in all five organisms, indicating the important role of succinylated proteins in energy metabolism as well as the functional conservation of succinylated proteins with the KsuD motif (see Figure 10).
Succinylation May Affect the Secondary Structure and Surface Properties of Modified Proteins
We then analyzed the secondary structure of all succinylated proteins in S. epidermidis to determine the relationship between protein structure and succinylation frequency (see Figure 11A and Supplementary Table 10). The results revealed that succinylation events were more abundant in α-helices (p < 2.2e-16) and coils (p = 2.5e-13) than in β-strands (p = 0.28). The percentage of unmodified lysines located in α-helices among all unmodified peptides was higher than the percentage of succinylated lysines located in α-helices among all succinyllysine residues; the converse held for the coil structure. Taken together, these results suggest that succinylation might change the secondary structure of modified substrates.
Additionally, we investigated the absolute surface accessibility of succinylated lysines (see Figure 11B and Supplementary Table 10). The results demonstrate that succinylated lysine sites were more frequently located on the protein surface than unmodified lysines, suggesting that succinylation tends to occur on the protein surface and that succinylation events may alter the surface properties of the modified proteins.
Succinylation Is a Significant Regulator of Energy Metabolism in S. epidermidis
The TCA cycle and glycolysis/gluconeogenesis are key energy-providing processes that are essential for survival. A plethora of studies have revealed the preference of lysine succinylation for these metabolic processes (Kosono et al., 2015; Xie et al., 2015; Yang et al., 2015; Feng et al., 2017). In this study, the KEGG pathway enrichment analysis showed that succinylated proteins associated with the TCA cycle and glycolysis/gluconeogenesis pathways were highly enriched, so we examined the regulatory role of lysine succinylation in S. epidermidis in terms of metabolic enzymes. Citrate synthase converts oxaloacetate and acetyl-coenzyme A into citrate and coenzyme A, the first and rate-limiting step of the Krebs cycle (Li et al., 2016). Five lysine sites of citrate synthase, including K64, K117, K262, K357, and K365, were succinylated, indicating an important role of lysine succinylation in enzymatic regulation.

FIGURE 10 | KEGG pathway enrichment analysis of succinylated proteins with the KsuD motif among five organisms (Escherichia coli, Homo sapiens, Mus musculus, Saccharomyces cerevisiae, and Staphylococcus epidermidis).

Another pivotal rate-limiting enzyme, isocitrate dehydrogenase (IDH), which catalyzes the oxidation and decarboxylation of isocitrate to produce α-ketoglutarate, carbon dioxide, and NADH + H+/NADPH + H+ (Tommasini-Ghelfi et al., 2019), was found to have six succinylated lysine sites: K58, K132, K163, K189, K225, and K262. Meanwhile, in drug-resistant M. tuberculosis, a total of 21 succinylated sites were identified on IDH, among which K262 was near the critical catalytic site (K257) (Xie et al., 2015). Furthermore, in E. coli, site mutation analysis demonstrated that succinylation of the K100 and K242 residues of IDH affects the enzyme activity (Zhang et al., 2011). The three enzymatic components of the α-ketoglutarate dehydrogenase (OGDH) complex, namely E1 (2-oxoglutarate dehydrogenase), E2 (dihydrolipoamide succinyltransferase), and E3 (dihydrolipoamide dehydrogenase), which together promote the conversion of α-ketoglutarate into succinyl-CoA (Lu et al., 2019), were succinylated at 8, 4, and 7 lysine sites, respectively. Furthermore, five succinylated peptides were identified in the succinyl-CoA synthetases sucC and sucD, which catalyze the hydrolysis of succinyl-CoA and the production of succinate, the only substrate-level phosphorylation step in the citrate cycle (Huang and Fraser, 2016). Succinyl-CoA is an important donor of the succinyl group for lysine succinylation in proteins; in turn, succinylation affects the enzymes that regulate succinyl-CoA production and consumption, suggesting a possible cyclic mechanism for regulating succinylation levels in organisms. In addition, proteins associated with the TCA cycle were widely succinylated, which may serve as evidence that succinylation plays a pivotal role in regulating energy metabolism in S. epidermidis (see Figure 12). Ten out of ten glycolytic enzymes associated with converting glucose to pyruvate were subject to succinylation: glucokinase, glucose-6-phosphate isomerase, ATP-dependent 6-phosphofructokinase (PFK), fructose-bisphosphate aldolase (FBA), triosephosphate isomerase, glyceraldehyde-3-phosphate dehydrogenase (GAPDH), phosphoglycerate kinase (PGK), 2,3-bisphosphoglycerate-independent phosphoglycerate mutase (PGPG), enolase (ENO), and pyruvate kinase. Phosphofructokinase catalyzes the conversion of fructose-6-phosphate to fructose-1,6-bisphosphate by transferring a phosphoryl group from ATP, the core regulatory step of the glycolytic process. Two succinylated lysine sites (K77, K212) were identified in PFK, indicating the regulatory role succinylation may play in the glycolysis pathway. Pyruvate kinase, which catalyzes the last step of glycolysis, is one of the three key regulatory enzymes in the glycolysis pathway; it transfers a phosphoryl group from PEP to ADP, producing pyruvate and ATP (Schormann et al., 2019). In this study, we found six succinylated lysine sites in pyruvate kinase: K139, K156, K173, K341, K390, and K563. K563 is located in the PEP-utilizing enzyme mobile domain at the N-terminus of pyruvate kinase, suggesting that succinylation of K563 may be a regulatory site of enzyme activity (see Figure 12).
Protein-Protein Interaction Networks of Succinylated Proteins
To determine the relationships among the identified succinylated proteins, we performed PPI network analysis using the STRING database and Cytoscape software (Doncheva et al., 2019; Szklarczyk et al., 2019). We chose 80 proteins related to the glycolysis/gluconeogenesis, TCA cycle, pyruvate metabolism, and aminoacyl-tRNA biosynthesis pathways, the four most enriched groups in the KEGG enrichment analysis (see Figure 13 and Supplementary Table 11). Fourteen succinylated proteins had a degree over 40; eight of these were associated with the glycolysis/gluconeogenesis pathway, indicating that succinylated proteins in this pathway may play an essential role in biological processes. Four proteins had a degree over 50, i.e., SE_0967, SE_1371, SE_2160, and formate acetyltransferase (pflB), among which pflB possessed the most succinylated sites (13 sites), implying that this protein may be a pivotal target for understanding the deeper mechanisms of succinylation.
DISCUSSION
Using MS technology together with the antibody affinity purification method, we performed the first proteomic analysis of succinylation profiles in S. epidermidis. We revealed a total of 2845 succinylated sites corresponding to 913 proteins in S. epidermidis, among which 1557 sites in 649 succinylated proteins were identified after three replicate experiments. The differing numbers and functional enrichments of succinylated substrates among bacterial species may be explained by differences in the inherent succinylation levels of proteins and sites in different bacteria, as well as by divergence in antibody affinity and MS accuracy. Our investigation may enlarge the scope of understanding of lysine succinylation in microorganisms, especially S. epidermidis, which is both a commensal inhabitant of human skin and mucous membranes and an opportunistic pathogen of nosocomial infections. It may provide new insights into the regulation of bacterial survival, invasiveness, and pathogenicity by lysine succinylation.
Lysine succinylation plays an essential role in metabolic processes according to previous investigations (Colak et al., 2013; Yang et al., 2015). In this study, multiple results illustrated the intensive connections between lysine succinylation and energy metabolism, especially the glycolysis process. For example, the KEGG enrichment analysis showed that proteins associated with the glycolysis/gluconeogenesis pathway had a higher tendency toward succinylation; all ten key enzymes in this pathway were succinylated, especially pyruvate kinase, in which six modified sites were identified. The PPI results indicated that eight succinylated proteins associated with glycolysis possessed a degree over 40, more than in any other pathway. Together, these results demonstrate the significant effect of lysine succinylation on the glycolysis process in S. epidermidis. Further studies should investigate whether lysine succinylation can alter the structure, activity, and interactions of the related enzymes.
Quorum sensing (QS) is a widespread mechanism in bacteria that mediates cell-to-cell communication by producing, releasing, accumulating, detecting, and responding to extracellular signaling molecules called autoinducers (AIs) (Rutherford and Bassler, 2012). QS plays a significant role in controlling bacterial behaviors including virulence, biofilm formation, gene expression, and adaptation to complex environments (Mukherjee and Bassler, 2019). It has been reported that agr, a known QS system in S. epidermidis, inhibits biofilm formation and alters the structure of biofilms (Le and Otto, 2015; Williams et al., 2019) by regulating biofilm factors including AtlE and delta-toxin (Hello et al., 2010; Reiter et al., 2014). Moreover, the LuxS/AI-2-dependent QS system has also been shown to be functional in S. epidermidis (Vendeville et al., 2005): LuxS responds to the AI-2 signal and represses virulence and biofilm formation of S. epidermidis by mediating transcription of the ica genes and production of PIA. LuxS (S-ribosylhomocysteine lyase) is a key enzyme of the QS system that plays a pivotal role in bacterial behaviors including virulence, pathogenesis, biofilm formation, bioluminescence, and antibiotic resistance (Lewis et al., 2001; Xu et al., 2006). Recently, at the PTM level, LuxS was reported to be phosphorylated at a tyrosine site in V. harveyi (De Keersmaecker et al., 2006). Furthermore, according to the complete analysis of the succinylome in A. hydrophila, two lysine succinylation sites (K23 and K30) were identified in LuxS, and site-specific mutagenesis showed that succinylation at these sites upregulated enzymatic activity and influenced the communication of A. hydrophila with other bacteria (Yao et al., 2019). In our study, one succinylated site (K144) was found in the LuxS of S. epidermidis, close to the iron-binding site (C123) and near the end of the protein sequence. Further studies should focus on the role this succinylated site of LuxS may play in the QS system and in bacterial processes; it may be a significant target for understanding succinylation and other PTMs.
Succinylation is a newly recognized PTM whose key regulatory enzymes, the succinyltransferases and desuccinylases that play opposing roles, have not yet been comprehensively investigated. KAT2A, KGDHC, and CPT1A are recently identified proteins with succinyltransferase activity in eukaryotes (Gibson et al., 2015; Wang et al., 2017a; Kurmi et al., 2018; Tong et al., 2020); however, no succinyltransferase has yet been revealed in prokaryotes. Lysine acetylation is one of the best-investigated lysine acylations, and its lysine acetyltransferases (KATs) and lysine deacetylases (KDACs) have been found to possess expanded activities toward other acylations, including Kpr, Kbu, Kcr, Kbhb, Ksucc, and Kglu (Sabari et al., 2017). For example, the KAT p300, a well-known transcription co-activator, has been reported to recognize multiple acyl-CoAs such as propionyl-CoA (Chen et al., 2007), butyryl-CoA (Chen et al., 2007), crotonyl-CoA (Sabari et al., 2015), succinyl-CoA (Sabari et al., 2017), glutaryl-CoA (Tan et al., 2014), and β-hydroxybutyryl-CoA (Kaczmarska et al., 2017) and to catalyze the corresponding acylation processes. Furthermore, structural investigations showed that GCN5 recognizes the common CoA portion of short-chain acyl-CoAs and possesses comparable affinity for acetyl-CoA, propionyl-CoA, and butyryl-CoA (Ringel and Wolberger, 2016; Kollenstart et al., 2019). Mammalian KATs have been categorized into two groups, type A and type B, located mainly in the nucleus and cytoplasm, respectively (Li et al., 2020). Type A contains five families: the GNAT, p300/CBP, MYST, basal TF, and NCoA families (Menzies et al., 2016). Given the wide range of acyltransferase activities and the structural similarity of these acyl-CoAs, we conducted a BLAST analysis based on well-known lysine acetyltransferases and revealed 15 proteins in S. epidermidis that are homologous with these acetyltransferase families, i.e., the GNAT family (Q92830, Q92831, and Q5SQI0), the p300/CBP family (Q92793 and Q09472), the MYST family (Q92993, Q92794, Q8WYB5, O95251, and Q9H7Z6), the NCoA family (Q15788, Q15596, and Q9Y6Q9), and type B (O14929, Q9H7X0). More in-depth studies need to be performed to elucidate whether these proteins have succinyltransferase activity or whether other enzymes exist that can catalyze succinylation in S. epidermidis; this may lay the foundation for further understanding of how the dynamic balance of succinylation is regulated in bacteria (see Supplementary Table 12).
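As a rough illustration of this homology search step, a remote BLASTP query via Biopython might look as follows; the query file name and E-value cutoff are illustrative assumptions, and whether the original analysis used this particular interface is not stated.

```python
from Bio.Blast import NCBIWWW, NCBIXML

# Query one candidate S. epidermidis protein (hypothetical FASTA file)
# against the NCBI nr database over the network.
with open("candidate_acetyltransferase.fasta") as handle:
    query = handle.read()

result_handle = NCBIWWW.qblast("blastp", "nr", query)
record = NCBIXML.read(result_handle)

# Report hits whose best HSP has a stringent expect value.
for alignment in record.alignments:
    best_hsp = alignment.hsps[0]
    if best_hsp.expect < 1e-10:
        print(alignment.title, best_hsp.expect)
```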
A total of 18 types of KDACs, divided into 4 classes, have been recognized in mammals. Among them, the Sir2-like proteins (SIRT1-7) play a significant role in multiple deacylation processes (Ali et al., 2018). SIRT5 and SIRT7 are two commonly known deacetylases that have also been identified as desuccinylases, located mainly in the mitochondria and nucleus, respectively (Osborne et al., 2016). In prokaryotes, the Sir2-like protein CobB was identified as the first desuccinylase in E. coli (Colak et al., 2013); meanwhile, ScCobB2 in S. coelicolor, a homolog of SIRT5 and E. coli CobB, also possesses desuccinylase activity. In S. epidermidis, we found 18 proteins homologous with mammalian HDACs that may possess deacetylase activity. Among them, Q9NXA8 and Q9NRC8 are homologous with SIRT5 and SIRT7, respectively (see Supplementary Table 12). They may well desuccinylate modified substrates and regulate bacterial processes in S. epidermidis, and it would be intriguing to explore the possible desuccinylase activity of these proteins and their function in the bacterial behaviors of this organism.
For other PTMs such as phosphorylation, a complete regulatory system has been identified, comprising "writers" (which transfer the acyl groups to targets), "erasers" (which remove the acyl groups from modified substrates), and "readers" (which recognize the modified peptides and initiate downstream reactions) (Biswas and Rao, 2018). For lysine succinylation, writers (succinyltransferases) and erasers (desuccinylases) have been reported in succession; the readers, however, have scarcely been reported until now. This may be due to the low specificity of succinylation motifs and the scarcity of succinylation datasets. In this study, we tried to identify a motif shared by most organisms and to investigate whether it shows functional conservation according to KEGG and GO enrichment analysis. The results revealed that proteins bearing the same motif (KsuD) in different organisms, including E. coli, S. epidermidis, S. cerevisiae, H. sapiens, and M. musculus, were highly enriched in the glycolysis/gluconeogenesis and pyruvate metabolism pathways based on KEGG enrichment analysis. Accordingly, it is possible to hypothesize that the KsuD motif could be a clue to identifying the "reader" that recognizes succinylation events and initiates further reactions. Nevertheless, one motif across five organisms is too little to establish a common phenomenon or principle; further studies should include more species and motifs and investigate stable patterns of succinylation events.
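As an illustration of what scanning for the KsuD motif amounts to, the short sketch below counts succinylated lysines immediately followed by aspartate; the input representation (a sequence string plus a list of 1-based modified positions) is an assumption made for demonstration purposes only.

```python
def count_ksud(sequence: str, succinylated_positions: list[int]) -> int:
    """Count succinylated K residues immediately followed by D."""
    hits = 0
    for pos in succinylated_positions:  # positions are 1-based
        if (sequence[pos - 1] == "K"
                and pos < len(sequence)
                and sequence[pos] == "D"):
            hits += 1
    return hits

# Example: K at position 4 is succinylated and followed by D -> one match.
print(count_ksud("MAGKDLLSE", [4]))  # prints 1
```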
The present study has several limitations. First, experimental verification such as Co-IP and site-directed mutagenesis should be carried out in further studies. Second, whether various PTMs such as acetylation, malonylation, and crotonylation at adjacent sites have different effects should be investigated in depth in future work. Third, the enzymatic activities of the predicted succinyltransferases and desuccinylases in S. epidermidis remain to be verified. We hope this research serves as a starting point for further exploring the physiological and pathological mechanisms of S. epidermidis at the PTM level.
CONCLUSION
In this study, we elucidated 649 proteins with 1557 succinylated lysine sites in S. epidermidis using antibody affinity purification and MS technology, providing the first comprehensive succinylation profile of this organism. GO annotation, KEGG enrichment, and PPI network analysis showed strong connections between lysine succinylation and metabolic processes. We identified 13 conserved motifs and explored functional and pattern relationships from the perspective of one motif (KsuD), which provides new insight for investigating features and regulatory factors, i.e., "readers", of succinylation; 15 candidate succinyltransferases and 18 candidate desuccinylases were predicted that could be pivotal regulators of succinylation events in this organism. Proteins associated with survival, metabolism, virulence, and cell-to-cell communication were succinylated in S. epidermidis, indicating the potential role succinylation may play in the bacterial behaviors of this species. This study lays the foundation for a deeper understanding of succinylation events and provides a promising reference for developing therapeutic targets against infections caused by S. epidermidis.
DATA AVAILABILITY STATEMENT
The proteomic data generated for this study (ID: 632367) have been deposited in the PRIDE Archive (https://www.ebi.ac.uk/pride) with identifier PXD022866.
"year": 2020,
"sha1": "515c2ff9de4f1e0b064fe80589a7b70913634f7e",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2020.632367/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "515c2ff9de4f1e0b064fe80589a7b70913634f7e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A review on green synthesis of zinc oxide nanoparticles – An eco-friendly approach
Nanotechnology deals with the production and usage of materials with nanoscale dimensions. Nanoscale dimensions give nanoparticles a large surface-area-to-volume ratio and thus very specific properties. Zinc oxide nanoparticles (ZnO NPs) have featured in recent studies due to their wide band gap and high exciton binding energy, and they have potential applications including antibacterial, antifungal, anti-diabetic, anti-inflammatory, wound-healing, antioxidant, and optical properties. Because the physical and chemical production of these NPs involves large amounts of toxic chemicals and extreme conditions, green methods employing plants, fungi, bacteria, and algae have been adopted. This review is a comprehensive study of the synthesis and characterization methods used for the green synthesis of ZnO NPs using different biological sources.
Introduction
Nanomaterials are particles having nanoscale dimensions, and nanoparticles are very small particles with enhanced catalytic reactivity, thermal conductivity, non-linear optical performance, and chemical stability owing to their large surface-area-to-volume ratio [1]. NPs have started to be considered nano-antibiotics because of their antimicrobial activities [2]. Nanoparticles have been integrated into the industrial, health, food, feed, space, chemical, and consumer cosmetics sectors, which calls for a green and environment-friendly approach to their synthesis [3].
Nanoparticle synthesis methods
Two approaches have been suggested for nanoparticle synthesis: the bottom-up and the top-down approach. The top-down approach involves milling or attrition of large macroscopic particles; large-scale patterns are synthesized initially and then reduced to the nanoscale through plastic deformation. This technique cannot be employed for large-scale production of nanoparticles because it is a costly and slow process [4]. Interferometric lithography (IL) is the most common technique employing the top-down approach for nanomaterial synthesis [5]. The bottom-up approach, in contrast, involves the synthesis of nanoparticles from already miniaturized atomic components through self-assembly, including formation through physical and chemical means; it is a comparatively cheap approach [6]. It is based on kinetic and thermodynamic equilibrium considerations, and the kinetic approach includes MBE (molecular beam epitaxy).
Different methods used in nanoparticle synthesis
In the physical method, physical forces are involved in the attraction of nanoscale particles and the formation of large, stable, well-defined nanostructures; examples include nanoparticle synthesis through the colloidal dispersion method, as well as basic techniques such as vapor condensation, amorphous crystallization, and physical fragmentation [7-10]. Overall, nanoparticle synthesis is mediated by physical, chemical, and green methods [11-13]. The physical method involves the use of costly equipment, high temperature and pressure [14], and large space for setting up the machines. The chemical method involves the use of toxic chemicals, which can prove hazardous for the environment and the person handling them. The literature states that some of the toxic chemicals used in physical and chemical methods may remain in the NPs formed, which may prove hazardous in medical applications [15]. Thus, an environment-friendly and cost-effective method for nanoparticle synthesis was needed. Physical processes involve the use of high vacuum, as in pulsed laser deposition, MBE (molecular beam epitaxy), and thermal evaporation [16], while chemical methods include chemical microemulsion, wet chemical synthesis, spray pyrolysis, electrodeposition [16], chemical and direct precipitation, and microwave-assisted combustion [17]. Additional capping and stabilizing agents are needed in physical and chemical methods [18-21].
Green approach
Biosynthesis of nanoparticles is an approach to synthesizing nanoparticles with biomedical applications using microorganisms and plants. It is an environment-friendly, cost-effective, biocompatible, safe, green approach [22]. Green synthesis includes synthesis through plants, bacteria, fungi, algae, etc., and allows large-scale production of ZnO NPs free of additional impurities [23]. NPs synthesized via the biomimetic approach show greater catalytic activity and limit the use of expensive and toxic chemicals.
These natural strains and plant extracts secrete phytochemicals that act as both reducing and capping (stabilizing) agents. For example, ZnO nanoflowers of uniform size synthesized from cell-soluble proteins of B. licheniformis showed enhanced photocatalytic activity and photostability, clearly shown by 83% degradation of methylene blue (MB) pollutant dye in the presence of the ZnO nanoflowers, given that self-degradation of MB was null (observed through the control value); through three repeated cycles of the experiment at different time intervals, degradation remained at 74%, which clearly demonstrated the photostability of the ZnO nanoflowers produced [24]. Oblate spherical and hexagonal ZnO NPs ranging in size from 1.2 to 6.8 nm have been synthesized using the fungal strain Aspergillus fumigatus TFR-8; these NPs remained stable for 90 days, as confirmed by measuring their hydrodynamic diameter with a particle size analyzer, which detected agglomeration of the NPs only after 90 days, suggesting high stability of the NPs formed using the fungal strain [25]. ZnO NPs of 36 nm synthesized from the seaweed Sargassum myriocystum (a macroalga) obtained from the Gulf of Mannar showed no visible changes even after 6 months, clearly demonstrating the stability of the NPs formed. FTIR studies confirmed that fucoidan-soluble pigments secreted by the alga were responsible for the reduction and stabilization of the NPs.
Plant parts like roots, leaves, stems, seeds, and fruits have also been utilized for NP synthesis, as their extracts are rich in phytochemicals that act as both reducing and stabilizing agents [26-32]. ZnO NPs synthesized from Trifolium pratense flower extract showed similar UV-Vis spectrophotometer peaks 24, 48, 72, 96, and 120 hours after NP formation, demonstrating the stability of the NPs formed [33]. Similarly, fruit extract of Rosa canina acted as both reducing and stabilizing agent for the synthesized ZnO NPs, as confirmed by FTIR studies; bio-capping is accomplished by the carboxylic and phenolic acids present in the fruit extract. Spherical ZnO NPs were formed with Aloe vera leaf extract, where the free carboxylic and amino groups of the plant extract acted as both reducing and capping agents.
Zinc oxide nanoparticles
ZnO is an n-type semiconducting metal oxide. ZnO NPs have drawn interest in the past two to three years due to their wide range of applicability in electronics, optics, and biomedical systems [34-40]. Several types of inorganic metal oxides have been synthesized and studied recently, such as TiO2, CuO, and ZnO. Of these, ZnO NPs are of maximum interest because they are inexpensive to produce, safe, and easy to prepare [41]. The US FDA has listed ZnO as a GRAS (generally recognized as safe) metal oxide [42]. ZnO NPs exhibit remarkable semiconducting properties because of their wide band gap (3.37 eV) and high exciton binding energy (60 meV), along with high catalytic activity, optical and UV-filtering properties, and anti-inflammatory and wound-healing activity [43-49]. Due to their UV-filtering properties, they have been used extensively in cosmetics such as sunscreen lotions [50]. They have a wide range of biomedical applications, including drug delivery and anti-cancer, anti-diabetic, antibacterial, antifungal, and agricultural properties [51-55]. Although ZnO is used for targeted drug delivery, it still has the limitation of cytotoxicity, which is yet to be resolved [56]. ZnO NPs have a very strong antibacterial effect against gram-negative and gram-positive bacteria even at very low concentrations, and studies have confirmed that green-synthesized NPs show a stronger antibacterial effect than chemically synthesized ZnO NPs [57-59]. They have also been employed in rubber manufacturing and paint, for removing sulfur and arsenic from water, for protein adsorption, and in dental applications. ZnO NPs have piezoelectric and pyroelectric properties [60,61]. They are used for the disposal of aquatic weeds resistant to all types of eradication techniques, whether physical, chemical, or mechanical [62]. ZnO NPs have been reported in different morphologies, such as nanoflakes, nanoflowers, nanobelts, nanorods, and nanowires [63-65].
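The reported band gap directly accounts for the UV-filtering behavior: a photon whose energy equals 3.37 eV lies in the near-UV, as the following small worked example (ours, not taken from the cited studies) shows.

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # one electronvolt in joules

def absorption_edge_nm(band_gap_ev: float) -> float:
    """Wavelength (nm) of a photon whose energy equals the band gap."""
    return H * C / (band_gap_ev * EV) * 1e9

print(absorption_edge_nm(3.37))  # about 368 nm, in the near-UV region
```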
Literature study
Due to the increasing popularity of green methods, various studies have been conducted to synthesize ZnO NPs using different sources such as bacteria, fungi, algae, and plants (Fig. 1). A set of tables summarizes the valuable work done in this field.
Green synthesis of ZnO NPs using plant extract
Plant parts like leaves, stems, roots, fruits, and seeds have been used for ZnO NP synthesis because of the exclusive phytochemicals they produce. Using natural extracts of plant parts is a very eco-friendly, cheap process that does not involve the use of any intermediate base groups. It takes very little time, does not involve costly equipment or precursors, and gives a highly pure, quantity-enriched product free of impurities [66]. Plants are the most preferred source for NP synthesis because they enable large-scale production of stable NPs of varied shapes and sizes [67]. Bio-reduction involves reducing metal ions or metal oxides to zero-valent metal NPs with the help of phytochemicals such as polysaccharides, polyphenolic compounds, vitamins, amino acids, alkaloids, and terpenoids secreted by the plant [66,67].
The most commonly applied method for the simple preparation of ZnO NPs from leaves or flowers proceeds as follows. The plant part is washed thoroughly in running tap water and sterilized using double-distilled water (some protocols use Tween 20 for sterilization). The plant part is then dried at room temperature, weighed, and crushed using a mortar and pestle. Milli-Q H2O is added to the plant part according to the desired concentration, and the mixture is boiled under continuous stirring using a magnetic stirrer [66-70]. The solution is filtered using Whatman filter paper, and the clear filtrate is used as the plant extract (sample). A volume of the extract is mixed with 0.5 mM hydrated zinc nitrate, zinc oxide, or zinc sulfate, and the mixture is boiled at the desired temperature and time to achieve efficient mixing [69,70]. Some studies perform optimization at this point using different temperatures, pH values, extract concentrations, and times. The incubation period results in a change of the mixture's color to yellow, which is a visual confirmation of the synthesized NPs [69,70]. UV-Vis spectrophotometry is then employed to confirm the synthesis of the NPs, followed by centrifugation of the mixture and drying of the pellet in a hot-air oven to obtain the crystalline NPs [71]. The synthesized nanoparticles are further characterized using X-ray diffraction (XRD), energy-dispersive X-ray analysis (EDAX), Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), transmission electron microscopy (TEM), field emission scanning electron microscopy (FE-SEM), atomic force microscopy (AFM), thermogravimetric-differential thermal analysis (TG-DTA), photoluminescence analysis (PL), X-ray photoelectron spectroscopy (XPS), Raman spectroscopy, attenuated total reflection (ATR), UV-visible diffuse reflectance spectroscopy (UV-DRS), and dynamic light scattering (DLS) [70-72].
An experiment conducted by Jafarirad et al. compared the NPs obtained through two different techniques, conventional heating (CH) and microwave irradiation (MI); the results clearly demonstrated that MI requires less time for NP synthesis, attributable to the high heating rate that MI provides and thus the faster reaction rate [73].
Plants belonging to the Lamiaceae family have been extensively studied, such as Anisochilus carnosus [74], Plectranthus amboinicus [75], and Vitex negundo [76], which showed the formation of NPs of varied sizes and shapes (spherical, quasi-spherical, hexagonal, and rod-shaped, with agglomerates). The results clearly indicated that the size of the synthesized NPs decreases with increasing concentration of the plant extract [74-76]. The size ranges observed through different techniques such as FE-SEM, TEM, and XRD were similar [75,76], whereas SEM and EDAX gave results similar to each other but different from those of XRD. NPs synthesized from Vitex negundo leaf and flower showed the same size of 38.17 nm, as confirmed by XRD analysis via the Debye-Scherrer equation [76]. Leaves of Azadirachta indica, of the Meliaceae family, have been most commonly used for the synthesis of ZnO NPs [77,78]; all experiments showed NPs in a similar size range, confirmed by XRD and TEM analysis, with spherical, hexagonal disc, and nanobud shapes. These studies elucidated the involvement of alcohol, amide, amine, alkane, carboxylic acid, and carbonate moieties in the formation of the NPs, as confirmed through FTIR studies. Fresh leaf extract and leaf peel of Aloe vera, belonging to the Liliaceae family, have also been used [79,80]. The synthesized NPs differed in size (those from the peel were larger, as confirmed by SEM and TEM analysis) but were similar in shape (hexagonal and spherical). NPs synthesized from extracts of Agathosma betulina, Moringa oleifera, Pongamia pinnata, Plectranthus amboinicus, Nephelium lappaceum, and Calotropis gigantea showed agglomerate formation. Table 1 is a comprehensive study of the different plants used for the synthesis of ZnO NPs to date.
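For readers unfamiliar with the Debye-Scherrer estimate mentioned above, the sketch below evaluates D = Kλ/(β cos θ); the peak width and position are hypothetical values chosen only to land near the reported ~38 nm scale, not data from [76].

```python
import math

def scherrer_size_nm(wavelength_nm: float, fwhm_deg: float,
                     two_theta_deg: float, shape_factor: float = 0.9) -> float:
    """Crystallite size (nm) from an XRD peak's FWHM and 2-theta position."""
    beta = math.radians(fwhm_deg)            # peak broadening in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return shape_factor * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha radiation (0.15406 nm) and a hypothetical ZnO (101) peak.
print(scherrer_size_nm(0.15406, 0.22, 36.3))  # roughly 38 nm
```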
Green synthesis of ZnO NPs using bacteria
NP synthesis using bacteria is a green approach, but it has several disadvantages: screening of microbes is a time-consuming process, careful monitoring of the culture broth and the entire process is required to avoid contamination, there is a lack of control over NP size and shape, and the cost of the media used to grow bacteria is also very high.
ZnO nanoflowers were synthesized by B. licheniformis through an eco-friendly approach and showed photocatalytic activity, degrading methylene blue dye. These nanoflowers showed enhanced photocatalytic activity compared to existing photocatalytic substances, and it has been presumed that the larger number of oxygen vacancies in the synthesized nanoparticles imparts this enhanced photocatalytic activity. Photocatalysis generates active species by the absorption of light, which degrade organic waste material, and thus can serve as an effective bioremediation tool. The nanoflowers synthesized using B. licheniformis were 40 nm in width and 400 nm in height [83]. Rhodococcus is able to survive in adverse conditions and can metabolize hydrophobic compounds, and thus can help in biodegradation [84]. Spherical NPs were synthesized using Rhodococcus pyridinivorans with zinc sulfate as a substrate, showing a size range of 100-130 nm confirmed through FE-SEM and XRD analysis. FTIR analysis demonstrated the presence of a phosphorus compound, a secondary sulfonamide, a monosubstituted alkyne, a β-lactone, an amine salt, an amide II stretching band, the enol of a 1,3-diketone, a hydroxy aryl ketone, an amide I bending band, an alkane, and a mononuclear benzene band [85]. ZnO was used as a substrate to synthesize ZnO NPs with A. hydrophila; the synthesized NPs showed a size range of 42-64 nm, confirmed through AFM and XRD analysis, with varied shapes such as oval and spherical [86]. Singh et al. compared the antioxidant activity of bare ZnO NPs and Pseudomonas aeruginosa rhamnolipid-stabilized NPs; rhamnolipid was found to stabilize the ZnO NPs because it hardly forms micelle aggregates on the surface of carboxymethyl cellulose [87], and it acts as a better capping agent because of its long carbon chain [88]. This study showed the formation of spherical NPs with sizes of 27-81 nm, confirmed through TEM, XRD, and DLS analysis [88]. Table 2 illustrates the characteristics of ZnO NPs synthesized using bacterial strains.
Green synthesis of ZnO NPs using microalgae and macroalgae
Algae are photosynthetic organisms ranging from unicellular forms (e.g., Chlorella) to multicellular ones (e.g., brown algae). Algae lack basic plant structures like roots and leaves. Marine algae are categorized based on their pigments: Rhodophyta with red pigment, Phaeophyta with brown pigment, and chlorophytes with green pigment. Algae have been used extensively for the synthesis of Au and Ag nanoparticles, but their application to ZnO nanoparticle synthesis is limited and reported in only a few papers [81]. Microalgae draw special attention because of their ability to degrade toxic metals and convert them to less toxic forms [89]. Sargassum muticum and S. myriocystum, belonging to the Sargassaceae family, have been used for ZnO NP synthesis. For Sargassum muticum, NP size was studied using XRD and FE-SEM, which showed similar ranges and a hexagonal wurtzite structure with the presence of hydroxyl groups and sulfated polysaccharides. For S. myriocystum, size was compared using DLS and AFM, which showed different size ranges, with hydroxyl and carbonyl stretching present in NPs that varied greatly in shape [82]. Table 3 presents some of the micro- and macroalgae that have been employed for the synthesis of ZnO NPs.
Green synthesis of ZnO NPs using fungus
Extracellular synthesis of NPs from fungi is highly useful because of its large-scale production, convenient downstream processing, and economic viability [90]. Fungal strains are chosen over bacteria because of their better tolerance and metal bioaccumulation properties [92]. ZnO NPs were synthesized from mycelia of Aspergillus fumigatus. DLS analysis revealed a size range of 1.2 to 6.8 nm with an average size of 3.8 nm, and AFM confirmed an average NP height of 8.56 nm. Particle size remained below 100 nm for 90 days, after which the NPs formed agglomerates of average size 100 nm, suggesting that the NPs were stable for 90 days [93]. NPs synthesized from Aspergillus terreus, belonging to the Trichocomaceae family, had a size range of 54.8-82.6 nm confirmed by SEM and an average size of 29 nm calculated using the Debye-Scherrer equation from the XRD results. FTIR studies confirmed the presence of a primary alcohol, a primary or secondary amine, an amide, and aromatic nitro compounds in the NPs formed [94]. NPs synthesized using Candida albicans showed a similar size range of 15-25 nm, confirmed by SEM, TEM, and XRD analysis [95]. Aspergillus species have been widely employed for the synthesis of ZnO NPs, and the NPs synthesized from fungal strains were spherical in most cases. Table 4 gives a brief account of the fungi commonly utilized for ZnO NP synthesis [97].
Green synthesis of ZnO NPs using other green sources
Biocompatible chemicals are used as other green sources for the synthesis of nanoparticles. This is a fast, economic process that eliminates the production of any side products in the nucleation and synthesis reactions of the nanoparticles, and it leads to the formation of well-dispersed nanoparticles of controlled shape and size [91]. Nanoparticles synthesized through the wet chemical method gain special properties, such as antibacterial efficiency of up to 99.9% when coated on a cotton fabric [96]. Table 5 illustrates a few other green sources that have been employed for the synthesis of ZnO NPs [91,96,97].
Conclusion
Biosynthesis of nanoparticles using an eco-friendly approach has been an area of focused research in the last decade. Green sources act as both stabilizing and reducing agents for the synthesis of shape- and size-controlled nanoparticles. Future prospects of plant-mediated nanoparticle synthesis include the extension of laboratory-based work to industrial scale, the elucidation of the phytochemicals involved in nanoparticle synthesis using bioinformatics tools, and the derivation of the exact mechanism involved in the inhibition of pathogenic bacteria. Plant-based nanoparticles can have huge applications in the food, pharmaceutical, and cosmetic industries and have thus become a major area of research.
Table 1. Plant-mediated synthesis of ZnO NPs. (Representative entries report hexagonal wurtzite and quasi-spherical morphologies, with FTIR bands of O-H of water, alcohol, and phenol, C-H of alkane, O-H of carboxylic acid, and C=O of the nitro group.)
Table 2. Bacteria-mediated synthesis of ZnO NPs.
Table 4. Fungus-mediated synthesis of ZnO NPs.
Table 5. ZnO NP synthesis by protein.
"year": 2017,
"sha1": "3e8ddb6540d5ff092f08ea7a9752be4538494214",
"oa_license": "CCBYNC",
"oa_url": "https://reffit-tech.tpu.ru/index.php/res-eff/article/download/163/138",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d21ab35d76cd37ba81e9b148d1aa88bbc7e79811",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
On the Symmetry of the Bone Structure Density over the Nasopalatine Foramen via Accurate Fractal Dimension Analysis
The objective of the present paper is to describe the anatomical considerations surrounding the nasopalatine foramen by relating them to the study of bone structure density via an accurate fractal dimension analysis of that area. We consecutively selected a sample of 130 patients, all of them with cone beam computed tomography (CBCT) images acquired for treatment needs. We chose a specific window (ROI) coinciding with an axial cut at the level of the anterior nasal spine. Different anthropometric measurements were analyzed, and a novel fractal dimension analysis was performed. Our sample of 130 patients was divided into two groups: group one (65 subjects without loss of teeth) and group two (65 patients with the absence of some teeth). In the sample, 52.31% were women (68 people). Mann–Whitney tests were applied to obtain the statistical results. The mean age of the patients in the sample was 53.67 years, with a standard deviation of 8.20 years. We conclude that the fractal dimension, a mathematical invariant, behaves symmetrically for the binary images from the CBCT scanners of each subject in our study sample. We also conclude that there were no significant differences among the anthropometric measures used, either within subjects or between the groups. Therefore, patterns of symmetry were appreciated at a complete range of levels.
Introduction
Embryologically, the formation (histogenesis) and mineralization (ossification) of hard facial tissues takes place after that of the soft tissues, at the end of the embryonic period (10-12 weeks). There are two types of ossification. The first is intramembranous ossification, which proceeds from mesenchyme that gives rise to osteoid ossification centers arranged as a 3D network of trabeculae. The second is endochondral ossification, or ossification from a cartilaginous mold, in which a cast of hyaline cartilage is replaced by bone tissue. The type of ossification depends on the future function of the bone. In growth areas exposed to stress, the mechanism of ossification is intramembranous; ossification is endochondral wherever there is pressure, since cartilage is both resistant and flexible and properly supports this type of load. Intramembranous ossification predominates in the bones of the face, or viscerocranium. The ossification of the maxillary bone commences at the end of the sixth week of intrauterine life and proceeds from two points, one located in the anterior (premaxillary) area and the other in the posterior (postmaxillary) area. The trabeculae formed from the premaxillary ossification center rapidly extend in three directions: (1) upward, to form the anterior wall of the descending process; (2) forward, towards the anterior nasal spine; and (3) downward, to form the alveolar processes of the upper incisors. Both cranial and facial growth occurs in the three dimensions of space; it is usually harmonic and proportional, though not uniform. That growth is produced by the combination of four different biological phenomena, namely, (1) replacement of cartilage by bone; (2) growth at the suture level; (3) peripheral bone apposition associated with internal resorption; and (4) dental eruption [1]. From an anatomical viewpoint, the maxillary bone is part of the facial bone mass or complex, being its functional center. The external configuration of the upper jaw is quite irregular; however, a quadrilateral shape may be recognized, so that both external and internal faces as well as four edges can be distinguished. The maxillary bone is paired and fused in the midline by the intermaxillary suture, so it constitutes the center of the upper facial mass. As such, it takes part in the buccal cavity, the bony palate, and the orbital, nasal, pterygopalatine, and zygomatic fossae. We would like to point out that the maxillary bone consists of:
• Body: it constitutes most of the bone; pyramidal in shape, it forms part of the orbit, the nasal cavity, and the infratemporal fossa, and makes up the middle third of the face. In its anterior region, it presents both the anterior nasal spine and the nasal notch.
• Zygomatic apophysis, which articulates medially with the maxillary process of the zygomatic bone.
• Palatine process, extending medially to form the greatest part of the hard palate, articulating in the midline with its contralateral counterpart and posteriorly with the palatine bone.
• Alveolar process, which supports the upper teeth. The convex region that covers the canine on the vestibular side is the canine eminence. Mesial to it there is a concavity, the incisive fossa, while the canine fossa is a concavity distal to the canine eminence. The most posterior region of the alveolar process is the maxillary tuberosity [2].
Topographically, there are three zones, namely,
1. Anterior zone, which extends from the intermaxillary suture to the canine eminence.
The anterior maxilla, also called the premaxilla area, contains a key anatomical structure, the anterior palatal, incisor, or nasopalatine canal (NPC in the sequel). The NPC is located immediately below the incisive papilla. Both anterior palatine ducts open into the incisive fossa of the osseous palate, possibly passing through the junction line of the incisor (premaxillary) bone with the maxilla, and constitute the primitive communication between the mouth and the nose. The NPC was first described in a general way by Stenson in 1683. It is located in the midline of the palate, posterior to the central incisors and below the interincisal papilla. It projects vertically in the premaxillary region and has two ends: one towards the nasal floor, with two openings directed towards each side of the septum, known as the foramina or Stenson holes; the other end corresponds to the opening towards the oral cavity, called the incisive foramen, whose average diameter is 3.62 mm. This anatomical structure houses both the nasopalatine nerve and the nasopalatine artery, which originates from the sphenopalatine artery (a terminal branch of the internal maxillary artery); together they innervate and irrigate the nasal floor mucosa and the anterior palatal mucosa. The NPC also contains fibrous connective tissue, fatty tissue, and some minor salivary glands [3]. Balance, symmetry, and harmony between facial structures are fundamental elements in the attainment of facial beauty, since they play a fundamental role in the psychosocial development of the individual. In this sense, owing to the close relationship of teeth with the formation and preservation of the alveolar processes, tooth loss causes an irreversible reduction of bone volume, both horizontal and vertical, greater in the maxilla and more marked on the vestibular than on the palatal side, with serious clinical, functional, and aesthetic implications [4]. The rhythm of that bone loss depends on several factors, such as the presence of other teeth in the arch, the maneuvers performed during extraction, the existence of previous infectious or cystic pathology, and healing complications such as alveolitis. Patients with a long period of partial or total edentulism suffer severe atrophy of the jaws, with great asymmetry of the dental arches. As a consequence, the retention and stability of partial or complete removable prostheses are negatively affected [5]. Currently, in developed countries, dental implants are considered the best option to replace lost teeth (over removable prostheses or bridges). However, in case of bone deficiency, the placement of implants may be quite complicated, especially in the maxilla, where bone loss may be so pronounced that the floor of the maxillary sinus is practically in contact with the palatal fibromucosa. In such cases, it becomes necessary to perform bone augmentation procedures to allow the placement of implants of greater length and diameter, in a more favorable position, thus improving medium- and long-term results [6]. These advanced oral surgery techniques require a thorough morphological study of the area to be treated, for which 3D radiology is essential. When no distortion is present, this type of technique allows better planning of implant treatments according to the receptor site. In this way, cone beam computed tomography (CBCT) allows clinicians to carry out a variety of analyses of the characteristics of bone structures, such as bone quality, or
to inspect the topography and thickness of cortical bones. Bone volume can be examined, essentially to predict the vascularity needed for bone maturation and preservation, and the detection of bone defects is crucial when deciding on a graft procedure. The use of CBCT specialized for the dentomaxillofacial area has been a step forward compared to conventional CT owing to its greater precision, lower cost and radiation, better accessibility, and shorter scan duration [7]. The most relevant anatomical formation of the anterior region of the maxilla is the NPC. In the scientific literature, surgical difficulties and anatomical limitations during implant surgery have been described in relation to the location of that structure. In the study conducted by Bornstein et al. [8] to evaluate the different anatomical variations of the NPC, a single canal was identified in 45 cases, two separate parallel canals in 15 cases, and variations of the "Y" type in 40 cases. The dimensions of the NPC revealed an average diameter of the nasal openings of 3.49 mm and a broad incisive foramen with a diameter of 4.45 mm. The average length of the NPC was found to be 10.99 mm. The dimensions of the buccal bone plate showed increasing width from the crestal to the apical measurements. Liang et al. [9], in 2009, conducted a study to determine the anatomical variability of the NPC as well as its anatomical and histological characteristics. The diameter of the canal was found to be enlarged with age and in edentulous patients. In 2018, Hakbilen et al. [10] analyzed three-dimensionally (by CBCT) the anatomical dimensions of the NPC of 619 individuals aged 17 to 86 years and correlated them with age, gender, and edentulism status. They found large morphological differences among individuals. In particular, 26.17% of the canals had a conical shape, 24.71% were hourglass-shaped, 16.80% cylindrical, 15.83% funnel-shaped, 11.14% banana-shaped, and 5.33% branched. Men and women showed significant differences in canal length, as well as in the thickness of the vestibular cortices in the sagittal sections. Age and edentulism also affected the length of the NPC and the thickness of the vestibular cortex.
Knowledge of the theories that may clarify the etiopathogenesis of the development of the nasopalatine region and the NPC in humans is necessary to understand the morphology of that region and the morbidity that takes place therein, with tomography being a great help for that purpose. Despite all these kinds of studies, no parallel analysis of bone structure density has ever been conducted. The studies we have reviewed always use the same technique for fractal dimension calculations, namely, the box-counting technique ([8,9]). As we have explained previously ([10,11]), our technique consists of a much more precise algorithm that yields reliable results, much closer to reality than the others. In summary, the objective of this work is to carry out an exhaustive analysis of the area surrounding the nasopalatine foramen with the help of new mathematical techniques, with the idea of detecting generic symmetry patterns and providing empirical evidence that the bony trabeculae resemble a fractal in their structure and, therefore, possess a fractal dimension whose real value can be accurately approximated by our procedure. As such, the main research question in this study is as follows: are there patterns of symmetry in bone structure density over the nasopalatine foramen?
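For reference, the classical box-counting dimension that the reviewed studies rely on has the standard textbook definition recalled below; this is not our refined algorithm, whose details appear in [10,11]:

```latex
\dim_B(F) \;=\; \lim_{\varepsilon \to 0} \frac{\log N_\varepsilon(F)}{\log\left(1/\varepsilon\right)}
```

where $N_\varepsilon(F)$ denotes the number of boxes of side $\varepsilon$ in an $\varepsilon$-grid that intersect the set $F$. In practice, the limit is estimated as the slope of a least-squares fit of $\log N_\varepsilon(F)$ against $\log(1/\varepsilon)$ over a range of scales.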
Materials and Methods
This is a cross-sectional, observational clinical study for which we consecutively selected a sample of 153 patients from the University Dental Clinic of Murcia (Spain). The study was approved by the Bioethics Committee of the University of Murcia. All individuals gave their informed consent in writing before participating. The following inclusion criteria were applied: patients in good systemic and dental health, not pregnant, and with images free of artifacts. A total of 130 patients met all the criteria described above (five were discarded because they had been treated with bisphosphonates, as well as 18 other subjects whose images presented artifacts or were not considered of sufficient quality for our fractal dimension procedure). All CBCT scans were performed using the same Planmeca equipment, a Planmeca ProMax 3D Max (Planmeca Oy, Helsinki, Finland), calibrated according to technical considerations. X-rays were obtained with the patient in the same (prone) position. The beam emission parameters were 96 kV and 8 mA, with an exposure time of 12 s (11.94 s) and an image size of 501 × 501 × 466 voxels (each voxel being equivalent to 200 µm). The evaluation software was the Romexis 2.5.1 program (Planmeca Oy, Helsinki, Finland), which allows the image to be observed in a multiple window where the axial, coronal, and sagittal planes can be visualized at 0.2 mm intervals, in addition to a 3D view. As indicated above, the sample was divided into two groups. We selected a specific ROI obtained in the axial plane at the height of the nasal spine, visualizing the nasopalatine foramen and the canine mamelons on both sides (see Figure 1). We made the following measurements: distance from the anterior wall of the nasopalatine foramen to the anterior nasal spine (DCV), distance from the back wall of the nasopalatine foramen (NF) to the border of the palatal bone (DCP), distance from the right side wall of the NF to the right canine mamelon (DVD), distance from the left lateral wall of the NF to the left canine mamelon (DVI), the area of the NF, and other values provided by the software itself: W, H, mean, and standard deviation. All measurements were made by a single examiner duly trained for the purpose. The measurements were repeated by the examiner one month after the first ones; if there was a discrepancy in any measurement, the average of both was taken, and the kappa index was used to assess agreement. Once all the results were obtained, a database was created, and the necessary Mathematica code was written to perform all the statistical analyses.
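As a minimal sketch of the statistical comparison step, the following assumes the measurements have been loaded into two arrays; the paper reports using Mathematica, so this scipy-based version, with simulated values matching the reported means and standard deviations, is only illustrative.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
dvd = rng.normal(12.95, 1.75, size=130)  # simulated DVD values
dvi = rng.normal(12.98, 1.34, size=130)  # simulated DVI values

stat, p_value = mannwhitneyu(dvd, dvi, alternative="two-sided")
# A p-value above 0.05 would be read, as in the paper, as no significant
# asymmetry between the right-side and left-side distances.
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```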
Description of the Sample
In this section, we describe the sample of patients that took part in our study. A total of 130 subjects were involved, of whom 52.31% (68 people) were women. Each patient was assigned to one of the two following groups: group one consisted of 65 patients, all of them without loss of teeth, and 65 subjects were assigned to group two (with the absence of some dental pieces). A first descriptive analysis was carried out with the aim of characterizing our sample of patients. As such, for each patient in our study, the following attributes were considered.
1. Age: a mean age of 53.67 years with a standard deviation of 8.20 years was found.
2. DCV: a mean of 7.54 and a standard deviation of 1.53 were found.
3. DCP: a mean of 3.99 and a standard deviation of 1.65 were found.
4. DVD: a mean of 12.95 and a standard deviation of 1.75 were obtained.
5. DVI: a mean of 12.98 and a standard deviation of 1.34 were found.
6. Area: a mean of 5.63 and a standard deviation of 2.18 were found.
7. W: a mean of 2.96 and a standard deviation of 0.71 were found.
8. H: a mean of 2.23 and a standard deviation of 0.59 were found.
9. Mean: a mean of 158.66 and a standard deviation of 120.83 were obtained.
10. DIM: a mean fractal dimension of 1.69 and a standard deviation of 0.09 were found.
Table 1 summarizes the sample description by attributes.
Sample Description by Sex
Next, we shall describe in detail our sample of patients by sex groups.
Female Population
It contained 68 subjects (52.31% of the whole sample), of whom 34 were assigned to group one and 34 to group two. Regarding the attributes explored for each female patient in the present study, the results we obtained are as follows.
1. Age: a mean age of 54.62 years with a standard deviation of 8.99 years was found.
2. DCV: a mean of 7.29 and a standard deviation of 1.63 were obtained.
3. DCP: a mean of 4.05 and a standard deviation of 1.69 were found.
4. DVD: a mean of 12.82 and a standard deviation of 1.62 were found.
5. DVI: a mean of 12.38 with a standard deviation of 1.62 was found.
6. Area: a mean of 5.33 and a standard deviation of 1.98 were found.
7. W: a mean of 3.03 and a standard deviation of 0.72 were found.
8. H: a mean of 2.26 and a standard deviation of 0.44 were found.
9. Mean: a mean of 142.65 and a standard deviation of 107.52 were obtained.
10. DIM: a mean fractal dimension of 1.68 and a standard deviation of 0.13 were found.
Male Population
It contained 62 people (47.69% of the whole sample), of whom 31 were assigned to group one and 31 to group two. As with the female population, descriptive statistics for the attributes of the male one were calculated. The results are provided below.
1. Age: a mean age of 53.03 years with a standard deviation of 7.05 years was found.
2. DCV: a mean of 8.75 and a standard deviation of 1.35 were found.
3. DCP: a mean of 4.13 and a standard deviation of 1.26 were found.
4. DVD: a mean of 13.71 and a standard deviation of 1.96 were obtained.
5. DVI: a mean of 13.13 and a standard deviation of 1.70 were found.
6. Area: a mean of 5.18 and a standard deviation of 2.60 were found.
7. W: a mean of 2.64 and a standard deviation of 0.73 were found.
8. H: a mean of 2.01 and a standard deviation of 0.66 were found.
9. Mean: a mean of 160.16 and a standard deviation of 98.83 were obtained.
10. DIM: a mean fractal dimension of 1.69 and a standard deviation of 0.10 were found.
Some Comparisons by Sex
Some preliminary comparisons were carried out between each attribute for both the female and the male populations involved in this study. The obtained results appear below. All the conclusions were obtained at a confidence level of 95%.
1. Age: no significant differences were found; the Mann–Whitney test gave a p-value of 0.12.
2. DCV: in this case, significant differences were found, with a p-value of 0.01* in the Mann–Whitney test (* denotes significance at a confidence level of 95%).
3. DCP: no significant differences were found (Mann–Whitney p-value of 0.25).
4. DVD: the Mann–Whitney test gave a p-value of 0.25, so no significant differences were found.
5. DVI: the Mann–Whitney test gave a p-value of 0.11, so no significant differences were found.
6. Area: no differences were observed; the Mann–Whitney test gave a p-value of 0.13.
7. W: a p-value of 0.71 was found in the Mann–Whitney test, so no significant differences were found.
8. H: no significant differences were found; the Mann–Whitney test gave a p-value of 0.26.
9. Mean: the Mann–Whitney test gave a p-value of 0.45, so no significant differences were found.
10. DIM: no significant differences were found; the Mann–Whitney test gave a p-value of 0.33.
A First Step towards Symmetry
We compared the variable DVD (distance from the right side wall of the nasopalatine foramen to the right canine mamelon) to DVI (distance from the left lateral wall of the nasopalatine foramen to the left canine mamelon) for the female population. No significant differences were found at a confidence level of 95%; in fact, we obtained a p-value of 0.34 in the Mann–Whitney test. Thus, the variables DVD and DVI behave symmetrically in the female population. A similar study was carried out for the male population, where a p-value of 0.33 in the Mann–Whitney test also highlights the symmetric behavior of these two lateral variables.
Fractal Dimension Analysis
The fractal dimensions of 130 binary images, one from each patient in our sample, were accurately calculated and analyzed. To this end, an appropriate collection of scanners was selected from each patient taking part in our study. The next step was to convert those scanners into binary images by assigning "ones" to the pixels exceeding a certain gray-level threshold and "zeros" otherwise. Figure 2 provides, for illustration purposes, a graphical representation of an actual scanner from a subject in our study together with its binary images. A mean fractal dimension of 1.70 and a standard deviation of 0.09 were found. We also extracted both the left and right sides of each binary image in the sample and calculated their fractal dimensions. A mean fractal dimension of 1.68 and a standard deviation of 0.08 were obtained for the left side binary images; similarly, a mean fractal dimension of 1.72 and a standard deviation of 0.08 were found for the right side images. The results of the fractal dimension analysis carried out for each kind of binary image and each group appear in Table 2. The differences (in absolute value) between the fractal dimensions of each left binary image and its corresponding right side one were calculated, giving a mean difference of 0.09 with a standard deviation of 0.07. Table 3 contains the results of the analysis of those differences for each group. In addition, Figure 3 (left, first row) illustrates the empirical distribution of the fractal dimension values for all 130 binary images analyzed, as well as the empirical distribution of the fractal dimensions of their corresponding lateral binary images. The empirical distribution of the differences between each left binary image and its corresponding right side one is depicted in Figure 3 (right, first row).

Figure 3. The blue line in each plot on the left illustrates the empirical distribution of the fractal dimension values for all the binary images analyzed from each group in our patient sample; the discontinuous line marks the mean fractal dimension of the binary images from all the scanners analyzed. The orange line (resp., the green line) represents the empirical distribution of the fractal dimensions of the left side (resp., the right side) binary images from each group of patients. Each graph on the right depicts the empirical distribution of the differences (in absolute value) between each left side binary image and its corresponding right side one from each group of patients; the discontinuous line represents the mean of such differences.
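For illustration, a minimal sketch of the binarization step together with a classical box-counting estimate is given below; our own procedure is more accurate than plain box counting (see [10,11]), so this code is only a rough stand-in, and the threshold and test image are placeholders rather than actual CBCT data.

```python
import numpy as np

def binarize(slice_gray: np.ndarray, threshold: float) -> np.ndarray:
    """Assign 1 to pixels above the gray-level threshold, 0 otherwise."""
    return (slice_gray > threshold).astype(np.uint8)

def box_counting_dimension(binary: np.ndarray) -> float:
    """Classical box-counting estimate via a log-log least-squares fit."""
    n = min(binary.shape)
    sizes = [2 ** k for k in range(1, int(np.log2(n)))]
    counts = []
    for s in sizes:
        boxes = 0  # boxes of side s containing at least one foreground pixel
        for i in range(0, binary.shape[0], s):
            for j in range(0, binary.shape[1], s):
                if binary[i:i + s, j:j + s].any():
                    boxes += 1
        counts.append(boxes)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
img = binarize(rng.random((256, 256)), 0.5)  # placeholder for a CBCT slice
print(box_counting_dimension(img))           # near 2 for a dense random image
```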
To properly test the null hypothesis $\mu_{a,l} - \mu_{a,r} = 0$, where $\mu_{a,l}$ (resp., $\mu_{a,r}$) denotes the mean of the fractal dimensions of the left (resp., right) side binary images of the whole sample, we obtained a (medium) effect size of 0.50 and hence a statistical power of 0.85 at a significance level of 0.05. Thus, that significance level is adequate to properly carry out such a comparison.
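The power figure can be roughly reproduced with a two-sample t-test approximation, as sketched below; whether this exact approximation matches the computation performed here is an assumption, so the snippet is only indicative of the order of magnitude.

```python
from statsmodels.stats.power import TTestIndPower

# Power for a medium effect size (d = 0.50) with 65 images per side and
# a two-sided test at alpha = 0.05.
analysis = TTestIndPower()
print(analysis.power(effect_size=0.50, nobs1=65, alpha=0.05))  # about 0.81
```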
In this way, a p-value of 0.21 was found by the Mann–Whitney test when comparing the (mean of the) fractal dimension values of the left side binary images of the whole sample with the (mean of the) fractal dimensions of their corresponding right side ones. That empirical result suggests a symmetric behavior of the fractal dimensions of the left side binary images with respect to their corresponding right side ones for all the subjects involved in the present study.
Next, we analyze in detail the fractal dimensions of the binary images of the subjects in each group.
Fractal Dimension Analysis for Group One Patients
Group one consists of 65 subjects. A mean fractal dimension of 1.67 with a standard deviation of 0.06 was found. Moreover, we obtained a mean fractal dimension of 1.66 and a standard deviation of 0.04 for the left side binary images of all the patients assigned to group one. Similarly, a mean fractal dimension of 1.68 and a standard deviation of 0.04 were obtained for the right side images of the subjects in group one. Further, for each patient in group one, the differences (in absolute value) between the fractal dimension of each right side image and that of its corresponding left side one were analyzed; a mean difference of 0.04 and a standard deviation of 0.03 were found. Figure 3 (left, second row) shows the empirical distribution of the fractal dimensions of the 65 binary images of the subjects in group one as well as the empirical distribution of the fractal dimensions of their corresponding lateral images. Figure 3 (right, second row) illustrates the empirical distribution of the differences between the fractal dimension of each left side image and that of its corresponding right side one for the subjects in group one.
To appropriately test the null hypothesis µ_1,l − µ_1,r = 0, where µ_1,l (resp., µ_1,r) refers to the mean of the fractal dimensions of the left (resp., right) side binary images of group one subjects, we obtained a (medium) effect size equal to 0.50, leading to a statistical power of 0.85 at a significance level of 0.05. As such, that significance level is valid to properly carry out that comparison.
A p-value equal to 0.19 was provided by the Mann-Whitney test when comparing the (mean of the) fractal dimension values of the left side binary images with respect to the (mean of the) fractal dimensions of their corresponding right side ones for all the subjects in group one.That empirical result suggests a symmetric behavior of the fractal dimensions of the left side binary images with respect to their corresponding right side ones for all the patients in group one.
Fractal Dimension Analysis for Patients in Group Two
Group two contains 65 subjects. A mean fractal dimension of 1.70 with a standard deviation equal to 0.06 was found. Moreover, we obtained a mean fractal dimension equal to 1.68 and a standard deviation of 0.04 for the left side binary images of all the patients assigned to group two. Similarly, a mean fractal dimension of 1.70 and a standard deviation equal to 0.04 were obtained for the right side images from the subjects in group two. Further, for each patient in group two, the differences (in absolute value) between the fractal dimension of each right side image and the fractal dimension of its corresponding left side one were analyzed. As such, a mean difference equal to 0.05 and a standard deviation of 0.04 were found. Figure 3 (left, third row) shows the empirical distribution of the fractal dimensions of the 65 binary images, one per subject in group two, as well as the empirical distributions of the fractal dimensions of their corresponding lateral images. On the other hand, Figure 3 (right, third row) illustrates the empirical distribution of the differences between the fractal dimension of each left side image and the fractal dimension of its corresponding right side one for the subjects in group two.
To properly test the null hypothesis µ_2,l − µ_2,r = 0, where µ_2,l (resp., µ_2,r) refers to the mean of the fractal dimensions of the left (resp., right) side binary images of group two subjects, an effect size equal to 0.57 was obtained, leading to a statistical power of 0.85 at a significance level of 0.05. As such, that significance level was adequate to properly carry out such a comparison.
A p-value equal to 0.38 was provided by the Mann-Whitney test when comparing the (mean of the) fractal dimension values of the left side binary images with respect to the (mean of the) fractal dimensions of their corresponding right side ones for all the subjects in group two (at a confidence level of 95%).That empirical result suggests a symmetric behavior of the fractal dimensions of the left side binary images with respect to their corresponding right side ones for all the patients in group two.
Analysis of Fractal Dimension by Groups
In this section, we perform a series of pairwise comparisons by groups regarding the (empirical) distribution of the (lateral) fractal dimensions of the binary images assigned to each of them. In fact, recall that the research question tackled in the present study is the following: are there patterns of symmetry in the bone structure density over the nasopalatine foramen? In this section, we provide some empirical evidence regarding such symmetry patterns. More specifically, we carry out all the pairwise comparisons between the fractal dimension distributions (resp., the lateral fractal dimension distributions) of the binary images from the two groups.
Fractal Dimension Comparison between Groups One and Two
Next, we compare the mean of the fractal dimensions of the whole binary images from group one patients with respect to the mean of the fractal dimensions of the whole binary images from group two patients.Figure 4 (first row) compares the empirical distributions of the fractal dimensions of all the binary images in each group.
To appropriately test the null hypothesis µ_1 − µ_2 = 0, where µ_i : i = 1, 2 denotes the mean of the fractal dimensions of the whole binary images of group i subjects, an effect size equal to 0.50 was obtained, leading to a statistical power of 0.85 at a significance level of 0.05. Thus, that significance level was adequate to properly carry out that comparison.
A p-value equal to 0.28 was provided by the Mann-Whitney test, which suggests that no significant differences exist between the two groups.
We also compared the means of the fractal dimensions of the left side binary images from each group.Figure 4 (second row, left) depicts the empirical distributions of the fractal dimensions of the left binary images in each group.
Thus, to properly test the null hypothesis µ_1,l − µ_2,l = 0, where µ_i,l : i = 1, 2 denotes the mean of the fractal dimensions of the left side binary images of group i subjects, an effect size equal to 0.50 was obtained, leading to a statistical power of 0.85 at a significance level of 0.05. Therefore, such a significance level was adequate to properly carry out that comparison.
A p-value of 0.14 was found by the Mann-Whitney test at a confidence level of 95%. That result provides some empirical evidence of a similar behavior of the empirical distributions of the fractal dimensions of the left side binary images from both groups.
The means of the fractal dimensions of the right side binary images from each group were compared, as well.Figure 4 (second row, right) illustrates the empirical distributions of the fractal dimensions of the right binary images in each group.
Thus, to properly test the null hypothesis µ_1,r − µ_2,r = 0, where µ_i,r : i = 1, 2 denotes the mean of the fractal dimensions of the right side binary images of group i subjects, an effect size equal to 0.50 was obtained, which leads to a statistical power of 0.85 at a significance level of 0.05. Hence, that significance level was adequate to properly carry out such a comparison. A p-value of 0.29 was found by the Mann-Whitney test at a confidence level of 95%, which suggests that the empirical distributions of the fractal dimensions of the right side binary images from both groups are similar. Table 4 summarizes the results of the statistical comparisons of the means of the fractal dimensions for each kind of image and group.
A Note on the Multiple Comparisons Problem
To protect against Type I errors, we applied a Bonferroni-type adjustment. That procedure yields a more stringent significance threshold, which depends on the number of hypotheses tested, and makes it less likely to commit Type I errors. Thus, let H_1, . . ., H_6 be the 6 null hypotheses tested in this section. If p_i denotes the (Mann-Whitney) p-value for testing the null hypothesis H_i, the Bonferroni procedure states that H_i has to be rejected whenever p_i ≤ (1/6)α, where α = 0.05 is the significance level considered throughout this article. In this way, the threshold provided by the Bonferroni approach is α_B = α/6 ≈ 0.0083. Since all the (Mann-Whitney) p-values calculated (cf., e.g., Table 4) stand > α_B, none of the null hypotheses is rejected, and we discard significant effects arising from Type I errors.
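A minimal sketch of this adjustment, applied to the six Mann-Whitney p-values reported above, could look as follows (the ordering of the hypotheses in the list is illustrative):

    # The six Mann-Whitney p-values reported in this study: left vs. right for
    # the whole sample, group one, and group two, followed by the three
    # between-group comparisons (whole, left side, and right side images).
    p_values = [0.21, 0.19, 0.38, 0.28, 0.14, 0.29]
    alpha = 0.05
    alpha_b = alpha / len(p_values)  # Bonferroni threshold: 0.05 / 6 ~ 0.0083

    for i, p in enumerate(p_values, start=1):
        print(f"H{i}: p = {p:.2f} -> reject: {p <= alpha_b}")
    # Every p-value exceeds alpha_b, so no null hypothesis is rejected.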
On the other hand, recall that a Type II error consists of accepting the null hypothesis when it is actually false. Since the statistical power is the probability of rejecting the null hypothesis when it is false, we have that statistical power = 1 − β, where β denotes the probability of a Type II error. In this paper, we tested null hypotheses of the form µ_i = µ_j with i ≠ j, where µ_i denotes the mean of the fractal dimensions of the binary images from all the patients in group i. To this end, a significance level α = 0.05 was selected, and our sample consisted of two groups containing 65 patients each. As stated above, a statistical power of 0.85 was obtained under such an experimental design. Hence, β = 0.15 is the probability of a Type II error in this study, i.e., the probability of accepting the null hypothesis when it is actually false equals 15%. An option to reduce that β would consist of increasing the sample size in further studies. Indeed, that would allow us to hold α at the desired level while still reducing β. As such, the statistical power of our study would increase.
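As a sketch of this trade-off, the power and the required sample size can be approximated by treating the two-sample comparison as an independent-samples t-test (an assumption made for illustration; the exact figure depends on the approximation chosen, so it may differ slightly from the 0.85 reported above):

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Approximate power for two groups of 65 subjects, a medium effect size
    # (Cohen's d = 0.5), and a significance level of 0.05.
    power = analysis.power(effect_size=0.5, nobs1=65, alpha=0.05, ratio=1.0)
    print(f"Approximate statistical power: {power:.2f}")

    # Conversely, the sample size per group needed to reach a higher power
    # (e.g. 0.95) while holding alpha fixed, which would reduce beta to 0.05.
    n_per_group = analysis.solve_power(effect_size=0.5, power=0.95, alpha=0.05)
    print(f"Subjects per group required for power 0.95: {n_per_group:.0f}")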
Conclusions
All the anatomical considerations that surround the nasopalatine foramen have been described in this paper. In this regard, an analysis of bone density via an efficient calculation of the fractal dimension has been carried out in that area. A sample of 130 patients was considered: 65 of them were assigned to group one (without loss of teeth) and 65 to group two (with the absence of some teeth). 63 women took part in the final stage of our study. The mean age of the patients was equal to 53.16 years with a standard deviation of 8.73 years. For each subject, cone beam computed tomography was performed for treatment needs. A specific window, which coincides with an axial cut at the level of the anterior nasal spine, was selected. In that area, different anthropometric measurements were performed. In addition, we applied a novel and accurate approach to calculate the fractal dimension of the binary images generated from each patient's CBCT scan. Three types of binary images were used for each subject, including both the right and left sides of the original one. Mann-Whitney tests provided some statistical evidence regarding a symmetric behavior of such binary images. Moreover, we found no significant differences regarding the anthropometric measures explored in the different groups of our study. Accordingly, several patterns of symmetry were observed at all the levels explored.
In this paper, we highlight the utility of an accurate fractal dimension model as a reliable and unbiased approach for clinicians to analyse the density of the bone structure through CBCT images from a large sample of patients. In this way, it becomes a reproducible and non-invasive tool to properly quantify the bone structure density. Advanced surgical techniques for maxilla reconstruction need stable anatomic measures that allow the assessment of both the morphological and structural characteristics of a given zone, with the aim of performing minimally invasive interventions with the best functional and aesthetic results, which have a positive impact on the quality of life of patients.
Mathematics provides the natural environment for modelling and analyzing problems from different areas, in particular from the health sciences; see, for instance, some applications to the war against cancer [12,13]. Appendix A presents all the mathematical machinery used in this work.
It is worth mentioning that the covering Γ_n of Γ is called level n of that fractal structure. Equivalently, Definition A2 states that level n + 1 is a strong refinement of level n of Γ. Additionally, the levels of the natural fractal structure on R^2, ∆ = {∆_n : n ∈ N}, are defined as

∆_n = {[k/2^n, (k + 1)/2^n] × [l/2^n, (l + 1)/2^n] : k, l ∈ Z}.

In particular, the natural fractal structure on R^2 can be induced on the unit square by defining

Γ_n = {[k/2^n, (k + 1)/2^n] × [l/2^n, (l + 1)/2^n] : k, l ∈ {0, 1, . . ., 2^n − 1}}.

Notice that the first level consists of four squares with sides equal to 1/2, Γ_2 contains 4^2 squares with sides equal to 1/2^2, and in general, level n consists of 4^n squares with sides equal to 1/2^n.
In this paper, we apply the following result to efficiently calculate the box dimension of the binary images from the CBCT scan of each patient. Observe that it suffices to calculate the number of δ-cubes that intersect α^(−1)(F) ⊆ [0, 1] for the purposes of calculating the (lower/upper) dim_B(F).
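For illustration, a box dimension estimate in the spirit of the levels Γ_n described above can be sketched as follows; this is a generic box-counting implementation, not the authors' exact algorithm, and it assumes a square binary image whose side is a power of two and which contains at least one non-zero pixel.

    import numpy as np

    def box_counting_dimension(img: np.ndarray) -> float:
        """Estimate the box dimension of a binary image (square, side 2^k).

        At level n the image is covered by boxes of side size / 2^n; we count
        the boxes containing at least one "one" pixel and regress log N(n)
        against log (size / box side), whose slope estimates dim_B.
        """
        size = img.shape[0]
        log_counts, log_scales = [], []
        box = size
        while box >= 1:
            n_boxes = sum(
                img[i:i + box, j:j + box].any()
                for i in range(0, size, box)
                for j in range(0, size, box)
            )
            log_counts.append(np.log(n_boxes))
            log_scales.append(np.log(size / box))
            box //= 2
        slope, _ = np.polyfit(log_scales, log_counts, 1)
        return slope

For a binary image whose non-zero pixels fill a planar region, the estimate approaches 2, while for a thin curve it approaches 1; values around 1.7, as reported above, lie in between.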
Figure 1. Selection of the cut to be studied and the measurements taken.
Figure 2. Graphical representation of an actual cone beam computed tomography (CBCT) scan from a patient that took part in our study (left) and its corresponding binary images.
Figure 4. Empirical distributions of the fractal dimensions of the binary images from both groups one (blue line) and two (first row), and empirical distributions of the fractal dimensions of their lateral binary images (second row). The discontinuous straight lines mark the mean fractal dimension for each group and each kind of binary image.
Figure A1. First two levels of the natural fractal structure on [0, 1] × [0, 1]. Notice that the first level consists of four squares with sides equal to 1/2, Γ_2 contains 4^2 squares with sides equal to 1/2^2, and in general, level n consists of 4^n squares with sides equal to 1/2^n.
Table 1. Sample description by attributes. Recall that DCV denotes the distance from the anterior wall of the nasopalatine foramen to the anterior nasal spine, DCP means the distance from the back wall of the nasopalatine foramen (NF) to the border of the palate bone, DVD is the distance from the right side wall of the NF to the right canine mamelon, DVI refers to the distance from the left lateral wall of the NF to the left canine mamelon, Area is the area of the NF, and W, H, mean, and standard deviation are provided by the software. Columns report values for the whole sample (n = 130), the female group (n = 68), and the male group.
Table 2. Results of the fractal dimension analysis for each kind of binary image and group.
Table 3. Analysis of the differences between the fractal dimensions of each left binary image and its corresponding right side one for each group.
Table 4. Comparison of the empirical distributions of the (lateral) fractal dimensions from each group. The p-values from the Mann-Whitney tests involving both groups are provided. No significant differences were found at a confidence level of 95%.
"year": 2019,
"sha1": "e361ccaab4c2d13b3e65a5fec54e79895f1d2b9e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-8994/11/2/202/pdf?version=1550817317",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e361ccaab4c2d13b3e65a5fec54e79895f1d2b9e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Engineered exploitation of microbial potential
We are at a critical point in our history, faced with the challenges induced by our own large-scale activities, over many years, which are leading to dramatic changes in our climate and an urgent need for remedial measures. Added to these concerns is the continual growth in populations and the pressure this puts on resources and environmental quality. These multiple stresses have stimulated the need to improve efficiencies in current technologies and the search for alternatives for sustainable energy, securing reliable water supplies, treating waste and the generation of sustainable products. One of the positive features of this alarming situation is the increasing awareness, among microbiologists and non-experts alike, of the potential of microorganisms as providers of some of the remedies. For instance, issues of environmental quality and energy have impacted on the waste industry in such a way that it is now seen as a resource opportunity, and anaerobic digestion is considered to be the way forward for treatment and sustainable energy generation.
With such high hopes riding on microbial potential, it is reassuring to know that in recent decades we have invested significant funding in techniques for improving our knowledge of the microbial world. This includes increasingly elaborate and sophisticated molecular methods for detecting unculturable populations and rapid sequencing for improving genetic understanding. However, we are now at a critical point whereby we urgently need to translate this vast mountain of knowledge and microbial system insights into solutions to the demanding global challenges. Furthermore, if new or even established microbial technologies are going to have any significant impact on climate change or any other of our global problems, they must be effective at very large scales and, importantly, be controllable. Control of microbial potential en masse is in fact an engineering challenge.
Manipulation of microbial biomass on a large scale and in a controlled manner is by no means an easy task. However, mankind has had some notable successes in harnessing the potential of microbial processes, most notably the effective exploitation of microbial communities in municipal sewage systems and agriculture. Although such systems are very effective and have served us well for generations, they do not represent examples of where key insights from new genetic information have been exploited to develop novel environmental technologies for solving mankind's problems. This has yet to be achieved. Furthermore, the successes in terms of harnessing of microbial potential were achieved by Victorian engineers and agriculturalists, not microbiologists. The kind of opportunity today for such life-changing exploitation would be the identification of microbial gene sequences/strains that enable problematic CO2 to be converted directly by algae into methane or even clean fuels. However, even if we could isolate or generate such strains, the challenge of effective exploitation would have to be resolved, most probably by engineers. This is because, in order to harness light, algae have to grow on the surface of ponds, and such two-dimensional growth leads to self-shading. The solutions to such issues require effective collaboration with good engineers to develop algae holding systems or bioreactors, which enable maximal light exposure in three dimensions, reducing the footprint and improving yield. Other scenarios whereby cross-disciplinary collaboration will eventually lead to more effective microbial exploitation include hybrid approaches for treating trace levels of contaminants (such as hormones) in drinking water, by employing a combination of high affinity nanofilters, which concentrate the contaminant, making it bio-available to catabolic strains.
Encouragingly, there are signs that microbiologists are beginning to open their minds in terms of developing new physical technologies for exploiting cells en masse in a more controlled manner. These include novel methods for moving bacteria through soil and manipulating biofilm formation by electrokinetics (Andrews et al., 2006), stimulating biodegradation by manipulating bacterial genomes in situ employing ultrasound (Song et al., 2007), and the application of nanomaterials for in situ detection and stimulation of cell activity (Chien et al., 2008). Although it is early days, such novel approaches provide hope that microbial potential can be engineered and more reliably harnessed on a large scale. This will require a new generation of microbiologists who have more cross-disciplinary training, who embrace the opportunities that physical and engineered techniques offer, and who have the imagination to consider complementary approaches to the limited array of microbial cell manipulation methods we traditionally employ (e.g. pH, temperature, concentration). This is very good news, as the quicker we realize that sequencing more genomes is not the only option for resolving our problems, the quicker we can generate some effective solutions. Such critical advances will be accelerated by employing more systems biology approaches, linking information from the cell to the whole community, an approach which again will require multidisciplinary training, in this instance in computing science and mathematics.
The Victorians may not have realized, when they developed sewage and clean water systems, the extent to which they had harnessed the potential of microbial communities to solve their problems. However, what they achieved, and what we need to learn again, is that the solution to many of our current problems is going to come from effective engineering of microbial systems, as this is the only way to control them more effectively and provide the scale-up required to have significant global impacts.
Mining for new enzymes
Amir Aharoni, Department of Life Sciences and the National Institute for Biotechnology in the Negev (NIBN), Ben-Gurion University, Beer-Sheva, Israel. The size of the microbial world is beyond our imagination.
In last year's crystal ball article, Tom Curtis compared the size of the microbial world to the size of the universe, pointing out that the number of microorganisms in the world is billions of times larger than the number of stars in the sky (Curtis, 2007). There is little doubt that such unlimited microbial biodiversity holds great promise for enriching our repertoire of known enzymes, with many novel enzymes awaiting discovery.
The development of new sophisticated sequencing technologies, including the 454 pyrosequencer and Solexa technology, allows for the accumulation of extremely large amounts of sequence information. Such rapidly accumulating information will allow microbiologists, bioinformaticians and systems biologists to perform extensive surveys of microbial lineages, lead to the description of new metabolic pathways and allow for the identification of new regulatory mechanisms. These data also represent an invaluable resource in our quest for new enzymes. Still, the question arises of how we actually find these precious needles in the large haystack. Namely, what will be the best way to find new enzymes in the huge space that is the microbial world?
The discovery of new enzymes is of ultimate importance for fundamental knowledge regarding enzyme evolution, enzyme structure-function relationships, basic mechanisms of enzyme catalysis and even for the identification of novel protein folds. Novel enzymes could also serve as cornerstones in catalysing industrial chemical synthesis reactions, thus serving as 'clean' alternatives for producing chemicals by large-scale chemical synthesis. Currently, new highly robust enzymes are urgently needed as biocatalysts in many different industries, including the pharmaceutical and agricultural industries. The dream of significantly expanding the repertoire of known enzymes, both for research and industrial applications, is currently the subject of intensive research and will keep many scientists busy in years to come (Ferrer et al., 2007).
One of the main methods for the identification of novel enzymes is by virtual sequence homology screens. Using this approach, new enzymes are identified by comparing a vast number of sequences from genomic/metagenomic sources to the sequences of known enzymes (Ferrer et al., 2007). In this process, the immense repertoire of millions of proteins predicted from the microbial sequence database is compared with known enzymes to identify new homologues. One of the obvious limitations of such an approach is that it relies exclusively on existing gene annotations, which are difficult to predict and often prone to errors due to their reliance on machine-based techniques (Hallin et al., 2008). No doubt improved annotation will enable more accurate gene function predictions from microbial sequencing data. However, the main conceptual drawback of such an approach is that truly novel enzymes that are only remotely related to known existing enzymes will never be found. Genetic drift can significantly alter any obvious homology between functionally similar enzymes, thereby restricting the search to only related enzymes with similar sequences.
An unbiased manner to mine natural microbial biodiversity for new enzymes is by functional screening for the desired enzymatic activity. Such direct screening for new enzymes allows the identification of novel enzymes that are not related by sequence homology to any other known enzyme. A prerequisite for any functional screening procedure is the availability of high-quality genomic/metagenomic libraries and the use of an adequate host organism that is able to express the target genes to yield functional proteins. Currently, most functional screens rely on spatial separation between the different samples, either on agar plates (by direct screening of colonies) or using microtitre plates. To perform such screens for large libraries, access to heavy robotic systems, usually available only to large laboratories, is a prerequisite. Even with the aid of robotics, however, the number of genes that can be screened is on the order of ~10^4. Given the unlimited amount of diversity in the microbial world, this number will allow sampling of only a tiny fraction of the functional enzymes out there awaiting discovery.
To allow for a more efficient route for the functional mining of new enzymes, we must adopt high-throughput approaches that allow us to rapidly screen 10^6-10^8 samples. These numbers are clearly far beyond the reach of any currently available robotic system and rely on bulk selections for enzymatic activities rather than screening samples individually. In recent years, high-throughput screens/selections for enzymatic activities have been developed based on sophisticated approaches linking genotype to phenotype (Aharoni et al., 2005). Novel methodologies that allow us to maintain the linkage between the gene, the enzyme it encodes and the product it generates are based on cell-surface display technologies (using phage, bacteria or yeast) or on in vitro compartmentalization, using emulsion techniques (Aharoni et al., 2005). Different high-throughput screening (HTS) approaches have been developed for a number of enzyme families, including proteases, esterases, phosphotriesterases, peroxidases, DNA/RNA polymerases and glucosyltransferases. All of these approaches allow the screening of extremely large libraries by multiple cycles of enrichment using flow cytometry or selective immobilization of active clones. These methodologies, originally developed for directed evolution experiments, can be readily adopted for screening large genomic/metagenomic libraries. Still, despite the power of HTS assays, applications of these technologies for mining microbial libraries will require library pre-enrichment and the use of appropriate host organisms to increase our chances of identifying and isolating novel enzymes. I believe that in the near future we will see increasing efforts in applying powerful screening techniques to sample ever larger fractions of the microbial world for the discovery of new and exciting enzymes.
Food and gut microbes for thoughts
Fabrizio Arigoni and Harald Brüssow, Nestlé Research Centre, Lausanne, Switzerland. The ambulance returns to the hospital with a young motorcyclist who had lost control of his vehicle on an icy patch on the road. The surgeon observes heavy injuries on one leg: the bones are shattered, severe torsions were exerted on the knee and ankle, and a deep open wound goes down into the bone. A lengthy operation will be needed, and he knows that an infection with pathogenic staphylococci and streptococci can annihilate his efforts. The surgeon regards amputation and prosthetics in such a young man as a personal defeat. The surgeon will try his best. Surprisingly, he does not disinfect the wound, nor does he apply antibiotics to the operation wound. After a first cleaning of the wound, he sprays a mixture of commensal skin bacteria and starter bacteria used in meat fermentation (Staphylococcus carnosus and Lactobacillus plantarum) onto the wound. It turns out that these bacteria work quite well: they digest the dead tissue, and the production of metabolites like lactic acid and hydrogen peroxide fights off Staphylococcus aureus from the wound and assists in the build-up of granulation tissue. This scenario is of course science fiction, but not as much as you might think. US military surgeons observed in the 19th century that wounded soldiers left for one or two days on the battlefield fared better than rapidly rescued soldiers. Flies had the time to lay their eggs into the wounds. The maggots, i.e. the fly larvae that hatched from the eggs, make their living from living flesh. They digest the wound tissue with proteolytic enzymes. Notably, the saliva of the maggots also contains substances that have a strong anti-bacterial activity, including one against S. aureus. Small wonder that maggot therapy, despite the emotional disgust reaction it provokes, recently got FDA approval and is starting to become popular with wound surgeons. Antibiotics, the big success story of mid-20th century medicine, were not mentioned in our fiction. The magic bullets, as Paul Ehrlich called the agents of early chemotherapy, are losing their spell. We urgently need alternatives. In addition to antibiotics, which are the brainchild of the chemical industry, commensal bacteria might become the future tools of a biological industry that uses whole organisms as ecological competitors, not only against pathogens. Prominent scientists explore the impact of commensal bacteria on host physiology (obesity and related diseases, Turnbaugh et al., 2006). Others investigate the effects of commensals on the metabolism of food (soy isoflavones in the context of postmenopausal disturbances, Bolca and colleagues 2007) or drugs (to explain distinct pharmacokinetics, Sousa et al., 2008). There are already bacteria on the market with documented clinical studies showing activity against diarrhoea or inflammatory gut diseases, or with generalized immunostimulatory activity. On the analytical side, it is much more difficult to define the mode of action of these biologicals than for antibiotics, which target a specific biochemical reaction. However, the pathogen will also be confronted with the same dilemma when competing with the commensal, and there is some theoretical hope that the pathogen might less easily escape from control by commensals than from antibiotics.
The Iron Curtain crossing Europe during the Cold War period also prevented the flow of ideas across this ideological frontier. Biological research was not an exception. The former Soviet Union relied more on biological approaches against infectious diseases than on antibiotics. Antibiotic-resistant bifidobacteria and lactobacilli were sold in Russian pharmacies before probiotics became popular in the Western world. Even the above science fiction scenario was to a certain extent realized in the Soviet Union with a wound spray and wound dressing containing a cocktail of phages. The phages are directed against six major wound pathogens including S. aureus, Streptococcus pyogenes and Pseudomonas aeruginosa. We cannot, however, claim that the future is already now. The phage approach, despite its application to tens of thousands of soldiers of the Red Army over half a century, has never been tested in controlled clinical trials fulfilling current criteria of clinical science. The jury is thus still out with respect to their efficacy.
If you object that our crystal ball gazing goes into the past, you can easily modernize the commensal and phage approach biotechnologically. Lactobacilli have been constructed that express single chain antibodies with anti-rotavirus activity. Lactococci were modified for intestinal delivery of human interleukin 10, providing a therapeutic approach for inflammatory bowel disease (Steidler et al., 2000). The bacterial carrier offers not only a safe passage through hostile environments like the stomach; commensals derived from defined body sites might re-home to their natural site. Proteins of medical interest could thus be targeted to particular anatomical sites and expressed in situ (e.g., Bifidobacterium breve expressing cytosine deaminase for tumour-targeting enzyme/prodrug therapy, Hidaka et al., 2007). Phages are particularly attractive for biotechnologists as they are not only genetically more tractable than bacteria but also known to reach practically all body sites, including those which you cannot easily target with conventional drugs. Filamentous phages were extensively studied for foreign gene expression, and phage display technology was developed with them. Now take the following scenario: modify such a phage so that it expresses a cocaine-binding single chain antibody. Apply the recombinant phage to the nose of a drug addict. Taking the privileged neuronal pathway of the first cranial nerve, the recombinant phage travels directly into the brain of the subject. Here the phage binds the cocaine that reaches the brain of the drug addict and prevents the psychoactive processes, including the self-administration drive for cocaine. This story sounds even more like science fiction than the opening story. However, it is the content of a 2004 PNAS paper demonstrating these effects in mice (Carrera et al., 2004).
Looking beyond the time horizon of grant applications is a daring exercise. Microbiology, which currently enjoys and suffers a data flood derived from genomics, transcriptomics, proteomics, metabolomics and metagenomic analyses, needs long-term visions based on sound theoretical reasoning. Microbiologists cannot leave the field to computational scientists, expecting them to sort out new ideas from the data accumulated by the various '-omics' approaches. A relationship like that between theoretical and experimental physicists must be developed in microbiology, where new theoretical concepts are tested with available data sets, leading to new experiments testing refined theories. The challenge of the antibiotic crisis might therefore also be a healthy push for biological approaches, not only against infectious diseases.
References
Bolca, S., Possemiers, S., Herregat, A., Huybrechts, I., Heyerick, A., De Vriese, S., et al. (2007)

Conventional scientific wisdom dictates that evolution is a process that is sensitive to many unexpected events and influences, and is therefore essentially unpredictable. On the other hand, considering the bulk of recent knowledge about bacterial genetics and genomics, the population genetics and population biology of bacterial organisms, and their sub-cellular elements involved in horizontal gene transfer, we should eventually face the possibility of predicting bacterial evolution. The importance of such an approach is self-evident in the case of the evolution of antibiotic resistance and of bacterial-host interactions, including infections. Prediction of bacterial evolution could provide similar clues as weather prediction: higher possibilities of certainty in the closer and more local frames. Indeed, there is a local evolutionary biology based on local selective constraints that shapes the possible local trajectories, even though in our global world some of these locally originated trends might result in global influences. In the case of adaptive functions (such as antibiotic resistance genes in pathogenic bacteria), some of the elements whose knowledge is critical for predicting evolutionary trajectories are: (i) the origin and function of these genes in the chromosome of environmental bacterial organisms; (ii) their ability to be captured (mobilized) by different genetic platforms, and to enter particular mobile genetic elements; (iii) the ability of these mobile genetic elements to be selected and spread among bacterial populations; (iv) the probability of intra-host mutational variation and recombination; (v) the probability of re-combinatorial events among these and other mobile elements, with consequences for selectable properties and bacterial host-range; (vi) the original and resulting fitness of the bacterial clones in which the new functions are hosted, including their colonization power and epidemigenicity; (vii) the results of interactions of these bacterial hosts with the microbiotic environment in which they are inserted; and (viii) the selective events, such as the patterns of local antibiotic consumption or industrial pollution, and in general the structure of the environment that might influence the success of particular complex genetic configurations in which the adaptive genes are hosted (Baquero, 2004). Dealing simultaneously with all these sources of evolutionary variation is certainly a challenge. Such a type of complex structure has evolved along all biological hierarchical levels, creating specific 'Chinese-boxes' or 'Russian-dolls' patterns of stable (preferential) combinations, for instance encompassing bacterial species, phylogenetic subspecific groups, clones, plasmids, transposons, insertion sequences and genes encoding adaptive traits (Baquero, 2008). Assuming a relatively high frequency of combinatorial events, the existing trans-hierarchical combinations are probably the result of the local availability of the different elements (pieces) in particular locations (local biology), the local advantage provided by particular combinations, and also the biological cost in fitness of some of them. More research is needed to draw the interactive pattern of biological pieces in particular environments (grammar of affinities).
Such a complex frame, required for predicting evolutionary trajectories (Martínez et al., 2007), will have to be analysed (and integrated) by considering heuristic techniques for the understanding of multi-level selection. The application of new methods, based on covariance and contextual analysis, for instance using derivatives of Price's equation, should open an entirely new synthetic way of approaching the complexity of the living world.
The age of planet medicine
Because of the increasing, apparently unavoidable influence of the human species on the ecology of our planet Earth, and the necessary counteraction of a modified planet on human health and lifestyle, the entire planet should be considered as something requiring medical care. The future medicalization of humans requires the medicalization of the planet. The refined medical methodology should be escalated to the planetary dimension, starting by defining the signs and symptoms of illness, studying the pathogenesis and pathophysiology of planet illness, trying to evaluate their possibilities of invading other regions, establishing specific methods for diagnosis using all available technologies, from genetics to image analysis, and trying to make ecological and evolutionary predictions. This will be followed by applying specific interventions (not excluding surgery), treatments, or even isolation procedures and intensive care technology, and recommending or imposing prevention measures. As we examine the safety of drugs or foods for humans, we should do the same for anything influencing the planet. Of course microbiologists have a big role in starting this process, as medical and environmental microbiologists know each other and are progressively closer, and because they have the tradition of being involved in global problems such as international health. Microbiologists are also mastering one of the key issues with which to start Planet Medicine, namely microbial diversity. Changes in microbial diversity might constitute one of the bases of altered planet symptomatology (Baquero, 2003).

The biochemical and functional analysis of proteins with unknown functions can be a difficult task and needs endurance and the knowledge of sometimes 'old-fashioned' methods. Even more, without a sequenced genome, it takes a long time to identify the DNA sequence coding for the protein of interest. Remembering my time as a diploma student, working on the filamentous fungus Aspergillus nidulans, it took me weeks or months to identify the genomic sequence corresponding to a purified protein, because the genome sequence was not publicly available. One of the strategies in 'former days' was to purify the protein, blot it onto a membrane, perform N-terminal protein sequencing, construct degenerate primers, screen a genomic library, subclone fragments and sequence them. Nowadays, the protein can be fragmented by trypsin, surveyed by MALDI-TOF analyses and, due to the increasing number of finished genome projects, the gene is subsequently identified by an automatic database search against the genome of interest. This methodology speeds up the procedure by several weeks and allows a much higher throughput in the identification of gene functions. However, genome projects would not have been initiated just to ease the identification of the coding sequence of a single interesting protein. In fact, the sequencing of genomes has opened the era of 'omics', starting from 'gen'omics, continuing with 'transcript'omics and enhancing the informative value of 'prote'omics. Now it is possible to compare the genomes of different organisms (which genes are universal and which are specific), to look for changes in transcript levels (e.g. after applying an environmental stress) and to identify modifications of proteins and their abundance under defined conditions.
These massive amounts of data (especially from transcriptomics) create a 'Garden of Eden' for bioinformaticians, who can perform statistical analyses on the data sets to evaluate their significance and to develop new methods for hierarchical cluster analyses. The use of mutual information matrices in time response studies allows the identification of genes that can be grouped into regulons. These analyses aim to suggest new hypotheses on the connections between different pathways and their communication. The challenge for the biologists is to examine these hypotheses experimentally to verify the predictions.
This sounds like a 'beautiful new world', because 'life' becomes computable, and it works well for pathways in which all the genes and proteins involved are already known. However, these analyses and predictions are hampered by several problems, in particular: (i) the countless 'hypothetical proteins' and genes of 'unknown function', and (ii) the genes and proteins that were annotated by their identity to other, already characterized proteins. Analyses dealing with those data rapidly reach a dead end and lead to a sobering conclusion: bioinformatics cannot substitute for the wet lab.
Transcriptional analyses, the prediction of the number of genes within a genome, the comparison of genomes, the location of proteins and other features depend on a correct genome annotation. However, the functional annotation of proteins is very frequently misleading, with sometimes profound consequences. A single example, although there are many more that could be listed: the methylisocitrate lyase is a key enzyme of the fungal methylcitrate cycle and specifically cleaves (2R,3S)-2-methylisocitrate into succinate and pyruvate. The enzyme is highly specific for its natural substrate and does not accept isocitrate in its active site. Fungal isocitrate lyases, specific enzymes from the glyoxylate bypass, share about 35-50% identity with fungal methylisocitrate lyases, but are hardly active with methylisocitrate. By means of 'identity', most fungal methylisocitrate lyases are incorrectly annotated as isocitrate lyases. This is exemplified by the so-called isocitrate lyase 2 from Saccharomyces cerevisiae, which was formerly denoted as a 'non-functional' isocitrate lyase, because no activity was observed with isocitrate as a substrate (Heinisch et al., 1996). However, a re-characterization revealed significant methylisocitrate lyase activity (Luttik et al., 2000), which led to the correction of the annotation. Nevertheless, there are still several methylisocitrate lyases in fungal genome annotations that are denoted as isocitrate lyases but indeed represent methylisocitrate lyases. Although this might sound like a minor problem, the missing methylisocitrate lyase would lead to an incomplete methylcitrate cycle. Metabolic flux analyses, however, depend on the knowledge of all pathways present within a cell, and a missing pathway may lead to incorrect data evaluations. Genes of unknown function or hypothetical proteins cause even bigger problems. A strong upregulation or downregulation of gene expression suggests an importance under the applied condition. However, due to the complexity within a cell, it is difficult to predict whether the change is a direct consequence of the applied stress or results from a subsequent adaptation mechanism. Systems biology tries to answer this question by hierarchical clustering of the changes, which provides hints on additional genes that may be involved in the same pathway. However, without a detailed molecular biological and/or biochemical analysis of each gene, its true function remains unsolved. Therefore, 'omic' researchers need to continue to collect large data sets but should bear in mind that the characterization of genes in the laboratory is still an essential approach, in particular for those of unknown function, but also for those with only a predicted function.
Therefore, I propose giving the characterization of genes of unknown function a higher priority. Currently, it seems as if the value of such research is not appreciated, and getting financial support for this kind of research is difficult. A few years ago I published a paper dealing with the biochemical characterization of several enzymes and the impact of a gene deletion on cellular physiology (Brock and Buckel, 2004). It turned out to be quite difficult to get it published, because one reviewer commented: '. . . the manuscript mainly deals with methods used in the sixties and seventies and it is questionable, whether such investigations are still suitable for publication.' Of course it might be less 'sexy' for readers to be confronted with 'old methods', but the time may have come to go back to the roots and to elucidate the function of genes of unknown function to drive the advancing field of biological science, including computer-based technologies, forward.
There is not one path anymore. Twenty years ago, you worked at the clean bench, you isolated new microbes able to grow on agar plates, and then you isolated single genes coding for single enzymes involved in particular processes, you optimized them, and you wrote books or articles with conceptual and technical developments based on known biodiversity. If you were lucky, a small-to-medium company approached you and took the applicability of such a finding further. Today you can be a biogeochemist with bioinformatics knowledge who writes books or articles trying to reveal the mysteries of microbial and genetic adaptation and diversity. It all sounds so . . . uncomplicated, doesn't it? But, of course, THIS DIDN'T HAPPEN overnight. It has been especially in the past fifteen years that a confluence of factors, mainly technical developments, has resulted in some young people turning their backs on sequencing. But above all, there is the sense that biodiversity is at the centre of a vital scientific universe, with microbes as its capital: we know the communities and how diverse they are, but we are far from understanding the individual members and functions, and how each of them can be helpful, for example, to improve the human condition. It is like our human society: the government knows how many we are, but it does not know how each individual lives, and how many consortia (friends and family, to cite some) we constitute.
The co-founding editor of Microbial Biotechnology wrote to me and asked, wouldn't I want entrée into a crystal ball, to 'predict' the future and catch reader attention? Of course, there's more than a little romanticism in doing this; there is a discernible sense that, as a young researcher put it: 'those kinds of jobs, people predicting the future, exist, but just not for scientists'. However, I agreed, because I don't know why anyone who wants to be involved in scientific understanding would not want to turn their attention to future ideas. Following on from this, how do I know I have a correct vision over the next few years? It's hard to get an accurate gauge on how you're doing or what you will do, but you have to just take it on faith that 'there is a real biotech-market out there and an appreciation for what you're doing'.
I am a chemist-turned-enzymologist and now a microbiologist. Like all of you, I believe that microbes are important for the Earth System, playing a very important role in maintaining the well-being of our global environment. Despite the obvious importance of microbes, very little is known of their diversity, how many species are present in the environment, and what each individual species does, i.e., its ecological function. Until recently, there were no appropriate techniques available to answer these important questions. The vast majority of these organisms cannot be cultured in the laboratory and so are not amenable to study by the methods that have proven so successful with known microorganisms throughout the 20th century. It was only with the development of high-throughput technology to sequence DNA from the natural environment that information began to accumulate that demonstrated the exceptional diversity of microbes in Nature; in fact, most microbes are entirely novel and have not previously been described.
A non-exhaustive list of questions that should be addressed over the next few years includes: 'is everything everywhere?', 'do microorganisms exhibit biogeographical patterns of distribution?', 'is the relative abundance of a certain group of microorganisms necessarily linked to their importance in the community functioning?', 'which organisms are of pivotal importance in the community?', 'how diverse are metabolic pathways and networks within a given ecosystem?', 'how do microbes and protein-coding genes interact with each other to lead to the overall system function?', 'how many specific microbes are responsible for the metabolism of different substrates?', 'how do environmental stimuli impact ecosystem functioning and long-term system stability?', and finally 'how can we improve metagenomic technology to accommodate the needs of microbial biologists and enzymologists?'.
To answer such questions, it should be noted that conceptual advances in microbial science will rely not only on the availability of innovative sequencing platforms but also on sequence-independent tools for gaining insight into the functioning of microbial communities. I believe this is so because, over the last four years, at all conferences the very same question has been asked: 'how can I get information about hypothetical genes and functions?' The reasons are clear. First, every single-cell or environmental genomic project has added a huge number of putative genes, the function of which is often unknown and at best deduced from sequence comparison. Second, even the best annotations only create hypotheses about the functionality and substrate spectra of proteins, which require experimental testing by classical disciplines such as physiology and biochemistry. This highlights the difficulties of making sense of environmental sequence data: a significant proportion of the open reading frames could not be characterized because there were no similar sequences in the databases.
As we are primarily concerned with establishing the function of microorganisms in the environment and identifying new enzymes with biotechnologically relevant promiscuity, there is an urgent need for characterizing protein functions from environmental DNA or proteomes. Once a function has been identified, it can be mapped to metabolic pathways or proteins involved in a particular process (environmental or biotech-like), to determine the functional activity. To this end, I think it will be helpful to develop visualization tools capable of generating functional and dynamic knowledge, if possible nondestructively and in real time. That is, tools that identify the connected poles of activities (the so-called microbial reactomes) that shape the internal structure of an ecological niche, without large-scale DNA sequence analysis. This is a straightforward concept, since it constitutes the direct link from DNA and genes to proteins and functions, a major hurdle in both Systems Biology and Biotechnology studies. By doing this, it will be possible to unravel gene functions and add valuable information about how microorganisms adapt to changing environmental influences, and how biotech processes can be designed around new microbial functions that can be checked by visualizing the reactomes directly.
I think that methods for identifying microbial reactomes at a global scale are partially available, but we still need to solve many problems. For example, bioinformatics methods exist for isolating microbial reactomes in silico, but they rely on sequence data. Metatranscriptomics has the potential to describe how metabolic activities will change, but it still does not reflect the protein level and does not predict microbial functions (only upregulation or downregulation). Metaproteomics gives valuable information on how microbes respond to stress, but it is limited by low resolution and no direct functionality. Metabolomics has become very popular recently by combining new analytical and isotope analyses but, in the environmental context, its use is very meagre because of the difficulty in identifying and localizing metabolites. Finally, single-cell genomics is increasing in importance but, once again, the sequencing of such cells will predict functions based on sequence data or, in the best case, yield functional hypotheses about certain individual functions. If we ignore these problems, we will increasingly waste significant financial resources and staff effort in pursuit of the final goal: to reconstruct experimentally based reactomes in single cells or complex communities.
It does not take a crystal ball to see that the bottleneck in metagenomic technology, from both the microbial and the biotech point of view, will not only be the design of powerful assembler computer programs but rather the development of technologies that provide direct analysis of complex mixtures and entail detecting specific substrate-protein transformations among thousands of other endogenous metabolites and proteins, in order to get a clear picture of 'who is doing what'. Some methods do exist for isolating single transformations from the natural environment, but these are not relevant for reactome coverage as they are not universal. Clearly, existing methods for enzymatic activity detection based on changes in spectroscopic properties should give rise to high-throughput chips that can be used to provide information on the chemistry of reactions and the identity of the products formed. This type of information will be extremely useful for ascribing functions to genetic sequences from environmental samples, thus minimizing annotation mistakes and suggesting biotechnological potential. I believe in a future where any single genome or environmental sequence project is done in parallel with chip-based enzyme screening, so that annotations are experimentally documented at the time when the paper is written. Only by obtaining holistic information can holistic hypotheses about ecosystem characteristics be formulated. The question then is: 'how many reaction and substrate types should one have in a single high-throughput chip to cover the whole microbial metabolism?' As Shakespeare said in Hamlet, 'that is the question'! I think that this is the time to think about it, as progress in managing sequence information per se accelerates. All in all, it is clear that to access the microbes in their natural milieu and new enzymes from them, there is a strong need to elaborate a Systems Biology concept based on the combination of multiple strategies to understand the functioning of microbial communities as a whole, with metagenomic tools playing a pivotal role (Fig. 1).
Microbial genomics as pursuit of happiness
Michael Y. Galperin, NCBI, NLM, National Institutes of Health, Bethesda, MD 20894, USA. Over the past dozen years, the availability of complete genomes brought a profound change to all aspects of (micro)biological research. As noted by Ian Dunham (2000), during the 'dark ages' before the advent of genomics, our perception of the cell was akin to the medieval maps of Earth with large areas marked 'Here be dragons'. It is now quite common to describe enzymes, metabolic and signalling pathways that are missing in a given organism, something that can be done only with complete genome sequences. Although close to a third of genes in any newly sequenced genome have unknown functions (and the rest have only more or less reliable functional predictions that are not going to be experimentally tested any time soon), we can safely assume that these genes do not code for dragon skin or any other dragon body parts. We are even running out of candidate genes that could code for the vital force (aka the 'living soul', Hebrew: nephesh; Greek: psuche; French: élan vital) of the bacterial cell.
In contrast, microbial technology has remained relatively unaffected by genomics data. Most biotechnological processes still remain the same as they were 10-15 years ago. Genomic and metagenomic libraries are widely used to search for useful enzymes, but those searches rely more on activity than on genome sequence data. It is easy to predict that in the course of the next several years genomics will start making its way into everyday technology. The most immediate change will be the recruitment of an ever-expanding range of organisms for use in the bioremediation of environmental contaminants and in the production of various compounds, from biopharmaceuticals to biofuels.
Use of new organisms will result in dramatic progress in metabolic engineering. We already know that many biochemical reactions can be catalysed by two or more different enzyme variants. We also know that metabolic pathways in any given organism have evolved to optimize the organism's growth, not the overproduction of any particular metabolite that we might want it to produce. Incorporating foreign genes could be used to steer the metabolism in the needed direction, to remove inconvenient by-products, to relieve feedback inhibition, and to adapt the metabolic pathway to particular environmental conditions (temperature, pH, salinity). For example, the flux through the standard glycolytic pathway of Escherichia coli could be manipulated by introducing the ADP-dependent phosphofructokinase, metal-independent aldolase and/or bisphosphoglycerate-independent phosphoglycerate mutase from Methanococcus maripaludis or other mesophilic archaea. The abundance of alternative enzyme versions from exotic and poorly studied microorganisms will be complemented by the abundance of suitable hosts capable of deriving energy from solar light (photosynthetic bacteria, including cyanobacteria) or from cheap substrates, such as natural gas, methanol, sawdust and timber waste. This combination will bring metabolic engineering to an entirely new level, allowing the construction of customized organisms for every ecological niche that would consume industrial waste and convert it into useful products. Very soon, microbial metabolic engineering will be used to improve our food supply, solve the energy crisis and fight global warming. We already have completely sequenced genomes of several nitrogen-fixing endophytic bacteria that enable the rapid growth of sugarcane and various legumes (Krause et al., 2006; Fouts et al., 2008; Lee et al., 2008). Adapting such bacteria to corn, wheat, rice and soy will dramatically decrease the need for chemical fertilizers, allowing rapid growth of plant biomass and, as an added benefit, increased consumption of CO2. Plant foods derived this way could be enriched in essential amino acids and vitamins without carrying the stigma of 'Frankenfoods'. This will decrease the need for animal protein and provide yet another way to decrease the production of greenhouse gases.
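The logic of steering flux and removing by-products can be made concrete with a toy flux balance calculation. The Python sketch below is illustrative only: the two-metabolite network, its bounds and the forced by-product flux are hypothetical stand-ins, not a model of any real pathway.

# A minimal flux-balance sketch: maximize product secretion in a toy
# network, then "knock out" a by-product branch and re-solve.
# All reactions, bounds and the forced by-product flux are hypothetical.
import numpy as np
from scipy.optimize import linprog

# Metabolites: A, B.  Reactions (columns):
#   v0: uptake -> A    v1: A -> B    v2: B -> product (export)
#   v3: A -> by-product (export)
S = np.array([
    [1, -1,  0, -1],   # mass balance for A
    [0,  1, -1,  0],   # mass balance for B
])
c = np.array([0, 0, -1, 0])   # linprog minimizes, so -v2 maximizes product flux

def solve(bounds):
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    return res.x

wild_type = solve([(0, 10), (0, 10), (0, 10), (2, 10)])  # by-product branch forced on
knockout  = solve([(0, 10), (0, 10), (0, 10), (0, 0)])   # by-product branch deleted

print("product flux, wild type:", wild_type[2])   # 8.0
print("product flux, knockout:", knockout[2])     # 10.0

Removing the hypothetical by-product branch redirects the full substrate uptake into the product, which is the essence of the flux-steering argument above.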
The next step will be using bacteria to improve human bodies. We already affect human gut microflora with our foods and change it by consuming yogurts, beer, brie, kimchi, and other products that contain live microbial cultures. Enriching yogurts with vitamin-producing bacteria will go a long way towards eliminating various vitamin deficiencies. The next step will be introducing engineered bacteria into human tissues and even human cells. If the plans to use specially constructed clostridia for curing (or at least slowing down) cancer (Wei et al., 2008) bring even modest success, they will pave the way for further gene therapy. If aphids, nematodes and fruit flies can afford to carry intracellular bacteria that supply them with nutrients (Wernegreen, 2004), we can surely try the same thing in order to cure hereditary diseases. A phenylalanine-dependent symbiotic bacterium could be used to improve the life of patients with phenylketonuria. Lipid-degrading bacteria (mycobacteria?) might be used to clear atherosclerotic plaques, tartrate-metabolizing bacteria to dissolve kidney stones, and lactate-consuming bacteria to relieve muscle fatigue. Microbes could also be engineered to maintain a healthy balance of neurotransmitters, replacing the morning cup of latte and secreting just enough serotonin derivatives to keep the host constantly happy.
Having fought bacteria in the last century, we have nearly exhausted the repertoire of available antibiotics and will have to learn to co-exist with the bacterial world. Knowledge of bacterial genomics should allow us to separate friend from foe and harness them both for our own use.
Vaccination against both infectious and non-infectious (e.g. cancer, autoimmunity) diseases is gaining considerable interest. However, despite major advances in the fields of microbial pathogenesis, immunology and vaccinology, there are still many diseases for which vaccines are not available, or for which the available vaccines are inadequate in terms of efficacy and/or safety. This is particularly true for chronic or persisting infections. In the post-genomic era, all potential antigens that come into consideration for inclusion in a vaccine formulation are well known. This knowledge has been exploited in reverse vaccinology-driven approaches which, in combination with comparative genomics, have enabled the selection of the most highly conserved and promising antigens for vaccine design. However, the advent of new vaccines against diseases such as AIDS, chronic hepatitis or malaria, as well as improved vaccines against 'old diseases' such as tuberculosis, is well overdue. It is obvious that extremely optimistic end-points for vaccination against these agents, such as the stimulation of sterilizing immunity, should be replaced by more realistic goals, like the stimulation of immune responses able to delay disease onset or progression. However, this is not the key issue. Where, then, lie the most critical roadblocks preventing the development of effective immune interventions against the agents causing these diseases?
The first roadblock is that our knowledge of the effector mechanisms responsible for the clearance of these pathogens is by and large fragmentary. In-depth studies of natural infections represent the best strategy for accessing this knowledge. There are individuals who are refractory to infection (e.g. multiply exposed uninfected individuals for HIV) or develop slowly progressing forms of disease (e.g. long-term non-progressors for HIV, chronically infected patients without liver cirrhosis for hepatitis). Well-defined patient cohorts with different forms of disease were established in recent years, and these are being characterized in terms of their genetic, microbiological and immunological profiles. This is expected to lead to biomarkers and molecular/phenotypic signatures associated with better prognosis, as well as to the identification of the effector mechanisms responsible for microbial clearance. This knowledge base will considerably facilitate and accelerate rational vaccine design.
Let us consider for an instant an ideal scenario in which the first roadblock has been overcome. We know exactly which antigens need to be included in the formulation and which kind of effector mechanism should be stimulated to confer protection. Considering the present state of the art, a subunit vaccine will probably be the strategy of choice, as the replacement of whole-cell vaccines or semi-crude antigen preparations by well-defined antigens has dramatically improved their safety profile. At this point we will face the second roadblock, namely the availability of tools enabling the stimulation of predictable immune responses of adequate quality following vaccination. In fact, highly purified antigens are often less immunogenic than more complex preparations, making their co-administration with potent adjuvants essential. These compounds also have immune-modulatory properties, which allow fine-tuning of the responses elicited. This is a critical issue, since the stimulation of a wrong response pattern may even lead to more severe forms of disease. However, despite the fact that several adjuvants are under development, the sad truth is that only a handful of them have been licensed for human use (i.e. Alum, MF59 and MPL; Tagliabue and Rappuoli, 2008). The situation is far worse for compounds exhibiting activity when administered by the mucosal route, of which only a few candidates are in the development pipeline (Rharbaoui and Guzmán, 2005; Ebensen and Guzmán, 2008). Hence, there is a critical need for novel adjuvants, particularly those exerting their biological activities when administered by the mucosal route. This is very important, as most pathogens enter the host via the mucosal tissues. Thus, the stimulation of an effective local response would also make it possible to block infectious agents at their portal of entry, thereby reducing their capacity to colonize and to be further transmitted to other susceptible hosts. It is expected that in the coming years we will see a new generation of well-defined and highly efficient adjuvants coming onto the market. This will facilitate the development of a new generation of more effective vaccines, as the availability of adjuvants exhibiting different biological properties will allow efficient fine-tuning of the immune responses elicited according to specific clinical needs.
The third roadblock is related to the need to bridge the translational gap, as well as to current stringent regulations for vaccine testing (e.g. the requirement of GMP-grade material for phase I studies), which have in turn led to an explosive increase in clinical development costs. To accelerate translation, novel strategies are needed for rapid and cost-efficient screening, selection and prioritization of the most promising candidates. For certain pathogens the most widely accepted animal models are primates (e.g. HIV, HCV). However, one of the most significant issues associated with these animal models is that they do not completely reproduce the pathophysiology of human diseases. Reproducibility is also an issue, as they suffer greatly from the small number of animals that can be studied at any time and from inter-individual variability, which limits their statistical power. Furthermore, primate models are often too expensive and fraught with ethical constraints. Thus, none of the existing models adequately addresses the needs of the vaccine developer. Hence, there is a clear need for cost-efficient small animal models to address these limitations.
In this context, mice are ideally suited to performing the initial validation of vaccine candidates in a cost-efficient manner. However, the results obtained in mouse-based systems cannot always be extrapolated to humans. A very promising alternative strategy consists of engrafting components of the human immune system into immunocompromised mice (Shultz et al., 2007; Legrand et al., 2008). When these animals are engrafted with liver- or cord blood-derived stem cells, proper development of NK cells, B cells, dendritic cells and different T-cell subsets (e.g. CD4+, CD8+, Treg) is obtained. While still subject to some limitations, these human/mouse chimeras are permissive to infection by different infectious agents, including HIV (Baenziger et al., 2006). However, there is still margin for further development, such as the improvement of adaptive cellular responses. It is also critical to ensure that they fulfil the key requirements of good animal models, namely reproducibility and adequate throughput, thorough validation with known human vaccines, and availability at an acceptable cost relative to their benefit. Nevertheless, these aspects will be fully addressed in the coming years, thereby enabling their routine application for preclinical vaccine validation. It is expected that the use of these advanced animal models for vaccine testing will result in increased predictability of performance in humans, thereby enabling a rapid and efficient selection of the best candidates to be transferred into the clinical development pipeline.
Consider a proposal
Just as genetics indelibly shaped our understanding of solitary bacterial existence, so it can transform our understanding of bacteria as they engage in community life. This will require a new application of mutant analyses in a community context. Let's call this 'metagenetics', to highlight the concept of an analysis that transcends individuals ('meta' in Greek means 'transcendent'). Metagenetics provides a parallel with metagenomics: genetics and genomics deal with single organisms, while metagenetics and metagenomics both apply to the analysis of a multi-genome unit, or community.
Consider the past
The glory of the last 50 years of microbiology is founded, in large part, on genetic analysis. Every aspect of cellular bacterial life has been cracked by the use of mutants. Metabolic pathways have been defined by analyses of mutants blocked in various biochemical steps. Macromolecular synthesis, membrane function and chemotaxis yielded to the chisel of genetics. Later, the study of protein structure and function emerged based on precise, single amino acid changes generated by point mutations.
The mark of microbial genetics extends beyond understanding the workings of the bacterial cell. The foundation for microbial evolution was provided by the classic Luria-Delbrück fluctuation test and the Lederbergs' replica plating experiment. Both provided irrefutable evidence that bacterial evolution depends on pre-existing mutations that are independent of selection pressure. The impact of these landmark experiments was felt throughout biology because they presented potent fortification for the Darwinian concept of evolution: that pre-existing variation in populations is acted upon by natural selection.
In the early days of bacterial genetics, classical crosses and complementation analysis were accomplished through conjugation, transformation, and transduction, fostering associations between genes, functions, and ultimately proteins. Genetics lost some of its abstract nature and advanced to a new level when it became possible to physically isolate genes by cloning. The advent of DNA sequencing generated a new depth of understanding of the nature of mutations, making mutant analysis more powerful than ever. The satisfying level of precision provided by molecular genetic analysis has created a gold standard of proof in modern microbiology. This, in turn, has generated a two-class distinction of sub-fields of microbiology. Sadly, ecology has been largely relegated to the less desirable class by many of those who study solitary bacterial life because they find the types of evidence and structure of arguments in ecological study to lack the precision to which they are accustomed. All of that can change.
Consider the future
Metagenetics will dissect ecological questions at a new level of precision. But unlike the early days of bacterial genetics, the new field of metagenetics will be buttressed by vast databases of sequence from metagenomic analysis. Metagenomics will generate hypotheses to be tested with genetics as well as the sequence information on which to base mutant construction.
Metagenetics will need to embrace both random mutagenesis, which is a way of giving voice to the bacteria, and directed mutant analysis, which is driven (and limited) by the imagination of the investigator. In the first, we will mutagenize a pure culture and then screen the random mutants for a community phenotype. In the second, we will create a defined mutant in a gene of interest and determine its phenotype in the community context. We might, for example, screen a randomly mutagenized population of bacteria for the loss of the ability to invade a community, or we might construct a mutant lacking flagella and determine whether it is affected in the ability to invade. Both approaches will be facilitated by available genome maps, sequence information and extensive '-omics' (transcriptomics, proteomics and metabolomics) data.
Metagenetics on culturable community members may not seem all that different from classical genetics except in the nature of the phenotype tested. But the bold advance will issue from the development of genetic tools to study unculturable members. For example, imagine the power of knocking out all homologues of a particular gene in all members of the community. What would happen if we knocked out all of the polysaccharide biosynthesis genes in a community? Alternatively, what if we knocked them out in only one family of bacteria? Highly specific conjugal vectors and sequence-based homing devices can make these approaches a reality. Metagenomics will furnish the raw material for such studies: sequence information from the unculturable members of the community that will form the basis for generating hypotheses and genetic devices to make targeted changes.
Metagenetics, in concert with the other tools of the ecologist, including statistics, modelling, microscopy, radioactive labels, chemical analysis and meta-omics, will elevate the level of rigor and precision with which we can approach community-level microbial ecology. Just as early bacterial genetics provided critical data beyond microbiology, to the entire field of evolution, microbial metagenetics may advance the entire field of ecology by answering questions at a level and with tools that are not possible in macroecological systems. The lessons from 50 years of bacterial genetics are powerful. Perhaps 50 years from now we will be reflecting on the advancement of ecology by a parallel metagenetic approach.
Future shock from the microbe electric
Derek R. Lovley, Department of Microbiology, University of Massachusetts, Amherst, MA 01003, USA. How can the future not look bright when you are dealing with a microbial process that can power a light bulb? The study of microbial fuel cells and, more generally, microbe-electrode interactions is rapidly amping up, not only in power production, but also in the number of investigators and areas of study.
The most intense focus has been on wastewater treatment and this is likely to continue for some time. It was probably safe to say 5 years ago that any compound that microorganisms can degrade could be converted to electricity in a microbial fuel cell, but if there ever was any doubt, this point has been proven over and over again in a plethora of recent studies. It is clear from this work that a major limitation in converting complex wastes to electricity is the initial microbial attack on the larger, difficult-to-access molecules, just as it is in any other treatment option. It may well be that the intensive focus on the degradation of complex organic matter in other bioenergy fields will soon make a contribution here.
However, there are other issues specific to microbial fuel cell technology. At present the rate at which even simple organic compounds can be converted to electricity is much too slow for practical wastewater treatment. For example, coulombic efficiency (i.e. the percentage of electrons available in the organic substrate that are recovered as current) is often diminished by methane production, indicating that even relatively slow-growing methanogens are competing with the current-producing microorganisms. This is despite the fact that electron transfer to oxygen, the ultimate electron acceptor in microbial fuel cells, is much more thermodynamically favourable than methane production. Some contend that the limitations to current production in waste treatment can be solved with improved engineering of microbial fuel cell design and that there is little need to focus on the microbiology of microbial fuel cells for waste treatment because, as better fuel cell designs are developed, the appropriate microorganisms will naturally colonize the systems and produce more power. That may be, but it also seems likely that, going forward, the mechanisms of microbe-electrode interactions will become better understood and this could significantly inform optimal microbial fuel cell design.
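For readers unfamiliar with the coulombic efficiency calculation, a minimal Python sketch follows, assuming acetate as the substrate (eight electrons per mole on complete oxidation); the current trace and the amount of acetate consumed are hypothetical numbers.

# A minimal sketch of computing coulombic efficiency from a current trace.
# The steady 10 mA current and the acetate consumption are hypothetical.
import numpy as np

F = 96485.0                       # Faraday constant, C per mol electrons

t = np.linspace(0, 86400, 1000)   # one day of operation, in seconds
current = np.full_like(t, 0.010)  # steady 10 mA

charge = np.trapz(current, t)     # coulombs recovered at the anode
electrons_recovered = charge / F  # mol e-

acetate_consumed = 0.0015                    # mol acetate oxidized (hypothetical)
electrons_available = 8 * acetate_consumed   # 8 e- per mol acetate

ce = electrons_recovered / electrons_available
print(f"coulombic efficiency: {ce:.1%}")     # ~74.6% for these numbers

The shortfall from 100% in such a calculation is exactly the electron loss to competing sinks such as methanogenesis discussed above.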
Furthermore, it is likely that we will find that it is possible to greatly increase the current-producing capabilities of microorganisms. This is because there has been no previous evolutionary pressure for microorganisms to optimally produce current. Many of the microorganisms that function best in microbial fuel cells are dissimilatory Fe(III)-reducing microorganisms, which have evolved to specialize in extracellular electron transfer to insoluble, extracellular electron acceptors. However, microorganisms reducing Fe(III) in sedimentary environments are typically in direct contact with the Fe(III). In contrast, when microorganisms are producing high current densities in microbial fuel cells, only a small fraction of the microorganisms in the anode biofilm are in direct contact with the anode surface. Most must transfer electrons over substantial distances through the biofilm. It is not clear that there has ever been substantial selective pressure on microorganisms for such long-range electron transfer. Thus, there should be ample room for improvement.
Another unnatural demand that we make of microorganisms when they are asked to generate high current densities is the requirement to metabolize organic compounds very rapidly. The natural habitat of most of the microorganisms that have been shown to be most effective in current production is the subsurface or aquatic sediments. These are rather low-energy environments in which there has probably not been much selective pressure for rapid growth and metabolism. Other challenges to anode-reducing microorganisms include the necessity to tolerate the low pH that can develop within the anode biofilm. This results from the fact that protons as well as electrons are released during organic matter oxidation.
Strains that can better respond to these unusual demands of high density current production will certainly be found or developed. Understanding what characteristics of these strains confer enhanced current-production capability may aid in fuel cell design and these strains may be beneficial in some applications. Strain improvement may include attempts to select better strains from complex microbial communities as well as genetic engineering and adaptive evolution approaches. Some degree of strain selection has taken place in previous studies in which conditions conducive to high current densities have been established in microbial fuel cells and the systems have been inoculated with sewage or some other complex community. The surprising result from a number of laboratories is that such conditions frequently select for Geobacter sulfurreducens, or closely related strains. Pure cultures of G. sulfurreducens can produce current densities as high as any known pure or mixed culture. We have had moderate success in genetically engineering strains of G. sulfurreducens for higher rates of respiration and extracellular electron transfer, guided by a genome-scale in silico metabolic model. However, electron transfer to electrodes appears to be a complex process, and may not be well enough understood to rationally engineer. Adaptive evolution has proven to be a much more promising approach for strain development and major enhancements in power production with this tactic are forthcoming.
As with any optimization procedure, once one bottleneck is relieved another emerges. As better current-producing strains of G. sulfurreducens have been developed, it has been necessary to use exceedingly small anodes relative to cathode area in order to keep reactions at the cathode from limiting rates of electron transfer at the anode. The ability of microorganisms to accept electrons from a cathode to support anaerobic respiration has already been demonstrated, and studies in a number of laboratories have found that aerobic cathodes selectively enrich for specific microorganisms that might promote faster rates of electron transfer from the cathode to oxygen. This is likely to be an area of intense interest in the near future. It will probably be possible to develop microbes with superior capabilities for accepting electrons from cathodes for the reduction of oxygen with approaches similar to those discussed above for improving the current-producing capabilities of anode-reducing microorganisms.
What if engineering and microbiology do not overcome the barriers to making microbial fuel cell technology suitable for wastewater treatment? There are many other potential applications for microbe-electrode technology. One near-term application is harvesting electricity from waste organic matter or vegetation to power electronics in remote locations. Sediment microbial fuel cells that power monitoring devices at the bottom of the ocean are already feasible. Self-feeding robots that run on microbial fuel cells have also been proven in prototype. There are many other applications in which relatively low power requirements can probably be met with microbial fuel cells. For example, there are already several organizations planning to distribute in developing countries inexpensive microbial fuel cells that run on wastes and can provide lighting or charge electronic devices. A number of research teams are working on developing implanted medical devices that use blood sugar as a fuel. It seems likely that many other applications that require low levels of electrical current but for which it is difficult to install or continually replace traditional batteries could be helped with microbial fuel cell technology. Future applications may also include microbial transistors, circuits and electronic computing devices, among others.
Environmental technology is likely to be another emerging field for microbe-electrode interaction applications. Anodes are attractive electron acceptors for stimulating the degradation of contaminants in the subsurface because they can be emplaced as a permanent, high-potential electron acceptor and can adsorb and concentrate many contaminants, co-localizing pollutants and the electron acceptor. Current produced from electrodes deployed in anoxic subsurface environments is likely to prove a good proxy for estimating rates of microbial metabolism in those environments. Cathodic reactions are also likely to see more application in bioremediation and waste treatment. The potential for stimulating microbial reduction of nitrate, U(VI), and chlorinated contaminants with electrodes serving as the electron donor has already been demonstrated, and field applications of these technologies are on the horizon.
One of the most exciting areas of future research is almost certain to be the production of specialty chemicals by cathodic microorganisms accepting electrons from an electrode. Fixation of carbon dioxide and its conversion into useful organic commodities, powered by electrons supplied directly from an electrode, may prove to be one of the most lucrative applications of microbe-electrode interactions in the near future. This process is clearly thermodynamically feasible, and the ability of microorganisms to accept electrons for anaerobic respiration has already been demonstrated. It just remains to be seen whether the appropriate microorganisms for this application exist in nature or whether extensive metabolic engineering will be required.
In summary, it would be shocking if the continued increased intensity of study on microbe-electrode interactions did not shed light on additional applications as well as illuminate more of the basic mechanisms by which microorganisms electronically interact with electrodes. The future of this biotechnology looks very bright indeed.
Extensive referencing to recent research on microbe-electrode interactions can be found at the following websites: http://www.microbialfuelcell.org and http://www.geobacter.org
Michael J. McInerney, Department of Botany and Microbiology, University of Oklahoma, 770 Van Vleet Oval, Norman, OK 73019, USA.
The microbial world contains a vast and untapped reservoir of genetic diversity that could be used for the production of novel, biologically active molecules or to develop new strategies to manipulate the activities of microbial consortia. However, we need to understand how microbial species interact and communicate with each other in order to manipulate their interactions. With pyrosequencing and other technological breakthroughs, we are beginning to understand 'who is there' and 'what they are capable of doing'. What we need to do is to get better at understanding 'what they are doing' to exploit fully the diversity of the microbial world. We have made great strides in computational approaches that allow us to assign putative functions to many of the genes present in microbial genomes; but, even with these tools, many coding regions lack functional assignments. Many genomes contain 'cryptic' or 'orphan' gene clusters with the potential to produce novel and structurally complex chemicals (Challis, 2008;Fischbach et al., 2008). These chemicals do not have 'housekeeping' functions, but probably function as signals mediating interactions among microorganisms and between microorganisms and eukaryotes (Straight et al., 2006;Dietrich et al., 2008;Fischbach et al., 2008). These studies suggest that microbes are carrying on a conversation with each other. We must listen to and translate this conversation to understand how microbial species interact with each other.
Once we understand what microbes are saying and why, then we can manipulate the conversations.
We should not be surprised that microbes converse, because we know that microbial species work in teams or guilds. These interspecies interactions must be coordinated, which means that there must be specific signals. In the future, we will have a variety of high-throughput tools to identify how microbial populations respond to each other and what molecules are used. Such approaches may be analogous to microarray technologies such as GeoChip. Computational approaches will be available to identify the key regulatory components that receive, translate and transmit the signal to action. As the regulatory networks are defined, we will be able to identify the input chemical stimulus, its receptor and how the signal is transmitted within the cell. Bioengineers can then construct multi-component systems from libraries of standard interchangeable parts engineered from the components identified during the ecological screening process. Professor Endy and his colleagues (Canton et al., 2008) have developed BioBrick (http://partsregistry.org/), a standard biological parts inventory, which includes protein coding sequences and regulatory elements for gene expression and signalling, and have defined quantitative measures of performance that will allow bioengineers to use these parts reliably. Future efforts will certainly expand the parts and chassis (organisms) available for manipulation. By understanding how biosynthetic genes change, move about and recombine, we can understand the processes that generate small-molecule diversity.
Once we understand the microbial conversations, we will have the ability to manipulate the response of a specific microbe or of microbial communities. We will be able to identify signals to turn on gene systems to produce new biologically active molecules that could be used as antibiotics or anticancer drugs. Additionally, we may identify signals to turn on specific functions in complex communities. Understanding why microorganisms make biosurfactants may provide an approach to turn biosurfactant production on in oil wells to enhance oil recovery (Youssef et al., 2007). Alternatively, we should be able to disrupt the microbial conversation to prevent unwanted interactions involved in disease or corrosion.
J. Colin Murrell, Department of Biological Sciences, University of Warwick, Coventry, UK. Thomas J. Smith, Biomedical Research Centre, Sheffield Hallam University, Sheffield, UK.
The isolation of commercially valuable bacteria from the environment has been a cornerstone of microbial biotechnology for many decades. The environment has yielded organisms capable of producing valuable fermentation products such as alcohols and amino acids, strains able to produce diverse pharmacologically active secondary metabolites, as well as microorganisms that can effect highly selective chemical transformations and convert recalcitrant pollutants into non-toxic metabolites. As Microbial Biotechnology (sister journal to the now well-established Environmental Microbiology) celebrates its first birthday, we would like to speculate upon the way in which the ongoing revolution in the characterization of uncultured environmental microorganisms may facilitate discovery of valuable microbial enzymes and pathways that are currently beyond reach. The great majority of microorganisms in natural environments have never been obtained in pure culture and represent an important source of microbial diversity that biotechnologists cannot ignore. Cloning of environmental DNA (metagenomics) has already emerged as a rich source of new biocatalysts for production of bulk and high-value chemicals (reviewed in Steele et al., 2009). Metagenomics relies on the cloning of genes from complex samples that contain DNA from all manner of organisms, some relevant to the biotechnologist but most not. Hence, methodology for specifically increasing the abundance of functional genes of interest (genes encoding key target enzymes) would be of great value in increasing the proportion of the relevant biodiversity that could be accessed. In the sphere of environmental microbiology, stable isotope probing (SIP) techniques employ enrichment cultures containing a 13C-labelled growth substrate, in which the DNA of organisms growing on the labelled substrate becomes enriched in the heavy isotope and can be separated from bulk environmental DNA by means of CsCl density gradient centrifugation. Originally developed to identify organisms actively metabolizing one-carbon compounds via analysis of 16S rRNA and functional genes, SIP has since been applied to characterize microorganisms utilizing a wide range of microbiological growth substrates (reviewed in Dumont and Murrell, 2005; Friedrich, 2006). In principle, SIP is ideal for increasing the abundance of target genes for subsequent direct cloning or amplification by means of PCR. A report from Daniel and co-workers (Schwarz et al., 2006) was the first to indicate the feasibility of such applications. Daniel and co-workers focused on glycerol dehydratase, a key enzyme in the biosynthesis of the valuable product propane-1,3-diol. SIP with 13C3-glycerol led to an increase of up to 3.8-fold in the frequency of recovery of glycerol dehydratase genes per megabase of cloned environmental DNA, compared with parallel metagenomics experiments in which SIP enrichment was not used. While the increase in sensitivity that SIP yielded in this pilot study was modest, we predict that through careful manipulation of enrichment conditions, SIP and related techniques can be developed into a key tool in gene mining. DNA-SIP would give access to valuable functional genes that are present at very low abundance in inhospitable extreme environments or at low levels in complex ecosystems, and which are below the threshold of detection of current technology.
This would generate a pool of potentially novel target genes that could then be screened in expression libraries or used in gene shuffling experiments in order to generate novel biocatalysts. In addition, heavy DNA in the SIP experiments will become enriched in the genomes of target organisms, thus allowing focussed or targeted metagenomics, and isolation of potentially novel catabolic (or biosynthetic) gene clusters of biotechnological relevance.
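To put the reported fold-enrichment in perspective, here is a toy screening calculation in Python; the baseline hit frequency is a hypothetical stand-in, and only the ~3.8-fold enrichment figure is taken from the study cited above.

# A toy calculation of library screening effort with and without SIP,
# assuming a hypothetical baseline of 0.5 target-gene hits per 100 Mb of
# cloned environmental DNA and the ~3.8-fold SIP enrichment cited above.
baseline_hits_per_mb = 0.5 / 100   # hypothetical baseline hit frequency
enrichment = 3.8                   # fold-enrichment from the pilot study

for label, rate in [("without SIP", baseline_hits_per_mb),
                    ("with SIP", baseline_hits_per_mb * enrichment)]:
    mb_needed = 1.0 / rate         # Mb of library to expect one hit
    print(f"{label}: ~{mb_needed:.0f} Mb of cloned DNA per expected hit")
# without SIP: ~200 Mb; with SIP: ~53 Mb

Even a modest fold-enrichment thus translates into a several-fold reduction in the amount of library that must be constructed and screened per hit.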
Building bugs
Sven Panke, Department for Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland. Biotechnology is one field that has profited immensely from the advent of '-omics' technologies. For example, fluxomics brought a better understanding of the flow of metabolites through the cellular metabolic network, transcriptomics and proteomics helped us appreciate the multiple consequences of gene overexpression, and genomics allowed us to catalogue mutations in high-performer strains and transfer them to new strains. But, while our understanding of the system-wide effects of our current rather subtle strain modifications continuously grows, this change in scope does not yet extend to the manipulation of bio(techno)logical systems. Broadly speaking, we still paste a few genes into a plasmid, insert it into our pet strain, and hope for the best. However, if it is system-wide consequences that we need to take into consideration, it is most probably system-wide action that we need to take to design truly effective biotechnological systems. I argue that a major line of research in the coming years and decades will deal with enabling biological system engineering and providing the corresponding arsenal of tools. I predict enabling on three levels: technical, theoretical and organizational.
The first step in this transformation to system-level manipulation is easy to spot: de novo DNA synthesis. The technology as such is not new, but it has now become so cheap that it is about to make the crucial step out of the industrial laboratories into the world of academic research as a routine tool. Moreover, the success of de novo DNA synthesis by assembling entire genes from oligonucleotides has re-ignited the search for novel DNA synthesis technologies that might in the future help to bring costs down further and make directly accessible DNA sequences longer. Currently, the price of a bp in a synthesized gene halves every 2-3 years, and it is only a question of time before the full force of this technology drives the art of cloning out of our laboratories.
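To illustrate what such a halving time implies, here is a trivial projection in Python; the starting price and the 2.5-year halving time are hypothetical placeholders within the 2-3 year range quoted above.

# An illustrative projection of gene-synthesis cost, assuming
# (hypothetically) a starting price of $0.50 per bp and a halving
# time of 2.5 years; neither number is a quoted market price.
def price_per_bp(years_from_now, start=0.50, halving_time=2.5):
    return start * 0.5 ** (years_from_now / halving_time)

for y in (0, 5, 10):
    print(f"year {y:2d}: ${price_per_bp(y):.4f} per bp")
# year  0: $0.5000 per bp
# year  5: $0.1250 per bp
# year 10: $0.0313 per bp

Under these assumptions the cost drops by an order of magnitude within a decade, which is the economic pressure behind the prediction that cloning will be displaced.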
Of course, the next step will then be to go from single genes to novel entire pathways or even novel genomic sections or entire genomes. The required methods are not yet routinely available, but improvements in DNA synthesis technology and the recruitment of the proper biological tools - such as homologous DNA recombination to assemble DNA fragments in vivo and in vitro - suggest that the corresponding problems will be solved rather quickly.
But where a slow assembly process used to allow the step-by-step verification of underlying scientific assumptions, a 50 kbp sequence that will be delivered 4 weeks from now no longer allows such luxury. It will become exceedingly important (i) to integrate all available information into the sequence already at the start; (ii) to use predictive tools to substitute for the missing information; (iii) to develop the corresponding experimental technology to obtain the remaining indispensable data rapidly; and (iv) to make sure that the host that is to receive the DNA sequence can read out the information in a predictable and reliable fashion.
The first point is at its heart an organizational challenge. The design-relevant information for one promoter, one ribosome binding site, one RNase site or one transcriptional terminator sequence might be available in the literature, but locating and exploiting it is currently an achievement in itself, and it is even more so for the 50 genes on the ordered DNA sequence. To make it available for engineering - that is, a rational selection of a standard element based on quantitative criteria - this information needs to be made available centrally, as is the goal of the Registry of Standard Biological Parts (http://partsregistry.org). Of course, the 'standard' part would not only encompass requirements for the presentation and completeness of data and information, but would extend to the data's generation, preferably as a part of the operations of such a facility.
Reliable standardization will also be of crucial importance for the reliable use of computational tools to predict the behaviour of the functions encoded on our artificial DNA sequence (Marchisio and Stelling, 2008). But even more, just as CAD technology helps design anything from houses to mechanical engineering artifacts by hiding a huge body of knowledge behind the interface, a biotech-CAD will help to recruit the system design knowledge that is available from, for example, electrical engineering into a best practice for biological systems design: What is the most effective way to engineer an oscillation? Or a regulatory circuit that makes signal output dependent on the concomitant availability of two signals (an AND gate)?
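As a flavour of what such a biotech-CAD might simulate, the Python sketch below models a two-input transcriptional AND gate with Hill functions; the Hill coefficients, thresholds and maximal output are hypothetical illustrations, not measured parameters of any real circuit.

# A minimal sketch of a transcriptional AND gate: promoter output is
# near-maximal only when both input signals are present.  All parameter
# values are hypothetical illustrations.
def hill(signal, k=1.0, n=2.0):
    """Fractional activation of the promoter by one input signal."""
    return signal**n / (k**n + signal**n)

def and_gate(s1, s2, max_output=100.0):
    """Output expression ~ product of the two activations (AND behaviour)."""
    return max_output * hill(s1) * hill(s2)

for s1, s2 in [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]:
    print(f"inputs ({s1}, {s2}) -> output {and_gate(s1, s2):6.1f}")
# Only the (5.0, 5.0) case produces near-maximal output.

A design tool would hide such models behind its interface, letting the engineer pick a characterized AND-gate part from a registry rather than re-deriving the kinetics.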
Clearly, much of the work that is required for the predicted transformation to the systems level is in a sense repetitive: for example, results on ribosome binding site strength for many sites need to be verified under various growth conditions and with sufficient redundancy to be statistically relevant, or long DNA sequences need to be assembled step-by-step from shorter DNA elements. This work is in principle excellently suited to automation. Its reduction to micrometre dimensions and its integration into microfluidic systems is then the crucial step that will make it affordable and allow the required parallelization.
While all of the three points above are already well underway today, the future of (iv) is much less clear. From an engineering point of view, the notion that every designed DNA sequence requires a tailor-made host to interact with acts as a real deterrent. It will be much more attractive to have hosts available that provide required resources (e.g. protein synthesis) but otherwise behave orthogonally to (are not, or hardly, influenced by) the introduced DNA sequence. We are currently far from understanding the central rules of orthogonal design in biotechnology, but it seems safe to say that it will depend on our ability to manipulate chemical interfaces to remove and introduce interactions at will. Already a range of techniques is available that points the way to orthogonal engineering, either by removing unwanted interactions through genome reduction (Posfai et al., 2006) or by working with in vitro systems (Jewett et al., 2008), or by engineering novel orthogonal interactions through designing smart selection schemes and then recruiting evolution (Rackham and Chin, 2005).
In my view, to truly flourish, systems biotechnology will need the future toolbox of synthetic biology. The corresponding changes will turn biotechnology into a true engineering discipline and finally produce in full the industry we have been dreaming of for the last 30 years.

With the ever-growing increase in quality-of-life standards and awareness of environmental issues, remediation of polluted sites has become a top priority. Because of the high economic cost of physicochemical strategies for remediation, the use of biological tools to clean up contaminated sites has turned out to be a very attractive option. The use of microorganisms associated with plant roots (rhizoremediation) and leaves in the removal of soil and air contaminants is an area in which success is expected in the near future.
Removal of organic toxic chemicals in the rhizosphere and phyllosphere of plants
In recent years knowledge has been gathered on the removal of contaminants by microbes living in plant niches. Plants provide a series of overlapping niches for microbial development, and culture enrichment approaches and new '-omic' technologies have demonstrated that the number of microbes in the rhizosphere (the soil around the roots) and phyllosphere (leaf surfaces) of plants is larger than expected. On the other hand, metabolite analysis and stable isotope probing techniques, as well as other approaches, have shown that microbes associated with plants are metabolically active (Fig. 1). The ability of the microorganisms to proliferate to high densities in the plant's niche depends on the plant providing an appropriate surface for the microbes' development and, most importantly, on its providing nutrients that fulfil the demands for carbon, nitrogen and other elements, as well as energy needs. Looking at microbes as bioremediation catalysts, one can say that proliferation of microbes to high cell densities in plant niches acts as a multiplier and can lead to an increase in the efficiency of pollutant removal if the resident microbes are endowed with the appropriate catabolic potential.
The above positive view of bioremediation contrasts with some attempts that have been made to re-introduce microorganisms into soils for pollutant removal, which have turned out to be utterly unsuccessful. For the design of a successful rhizoremediation strategy it is necessary to fulfil at least two minimal requirements: microbes have to be able to proliferate in the root/leaf system and catabolic pathways need to be operative. With the advent of microarray technology, global approaches to the expression of genes in the plant's environment are coming to light. Several recent papers (Matilla et al., 2007; Attila et al., 2008) have demonstrated that almost 200 promoters are specifically induced in different strains of the genus Pseudomonas in the presence of root exudates or plant roots. These studies have revealed mechanisms underlying microbe-plant interactions, and we predict that this knowledge will contribute to recognizing the best plant-bacteria combinations and establishing the optimal induction of catabolic pathways in sites undergoing rhizoremediation. To further support our positive view of prospects in bioremediation, we can state that some products present in natural root exudates can act as inducers of different catabolic pathways for the degradation of contaminants. Although plants produce a vast number of secondary metabolites, not all plants can produce every product, and these are often generated only during a specific developmental period of the plant. We predict future studies on root/leaf bacterial metabolomes and on transcriptomes of plant-bacteria interactions during remediation to establish the best ways to introduce catabolic pathways into sites undergoing remediation. Having said this, successful rhizosphere colonization does not depend only on the interactions between the plant and the microorganism of interest, but also on the interactions with other microorganisms. Techniques to study population changes have greatly improved over the last few years, and they are and will be used to determine the changes that the introduction of new microorganisms will cause in the ecosystem and how this might affect the sustainability of the ecosystem in the long run.
An important problem is that of reducing pollutants that are associated with airborne particles. The main limitation probably comes from the bioavailability of the pollutant deposited on leaf surfaces. Most organic contaminants are highly hydrophobic compounds that dissolve poorly in water, and many of them can form complexes with airborne particles; this lack of bioavailability may lower removal efficiency. Three recent papers have dealt with the degradation of air pollutants, namely toluene, phenol and phenanthrene (Molloy, 2006; Sandhu et al., 2007; Waight et al., 2007), and studies on the bioavailability of pollutants and on the range of pollutants that can be degraded will appear in the next few years. We also envisage advances in unveiling the strategies used by microbes to enhance the bioavailability of hydrophobic compounds [i.e. polycyclic aromatic hydrocarbons (PAHs)] via the production of biosurfactants or extracellular polymeric substances, or the formation of biofilms. We also envision research in the area of air decontamination revealing the full remediation potential of microbes in an area where there is little study. It has been argued that beneficial plant endophytes, bacteria that colonize the internal tissues of the plant without causing negative effects, could be an alternative in bioremediation, since the microbes would be somewhat physically protected from adverse changes in the environment. However, successful remediation by endophytic bacteria requires the transport of the contaminant to the plant's interior. Research in this area will reveal whether or not endophytes are of interest in bioremediation. Barac and colleagues (2004) showed improvement in toluene phytoremediation using engineered endophytic bacteria. The authors transferred the toluene-2-monooxygenase (TOM) pathway to an endophytic Burkholderia strain, a natural endophyte of yellow lupine. Although the authors showed that the strain was not maintained in the endophytic community, there was horizontal gene transfer of the tom genes to different members of the endogenous endophytic community, demonstrating new avenues for introducing desirable traits into the community.
The use of plants/microorganisms in bioremediation also has certain drawbacks: pollutants above a certain level can be toxic for the plant and may limit plant growth in polluted sites (van Dillewijn, 2008). Another limitation comes from the fact that the plant can take up some contaminants and transform them into other chemicals whose toxicity would need to be tested. We predict that risk assessment assays will become an integral part of remediation technologies involving plants and microbes.
Antonello Covacci and Rino Rappuoli, Novartis Vaccines and Diagnostics, Via Fiorentina 1, 53100 Siena, Italy.
The study of the network of proteins, protein complexes, sugars and surface organelles has been frustrated by the complexity of bacterial surfaces and the necessity to rely on molecular coordinates. The logical response - the development of crystallographic methods for large cellular components and the alternative use of cryo-electron microscopy to solve organelle structures (flagellum, type IV pili) - has been fundamental but inherently slow. Development of new vaccines may also depend on the identification of complexes located at the surface and of exposed protective molecules. Prediction of both is part of routine genome analysis using dedicated algorithms. Antibody staining and FACS analysis is a potent tool to visualize exposed molecules, although it is heavily dependent on antibody specificity, affinity and outer-layer penetration effects (detection of hidden structures by antibody penetration during preparation of the sample).
We suggest it is now possible to merge high-resolution fluorescence and scanning confocal microscopy with sortase tagging (Popp et al., 2007) to generate native molecules bearing a chromophore. This will potentially allow image rendering of the bacterial surface to visualize the dynamics of protein topology during growth and infection in real time. Science and Nature both expressed their wonder in 2008 at recent advances in light microscopy (Chi, 2008). In addition, white light sources are moving from lasers to inexpensive mass-produced LEDs (http://www.lens.unifi.it). White light is a necessary step to pulse a sample with a wide range of light wavelengths and simultaneously collect signals in the visible spectrum from excited dyes. The resulting live image can integrate all labelled proteins in a single picture, each with a different colour tag, providing realistic 3D coordinates. The crucial step in closing the loop is in vivo tagging of a target protein. This in theory can provide a repertoire of one strain-one tagged protein, with the final goal of colour-coding all the proteins of a bacterial species.
Sortases are bacterial enzymes that predominantly catalyse the attachment of surface proteins to the bacterial cell wall (Telford et al., 2006; Popp et al., 2007). Other sortases polymerize pilin subunits for the construction of the covalently attached pili of Gram-positive bacteria (Telford et al., 2006). The sortase recognition sequence of Staphylococcus aureus sortase A, LPXTG, when engrafted near the C-terminus of proteins without natural sortase specificity, should be a substrate for a sortase-catalysed transpeptidation reaction using artificial glycine-based nucleophiles. The chemical modification of such substrates with fluorophores allows modification of proteins under in vitro and in vivo conditions. This method can be efficiently scaled up for high-throughput data capture. Once the expected wave of new microscopes becomes available and tagging with fluorophores is pervasive, this technology should also be ready for fast and inexpensive real-time expression analyses.

Coral microbiology

Eugene Rosenberg, Department of Molecular Microbiology and Biotechnology, Tel Aviv University, Tel Aviv 69978, Israel. The young field of coral microbiology is driven primarily by a desire to understand coral diseases, which are causing worldwide damage to coral reefs. The field is particularly attractive to microbiologists who enjoy combining laboratory research with field work. In this case, the field work takes place in the most beautiful surroundings.
Up to now, most of the research in coral microbiology has been concerned with isolating potential pathogens (reviewed by Rosenberg et al., 2007) and comparing the microbial communities of healthy and diseased corals (e.g. Bourne et al., 2008). These studies have utilized both classical culture techniques and modern molecular methods and provide the necessary background for future coral microbiology.
What lies ahead? Taking inspiration from medical and environmental microbiology, coral microbiologists will in the near future (i) begin to understand the mechanisms of coral disease, (ii) discover the environmental factors which contribute to diseases and their spread (reservoirs and vectors), and (iii) define the positive role of microbes in the health of the coral holobiont. One area of research that has been largely ignored, but may be very important, is coral virology, including bacterial, algal and coral viruses.
In my opinion the most important and achievable goal of coral microbiology is to develop practical technologies for controlling the spread of coral diseases. Application of the techniques that have been developed for land animals and plants will not be sufficient for the treatment of coral diseases in coral reefs. It is going to take a creative breakthrough - and it is difficult to predict when that will occur.
Measurements versus understanding: the (metabol)omics dilemma
Uwe Sauer, Institute of Molecular Systems Biology, ETH Zurich, Switzerland. The mantra of biology is more data, if possible measuring everything, at high resolution and throughput. Everyone who reviews research or PhD proposals is bombarded with statements on the use of top-notch '-omics' methods. Much rarer are clear visions of how the anticipated results will aid in finally understanding a particular phenomenon. When reviewing the upshots of the proposed research (e.g. publications), we often complain about descriptive data gathering. Now, one could argue that this will change once the current proposals start to spin off publications. Still, glancing at old proposals, including those of this crystal gazer, we were hopeful at the time that the anticipated results would indeed provide us with the missing insights into our various subjects. Do we live in an extremely lucky, and thus unlikely, time where it is finally all downhill, or is there a conceptual problem?
Of course there is a conceptual problem, because the sheer amount of data, let alone their non-linear dynamic relationships, challenges our intuition and logical reasoning far beyond their capabilities. It is computational analysis, stupid. Historically, the more-data mantra is perfectly understandable. In the molecular age, we had for decades only been able to glimpse tiny fragments of the whole. With the emergent transcriptomics and proteomics methods, a dream came true. Blinded by the suddenly available potential, the initial flood of papers was primarily descriptive. Attracted by this potential, computer scientists later developed bioinformatics methods that now help to sift through piles of expression data, identifying sets of co-regulated genes, regulons and functional correlations.
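As a concrete, minimal example of this kind of sifting, the Python sketch below clusters genes by expression correlation; the expression matrix is random stand-in data, and the clustering threshold is an arbitrary illustration.

# A minimal sketch of finding putatively co-regulated gene sets from
# expression data by hierarchical clustering of pairwise correlations.
# The expression matrix is random stand-in data, not a real data set.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expr = rng.normal(size=(50, 12))        # 50 genes x 12 conditions

corr = np.corrcoef(expr)                # gene-gene correlation matrix
dist = np.clip(1.0 - corr, 0.0, None)   # correlation -> distance

# condensed (upper-triangle) distance vector for scipy's linkage
iu = np.triu_indices_from(dist, k=1)
Z = linkage(dist[iu], method="average")

clusters = fcluster(Z, t=0.9, criterion="distance")  # arbitrary cut-off
print("putative co-regulated groups:", len(set(clusters)))

Real analyses add normalization, replicate handling and significance filtering, but the logic (correlation, distance, clustering) is the same.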
Much of the bioinformatics success is related to the fact that this research is still, for the most part, in a discovery mode to identify the involved molecular components and structures of genetic networks. Understanding, however, implies the ability to accurately predict non-linear dynamic behaviour. For this we need computational models that represent relevant biological mechanisms in a quantitative fashion, enabling what-if simulations and predictions of behaviour in not-yet-studied situations. While computational modelling has not been the pride of the biological toolbox so far, the last couple of years have brought forward a number of promising applications that make intelligent use of transcriptomics and proteomics data. Obviously further technical developments are still to come, in particular for proteomics, but even with standard technology a single PhD student can generate piles of '-omics' data today. The times they are a-changin' for biology. The development of models and computational methods to analyse and integrate such '-omics' data and to design key follow-up experiments for unravelling complex mechanisms becomes the key challenge. One indication that resistance to the change is dwindling is the launch of a new section on computational biology in the traditional ASM flagship Journal of Bacteriology (Zhulin, 2009).
For the more recent addition to the '-omics' arsenal, metabolomics, the technical challenges are perhaps even greater because of the chemical heterogeneity (and sometimes extreme similarity) of metabolites, their rapid turnover, chemical instability, dynamic range and often unknown structures. Nevertheless, I have no doubt that these problems will be solved eventually. The question is: will the experience from the above '-omics' history promote faster intelligent use of metabolomics data? In one incarnation, metabolomics focuses on profiling as many metabolites as possible to identify functional biomarkers for biological traits, with medical and plant metabolomics at the forefront. This line of research is primarily in the discovery mode and has clearly learned its lesson: suitable bioinformatics methods are available and are routinely used.
An entirely different matter is the nascent field of quantitative metabolomics. As the catalytic interactions between metabolites and enzymes are known for major parts of metabolic networks, the focus is not discovery but monitoring the sometimes only subtle responses of known system components to perturbations. Consequently, the data contain important functional and mechanistic information, but it is not immediately obvious. What does an increase in one metabolite signify, and what does it mean when changes occur in distant parts of the network? In sharp contrast to transcriptomics and proteomics, metabolite concentrations are not directly linked to genes. Instead, the concentration of a given metabolite is determined by the presence and in vivo activity of its cognate enzymes, their kinetic and regulatory parameters, the pathway flux and other factors. As the most informative metabolites are typically connected to many different enzymes, changes in their concentrations are extremely difficult to trace back to particular events.
An obvious molecular interpretation approach is kinetic modelling of metabolism, and the lack of data for such modelling was a key motivation for metabolomics method development in the first place. Here is the prediction: although there are still significant analytical and work-flow problems to be solved for quantitative microbial metabolomics, I predict a dramatic increase in the availability of such data in the near future. Profiling metabolite data are already being generated in vast amounts and there are no conceptual problems for high-throughput (semi)quantitative metabolomics. As cost, effort and time per single analysis are only a fraction of those for other '-omics' techniques, metabolomics time-course and large-scale screening data sets will soon outnumber those from gene-based '-omics' by far.
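A what-if simulation of the kind alluded to can be sketched for a hypothetical two-step Michaelis-Menten pathway; all parameter values below are illustrative assumptions, not measured constants.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical linear pathway: influx -> A -(enzyme 1)-> B -(enzyme 2)-> out.
VMAX1, KM1 = 1.0, 0.5        # enzyme 1 parameters (illustrative)
VMAX2, KM2 = 0.8, 0.2        # enzyme 2 parameters (illustrative)
INFLUX = 0.5                 # constant supply of A

def rates(t, y):
    a, b = y
    v1 = VMAX1 * a / (KM1 + a)     # Michaelis-Menten rate A -> B
    v2 = VMAX2 * b / (KM2 + b)     # Michaelis-Menten rate B -> out
    return [INFLUX - v1, v1 - v2]

sol = solve_ivp(rates, (0.0, 200.0), [0.1, 0.1])
print(f"steady state: A={sol.y[0, -1]:.3f}, B={sol.y[1, -1]:.3f}")
VMAX2 = 0.6                  # what-if: reduce enzyme 2 capacity by 25 %
sol2 = solve_ivp(rates, (0.0, 200.0), sol.y[:, -1])
print(f"predicted response: A={sol2.y[0, -1]:.3f}, B={sol2.y[1, -1]:.3f}")
```

Even this toy model makes the point above tangible: perturbing one enzyme shifts the concentration of a metabolite without any change at the gene level of the reactions producing it.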
This development will create a dilemma because we currently lack appropriate concepts, beyond simple correlation analyses, to obtain mechanistic insights from the expected metabolomics data. Unless the experiments are specifically designed for this purpose, kinetic modelling will not be able to exploit large-scale metabolomics data to a significant extent, simply because metabolite data alone are insufficient. In contrast to gene-based '-omics', logical reasoning and the current bioinformatics/statistics methods will also not be overly useful. Unless my prediction is far off, the gap between our technical capacity for metabolomics data generation and our ability to digest these data will soon become huge. Thus, the call is open for intelligent computational methods. My guess (and that is all it is) is that methods enabling identification of the most probable conditions or mutants from metabolomics screens for specific follow-up analyses, as well as the design of such further experiments, initially have the greatest potential for obtaining mechanistic insights.
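One deliberately simple form such a screen-triage method could take is to rank conditions or mutants by how strongly their metabolome profile deviates from the ensemble, in order to nominate candidates for mechanistic follow-up; the data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic screen: 200 conditions/mutants x 80 measured metabolites.
profiles = rng.normal(size=(200, 80))
profiles[17, :5] += 4.0                    # plant one strong responder
z = (profiles - profiles.mean(axis=0)) / profiles.std(axis=0)
score = np.sqrt((z ** 2).mean(axis=1))     # per-condition deviation score
top = np.argsort(score)[::-1][:5]
print("conditions nominated for follow-up:", top)
```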
Engineered exploitation of microbial potential
Ian Thompson, Department of Engineering Science, University of Oxford, Begbroke Science Park, Sandy Lane, Yarnton OX5 1PF, UK. We are at a critical point in our history, faced with the challenges induced by our own large-scale activities over many years, which are leading to dramatic changes in our climate and an urgent need for remedial measures. Added to these concerns is the continual growth in populations and the pressure this puts on resources and environmental quality. These multiple stresses have stimulated the need to improve efficiencies in current technologies and the search for alternatives for sustainable energy, securing reliable water supplies, treating waste and generating sustainable products. One of the positive features of this alarming situation is the increasing awareness, among microbiologists and non-experts alike, of the potential of microorganisms as providers of some of the remedies. For instance, issues of environmental quality and energy have impacted on the waste industry in such a way that it is now seen as a resource opportunity, and anaerobic digestion is considered to be the way forward for treatment and sustainable energy generation.
With such high hopes riding on microbial potential, it is reassuring to know that in recent decades we have invested significant funding in techniques for improving our knowledge of the microbial world. This includes increasingly elaborate and sophisticated molecular methods for detecting unculturable populations and rapid sequencing for improving genetic understanding. However, we are now at a critical point at which we urgently need to translate this vast mountain of knowledge and microbial system insights into solutions to the demanding global challenges. Furthermore, if new or even established microbial technologies are going to have any significant impact on climate change or any other of our global problems, they must be effective at very large scales and, importantly, be controllable. Control of microbial potential en masse is in fact an engineering challenge.
Manipulation of microbial biomass on a large scale and in a controlled manner is by no means an easy task. However, mankind has had some notable successes in harnessing the potential of microbial processes, most notably the effective exploitation of microbial communities in municipal sewage systems and agriculture.
Although such systems are very effective and have served us well for generations, they do not represent examples of where key insights from new genetic information have been exploited to develop novel environmental technologies for solving mankind's problems.
This has yet to be achieved. Furthermore, the successes in terms of harnessing microbial potential were achieved by Victorian engineers and agriculturalists, not microbiologists. The kind of opportunity today for such life-changing exploitation would be the identification of microbial gene sequences/strains that enable problematic CO2 to be converted directly by algae into methane or even clean fuels. However, even if we could isolate or generate such strains, the challenge of effective exploitation would have to be resolved, most probably by engineers. This is because, in order to harness light, algae have to grow on the surface of ponds, and such two-dimensional growth leads to self-shading; a rough estimate of this effect is sketched below. The solutions to such issues require effective collaboration with good engineers to develop algae holding systems or bioreactors which enable maximal light exposure in three dimensions, reducing the footprint and improving yield. Other scenarios whereby cross-disciplinary collaboration will eventually lead to more effective microbial exploitation include hybrid approaches for treating trace levels of contaminants (such as hormones) in drinking water, employing a combination of high affinity nanofilters, which concentrate the contaminant, making it bioavailable to catabolic strains.
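The self-shading argument can be made quantitative with a Beer-Lambert attenuation estimate; the attenuation coefficient and depths below are purely illustrative assumptions for a dense culture.

```python
import numpy as np

I0 = 1.0     # surface irradiance (relative units)
K = 40.0     # assumed attenuation coefficient of a dense culture (1/m)

def mean_irradiance(depth_m: float) -> float:
    """Depth-averaged light in a mixed culture: I0*(1 - exp(-K*L))/(K*L)."""
    return I0 * (1.0 - np.exp(-K * depth_m)) / (K * depth_m)

# Open pond versus progressively thinner photobioreactor panels.
for depth in (0.5, 0.05, 0.01):
    print(f"culture depth {depth*100:5.1f} cm -> mean light {mean_irradiance(depth):.2f} I0")
```

Under these assumed numbers, a 50 cm pond leaves the average cell with about 5 % of surface light, whereas a 1 cm panel retains over 80 %, which is exactly the engineering argument for thin, three-dimensional reactor geometries.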
Encouragingly, there are signs that microbiologists are beginning to open their minds in terms of developing new physical technologies for exploiting cells en masse in a more controlled manner. These include novel approaches for moving bacteria through soil and manipulating biofilm formation by electrokinetics (Andrews et al., 2006), stimulating biodegradation by manipulating bacterial genomes in situ employing ultrasound (Song et al., 2007), and the application of nanomaterials for in situ detection and for stimulating cell activity (Chien et al., 2008). Although it is early days, such novel approaches provide hope that microbial potential can be engineered and more reliably harnessed on a large scale. This will require a new generation of microbiologists who have more cross-disciplinary training, who embrace the opportunities that physical and engineering techniques offer, and who have the imagination to consider approaches complementary to the limited array of microbial cell manipulation methods we traditionally employ (e.g. pH, temperature, concentration). This is very good news, as the quicker we realize that sequencing more genomes is not the only option for resolving our problems, the quicker we can generate some effective solutions. Such critical advances will be accelerated by employing more systems biology approaches, linking information from the cell to the whole community, an approach which again will require multidisciplinary training, in this instance in computing science and mathematics.
The Victorians may not have realized, when they developed sewage and clean water systems, the extent to which they had harnessed the potential of microbial communities to solve their problems. However, what they achieved, and what we need to learn again, is that the solution to many of our current problems is going to come from effective engineering of microbial systems, as this is the only way to control them more effectively and to provide the scale-up required to have significant global impacts.

Personalized medicine, the monitoring, prevention and treatment of disease in an individual, specifically tailored to his or her specific genomic make-up, is rapidly gaining interest in the medical, scientific and general population, as our understanding of genetic susceptibilities to disease and of health-relevant causal relationships between our individual genetically determined physiologies and environmental factors advances. The current basis of personalized medicine, the genetic diversity of humans and the resulting diversity of susceptibilities of individuals to disease, is of course only part of the equation, because it is well appreciated that we, like all animals and plants, are covered by second outer 'skins' of populations of phylogenetically and physiologically diverse microbes, which add and integrate metabolic functions that profoundly influence our physiology and health. We are organized 'biomes' of interacting communities of human and microbial cells and tissues. The next level of personalized medicine will therefore integrate the genetic and physiological diversity of our microbial biome partners.
Disease is a negative aspect of health and living, and current personalized medicine is principally targeted at disease prevention and treatment, particularly in susceptible individuals. However, personalized medicine will also be applied to the positive side of health and living (lifestyle medicine for healthy people) and will thus be applied to anyone at any time, in most instances over a large part of the lifespan. Just as the treatment of chronic disease is often of more commercial importance than that of acute disease, so chronic health is particularly attractive commercially for personalized lifestyle medicine.
Human biome biotechnology will contribute to personalized medicine, including lifestyle medicine, in a variety of spheres; the one I elect to deal with here is our desire to smell nice for personal satisfaction and to please/attract others. The commercial importance of this is reflected in the high value of the odour ingredients (in perfumes, colognes, body lotions, deodorants, scented soaps and the like) that form the core of the body care business.
Our smell, the olfactory perception of volatile compounds emitted from our skin, body hair and bodily orifices, is determined by a number of rather variable factors (see Anesti et al., 2004; Eggert et al., 1998-1999; Jacob et al., 2002; Roberts et al., 2008; Wood and Kelly, 2009; Yamazaki et al., 1998-1999, and citations therein), such as (i) the 'personal' composition of volatiles and non-volatiles that we secrete onto our skin surface, which depends on our particular physiology (our individual genomic programme), (ii) the age-related changes in composition of these secretions, (iii) the temporary/periodical secretion of additional chemicals resulting from changes in health/hormonal balance/recent food intake/etc., (iv) the washing of skin with detergents, many of which are perfumed, (v) the application of body care products, many of which are also perfumed (see Wood, 2009), and (vi) the composition and activity of our microbial 'skin' that interacts with our natural secretions and applied products, metabolizing and thereby changing their composition, and also creating further volatiles, either metabolites from the secretions or purely microbially derived ones (see e.g. Anesti et al., 2004). It is this mix of volatiles of endogenous human secretions, endogenous microbial skin secretions, metabolites resulting from the microbial transformation of human secretions, and perfume supplements, which structures our individual odour profile at any moment in time.
*Inspired by Ann Wood, Don Kelly and Joan Timmis, and dedicated to the memory of Rose Timmis, who activated and cultivated my olfactory appreciation of the enormous diversity of smells wafting on the air of the English countryside, flower garden, vegetable garden and dining table.
It is also this individual diversity of odour-structuring parameters that leads to the same perfume/cologne on different women/men smelling subtly (or non-subtly) different ('Gosh: what is that awful perfume you're wearing today?' 'What do you mean: you said how nice it was on Felicity!'). This has led to the current practice of empirically testing the whole range of options in the perfumery in order to find the product that most pleases and matches an individual's preferences and skin odour characteristics. Products that modify our own odour are generally applied daily, usually after showering, and are subject to change (physical, chemical and biological) on the skin surface over the course of the day, changes that are both generic and individually determined by our particular genome-determined physiology, skin flora and daily activities, such that the smell created at the time of application changes in time, and differently on different people (Ann Wood: 'and some disappear much faster than others, making them very expensive odour-rentals!').
Elucidation of the key interactions and causal relationships that determine personal odour, and of how it can be modified to achieve a desired quality, will ultimately allow the rational design of enhanced odour profiles. Replacement of the current hit-and-miss empiricism of odour selection by procedures based on human chemistry and microbial ecology, to create customized odour profiles and to optimize their temporal development over the course of the day, may become a significant activity of the lifestyle branch of the personalized medicine industry.
The scientific progress necessary to accomplish this will include elucidation of the key human genomic and physiological determinants, of the ecological interactions of the skin microbial flora with the epidermal surface and its secretions, of the individual variations in these, which underlie personal odour specificity, and of the manner in which these interact with and modify relevant personal care products. This will involve analysis not only of the specifics of the different anatomical areas of the body surface (axillae, feet, ears, neck, etc.) exhibiting different secretion characteristics, but also of the different microscopic niches within such areas that characterize the spatial differentiation of the epidermal tissue structure and their diverse ecological chemistries. Functional metagenomics of such micro-niches will be a key component of this research. At the core of the functional genomics of the skin biome will be the development of ultrasensitive analytical procedures to identify and quantify volatiles at odour-relevant concentrations, namely 'odouromics'.
Such studies will result in major advances in our understanding of physiological and ecological determinants and triggers of production of specific volatiles, and in identification of the principal players mediating changes in such volatiles and the agents of regulation of change. This will lead to identification of compounds, microbes and procedures that enable modification of volatile composition. This in turn will lead to the formulation of personalized products (e.g. creams containing volatiles, volatile precursors/metabolites, inducers of volatile formation, inhibitors, prebiotic and/or probiotic microbes, etc.) that modulate skin chemistry and microbial flora such that production, or maintenance of production, of specific volatiles is favoured and formation of undesirable volatiles by the skin biome is disfavoured, and that, in combination with specific perfumed products, create custom odour profiles in individuals that match their aspirations.
Underpinning all of this will be newly developed powerful, but simple to operate and interpret, odouromics instrumentation to analyse odour profiles and skin microbe functionalities that, in combination with personal genomic profiles, will produce individual assessments for the customization of commercial products.
The other aspect of personal odour is, of course, its olfactory perception, which also varies from person to person. As a consequence, self-perception of one's own smell can be quite different from the perception of the same smell by someone else. This is not an issue if one buys perfume for self-satisfaction, but may be a major one if the goal is to please or attract others. Thus, a further level in personalizing odour profiles may well be the tuning to a partner's preferences.
The aspect of attracting others inevitably leads to another issue, namely that some skin volatiles may act as pheromones: the composition of our volatiles also contributes to our non-visual sexual attraction to others. This aspect will also undoubtedly be of some interest to the personal care branch of the personalized medicine industry.
It is perhaps worth noting that this research may not only result in the ability to custom design personal olfactory images but also lead to a clearer understanding of the fundamental relationships between our volatile emissions and physiological and mental states. This in turn will lead to improved diagnostic procedures in clinical medicine for certain disease states and syndromes (the chemical analysis of volatiles in breath is already used for such purposes: e.g. see Salerno-Kennedy and Cashman, 2005), as well as to advances in dermatology itself, but also perhaps to applications for other purposes, such as lie detection (in judicial investigations, job interviews, relationship conflicts, etc.), aptitude assessments and partnership compatibility assessments. And all of this will drive the development of new technology and instrumentation. Only the imagination limits the range of applications. Go to it, biotech!

We cannot yet design completely new and well-functioning networks from scratch. However, in a longer time-frame (20 years?) I am convinced that this will also become possible.
If I am going to be really brave, I might predict that the developments in Systems and Synthetic Biology will become a major turning point in the history of man. They may (together with other developments) lead to an understanding and control of fundamental processes such as aging and consciousness. Such breakthroughs would have enormous and obvious impacts on society, and it is hard to see why they should never become possible. Again, it is mostly a matter of when. However, in a five to ten-year perspective synthetic bugs will dominate, and as usual these technologies can be applied both to the benefit and to the harm of mankind. What generally worries me is that we are getting in control of stronger and stronger forces, such that our traditional ways of thinking about our roles as scientists may soon become outdated. Al Gore has also noticed this by stating that man has become a force of nature. Another prediction is therefore that scientists in the future must, even more than in the past, think about the consequences of what they are doing. I believe that this will become a major issue in the next 10 years!

2008) and can be expanded to be of use for nutritional and pharmaceutical interventions as well as to discriminate between health and disease states of the intestinal tract.
Given the abundance and vast coding capacity of the intestinal microbiome, it will be of great interest to follow the developments in the International Human Microbiome Consortium and see whether it can serve as a model for other ecosystems where next-generation sequencing technology is applied to expose the coding capacity and function of the microbial world on our planet.

environmental disturbance? (v) Can the functional stability and future status of a microbial community be predicted based on the metabolic functional conservation and differentiation of individual microbial populations? (vi) Can a microbial community be manipulated to achieve a desired stable function by manipulating the metabolic traits of the community? (vii) How can the information be scaled from molecules to populations, to communities, and to ecosystems for understanding ecosystem behaviours and dynamics? (viii) Can the molecular-level understanding of microbial community structure improve our predictive power of the ecological and evolutionary responses of microbial communities to environmental changes, especially global climate change?
Addressing the above questions in a quantitative and predictive way requires rigorous experimental design and systematic, intensive sampling of the microbial systems studied. Selection of experimental systems with appropriate complexity and replication could be very important. I believe that two general strategies can be employed. One is to focus on surveying complex natural microbial systems by using high-throughput metagenomics technologies to systematically compare the commonalities and differences of microbial community diversity patterns, metabolic capacities and functional activities across various spatial and temporal scales. While such a survey-based approach provides rich information on microbial community diversity patterns and dynamics, it could be difficult to establish detailed, definitive mechanistic linkages between microbial diversity and ecosystem functioning, because microbial systems in natural settings are generally very complex. A complementary strategy is to establish well-controlled laboratory systems, such as bioreactors with simplified communities, to systematically examine the responses of microbial communities to environmental changes and the impacts of their responses on ecosystem functioning. Such laboratory systems are important for establishing cause-and-effect relationships, because they have great advantages in terms of system control, monitoring, data collection, replication and modelling. Determination of cause-and-effect relationships is much easier with simpler, engineered, laboratory-based bioreactor systems than with complex natural communities, as input and output parameters can be controlled, along with environmental conditions. Although the community in a controlled system is not a natural community, such systems offer the best opportunity to acquire mechanistic understanding of the fundamental principles of interactions among various microorganisms and of the molecular-level ecological and evolutionary responses of microbial communities to environmental changes. Therefore, well-controlled laboratory engineered systems will be critical to predictive microbial ecology studies.
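For the survey strategy, the first-pass comparison of diversity patterns across sites and times typically reduces to index calculations of the following sort; the abundance tables here are fabricated for illustration.

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) for one community sample."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

# Fabricated OTU count tables: two sites, each sampled at two time points.
site_a = [[120, 80, 40, 10, 5], [100, 90, 50, 12, 3]]
site_b = [[300, 5, 3, 1, 1], [280, 8, 2, 2, 1]]
for name, samples in (("A", site_a), ("B", site_b)):
    print(f"site {name} Shannon H':", [round(shannon(s), 2) for s in samples])
```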
Predictive microbial ecology requires not only high-throughput experimental tools but also high performance computational capabilities. System-level understanding of the dynamic behaviour of microbial community structure and functions, and of their relationships to ecosystem functioning, faces several grand computational challenges. First, microbial diversity is extremely high. The number of genes in a genome, or of populations in a community, far exceeds the number of sample measurements, due to the high cost of measurements. It is difficult to apply classical mathematical tools, such as differential equations, to simulate high-throughput metagenomics data, because no unique solution can be obtained for the constructed models. New mathematical theories and approaches are needed to deal with such dimensionality problems. Second, metagenomic data from analyses of transcriptomes, proteomes and metabolomes, as well as physiological and geochemical data, are heterogeneous. Synthesizing various types of large-scale data together to make biological sense is also difficult. Rapid, high performance parallel computational tools are needed for data processing, computation and visualization. In addition, because the dynamic behaviours of biological systems at various levels (cells, individuals, populations, communities and ecosystems) are measured on different temporal and spatial scales, linking cellular-level genomic information to ecosystem-level functional information for predicting ecosystem dynamics is even more challenging. Novel mathematical frameworks and computational tools are needed for achieving systems-level understanding and prediction of microbial community dynamics, behaviour and functional stability.
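The first of these challenges, far more features than samples, is commonly tamed first with projection methods such as PCA, as in this synthetic sketch (gene and sample counts are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_features = 12, 5000       # far more genes than measurements
X = rng.normal(size=(n_samples, n_features))
X[:6] += 0.5                           # two hidden sample groups
Xc = X - X.mean(axis=0)
# SVD-based PCA: cheap because the rank is bounded by n_samples, not n_features.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = S**2 / (S**2).sum()
scores = U * S                         # sample coordinates in PC space
print("variance explained by PC1-3:", np.round(explained[:3], 3))
print("PC1 scores (first axis separates the groups):", np.round(scores[:, 0], 2))
```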
With the continuing rapid advances of metagenomics-based high-throughput experimental technologies and associated high performance computational tools, microbiologists should be able to perform more quantitative modelling studies of microbial systems, as macroecologists have done over the last half century. There is no doubt that the era of quantitative predictive microbial ecology is coming. | 2018-04-03T01:28:34.055Z | 2009-02-18T00:00:00.000 | {
"year": 2009,
"sha1": "17d52bc752f5ef7c7de63c75d255fd38e7df801c",
"oa_license": null,
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/j.1751-7915.2009.00090_17.x",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "17d52bc752f5ef7c7de63c75d255fd38e7df801c",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Biology"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
119577815 | pes2o/s2orc | v3-fos-license | Properly discontinuous actions on Hilbert manifolds
In this article we study properly discontinuous actions on Hilbert manifolds, giving new examples of complete Hilbert manifolds with nonnegative, respectively nonpositive, sectional curvature and infinite fundamental group. We also get examples of complete infinite dimensional Kähler manifolds with positive holomorphic sectional curvature and infinite fundamental group, in contrast with the finite dimensional case, and we classify abelian groups acting linearly, isometrically and properly discontinuously on Stiefel manifolds. Finally, we classify homogeneous Hilbert manifolds with constant sectional curvature.
A Riemannian Hilbert manifold is a smooth manifold M modelled on a Hilbert space H, endowed with an inner product ⟨·, ·⟩_p on T_p M depending smoothly on p and defining on T_p M ≅ H a norm equivalent to the original norm of H.
The investigation of global properties in infinite dimensional geometry is harder than in the finite dimensional case, essentially because of the lack of local compactness. For example, there exist complete Hilbert manifolds with points that cannot be connected by minimal geodesics, and one can construct on such manifolds finite geodesic segments containing infinitely many conjugate points [13]. A complete description of conjugate points along a finite geodesic segment is given in [6], and similar questions have been studied in [5,21,22,23,24]. Moreover, there exist complete Hilbert manifolds such that the exponential map fails to be surjective [1]. However, Ekeland [11] proved that almost all points can be joined to a prescribed endpoint by a unique minimal geodesic.
Stiefel and Grassmann manifolds have been intensively studied by many authors in different contexts [2,9,12,14,19,30]. We recall that the Stiefel manifold of the p-frames in a Hilbert space H, which we denote by St(p, H), can be endowed with a Riemannian metric ⟨·, ·⟩ induced by the natural embedding of St(p, H) into the Hilbert space L(R^p, H) of linear maps of R^p in H. The orthogonal group O(p) acts freely and isometrically on St(p, H), and so the Grassmann manifold of p-dimensional subspaces of H, Gr(p, H) = St(p, H)/O(p), can be endowed with a Riemannian metric such that the natural projection π : St(p, H) −→ Gr(p, H) is a Riemannian submersion [12,14]. In [14], using the computation given in [12], it has been proved that, if H is separable, any two points in these manifolds can be connected by a minimal geodesic (in Section 2.2 we remove the condition of separability).
In the present paper we study properly discontinuous actions on Stiefel and Grassmannian manifolds, applying arguments developed in [4]. The study of this kind of actions is needed in order to give new examples of Hilbert manifolds with a given fundamental group. Our first main result is the following.

Theorem 1.1. Let H = Z_{p_1} ⊕ · · · ⊕ Z_{p_k} ⊕ G, where p_1, . . . , p_k are prime numbers and G is a torsionfree group. Let St(p, H) be the Stiefel manifold of the p orthonormal frames in H, where H is an infinite dimensional Hilbert space whose Hilbert basis has the same cardinality as G. Then H acts linearly, isometrically and properly discontinuously on St(p, H) if and only if p_i ≠ p_j whenever i ≠ j. Moreover, G acts properly discontinuously on Gr(p, H).
The above theorem gives a complete classification of the abelian groups acting linearly and properly discontinuously on the Stiefel manifolds. Since St(p, H) is contractible whenever H is infinite dimensional (see [10]), an important conclusion of Theorem 1.1 is the existence of a complete Hilbert manifold with nonconstant sectional curvature whose fundamental group is isomorphic to H.
One can define on St(p, H) another metric g, called the canonical metric [12]. We shall prove that (St(p, H), g) is a complete Hilbert manifold with nonconstant and nonnegative sectional curvature and that any two points in (St(p, H), g) can be connected by a minimal geodesic. We also investigate complex Stiefel and Grassmann manifolds, and in Section 4 we prove the following result.

Theorem 1.2. Let G be a torsionfree group. Then there exists a complete Hilbert manifold M of nonnegative and nonconstant sectional curvature such that π_1(M) ≅ Z_{p_1} ⊕ · · · ⊕ Z_{p_k} ⊕ G, where p_1, . . . , p_k are distinct primes. Moreover, there exists a complete and infinite dimensional Kähler manifold M with positive holomorphic sectional curvature whose fundamental group is isomorphic to G.
Note that the last part of the Theorem is in contrast with the finite dimensional case, since a finite dimensional Kähler manifold with positive holomorphic sectional curvature is compact and simply connected [26]. Finally, we classify homogeneous Hilbert manifolds of constant sectional curvature, and we give a new example of an infinite dimensional complete Hilbert manifold of negative constant sectional curvature whose fundamental group is isomorphic to Z^k = Z ⊕ · · · ⊕ Z (k copies) for every k ∈ N.
The paper is organized as follows. In Section 2 we briefly discuss properly discontinuous actions on Hilbert manifolds and the geometry of the Stiefel and Grassmann manifolds. From Section 3 to Section 5 we prove our main results.
Preliminaries
2.1. Properly discontinuous actions. In this section we will briefly discuss properly discontinuous actions on a Hilbert manifold. Most of the results hold in a more general context.
Let G be an abstract group and let M be a Hilbert manifold. An action of G on M is a map ρ : G × M −→ M such that ρ(g, ·) is a diffeomorphism, ρ(e, m) = m and ρ(g_1 g_2, m) = ρ(g_1, ρ(g_2, m)) for every g_1, g_2 ∈ G and every m ∈ M, where e ∈ G denotes the neutral element of G. Throughout this article we write ρ(g, m) = g(m). The subgroup G_x = {g ∈ G : g(x) = x} is called the isotropy group of x. The orbit through x is the set G(x) = {g(x) : g ∈ G}. One says that the G-action is transitive if G has just one orbit, i.e., for every p, q ∈ M there exists g ∈ G such that g(p) = q. The G-action is called effective if g(m) = m for every m ∈ M implies g = e. Hence G acts effectively on M if and only if the intersection of all the isotropy groups G_x, x ∈ M, is {e}; if G_x = {e} for every x ∈ M, we say that G acts freely on M. Note that any action can be reduced to an effective action. Indeed, the intersection N of all the isotropy groups is a normal subgroup and G/N acts effectively on M.
A G-action on M is called properly discontinuous if the following conditions hold: a) for every x ∈ M there exists an open neighborhood U of x such that g(U) ∩ U = ∅ for every g ≠ e; b) for every y ∉ G(x), there exist neighborhoods U of x and V of y such that g(U) ∩ V = ∅ for every g ∈ G.
The second condition means that the orbit space M/G is Hausdorff. Note that a properly discontinuous action is free and a finite group G acts properly discontinuously on a manifold M if and only if it acts freely on M . The following result is well-known [17].
Proposition 2.1. Let G be a group acting properly discontinuously on a Hilbert manifold M. Then the orbit space M/G admits a differential structure such that π : M −→ M/G is a covering map.
Assume now that G acts freely on M. We say that G acts discontinuously on M if for every x ∈ M and every sequence a_n of mutually distinct elements of G, the sequence a_n(x) does not converge. Hence a properly discontinuous action is discontinuous.

We will say that G acts isometrically on (M, ⟨·, ·⟩) if the transformation g : M −→ M is an isometry of M for every g ∈ G. The following result is a useful criterion for properly discontinuous actions [17].
Proposition 2.2. If G acts discontinuously and isometrically on M , the action is properly discontinuous. In this case M/G admits a Riemannian structure such that the natural projection π : M −→ M/G is a Riemannian covering map. Moreover, if M is a complete Hilbert manifold so is M/G. The last part of the above proposition is proved in [5].
2.2. Infinite dimensional Stiefel and Grassmann manifolds. In this section we briefly investigate the Riemannian geometry of the Stiefel manifolds with respect to the euclidean metric and with respect to a Riemannian metric called the canonical metric. We also study the Riemannian geometry of the Grassmann manifolds.
Let H be a Hilbert space and let p be a positive integer. L(R^p, H) denotes the set of linear maps from R^p into H. This is a Hilbert space endowed with the Hilbert product ⟨x, y⟩ = Tr(x^t ∘ y), where x^t ∈ L(H, R^p) denotes the transpose with respect to the metrics on H and on R^p respectively, i.e.,
⟨x(v), Z⟩_H = ⟨v, x^t(Z)⟩_{R^p} for any v ∈ R^p and for any Z ∈ H. In particular, for x ∈ St(p, H) we have the orthogonal decomposition H = Im x ⊕ Ker x^t. The Stiefel manifold St(p, H) is the set of linear isometric immersions of R^p into H. Equivalently, St(p, H) = {x ∈ L(R^p, H) : x^t ∘ x = Id_{R^p}}, and so it is a closed smooth submanifold of L(R^p, H) of finite codimension ½ p(p + 1). The Hilbert product ⟨·, ·⟩ on the Hilbert space L(R^p, H) induces a Riemannian metric on St(p, H) such that (St(p, H), ⟨·, ·⟩) is a complete Hilbert manifold. The tangent space at Y is given by T_Y St(p, H) = {V ∈ L(R^p, H) : Y^t ∘ V + V^t ∘ Y = 0}. If T : H −→ H is a linear map, then T induces in a natural way a linear map T̂ on L(R^p, H) by setting T̂(x) = T ∘ x. It is easy to check that if T : H −→ H is an isometry, then T̂ is an isometry of L(R^p, H) preserving St(p, H); therefore T̂ is also an isometry of (St(p, H), ⟨·, ·⟩). In particular, if a group H acts linearly and by isometries on H, then it also acts isometrically on St(p, H) by setting (g, Y) ↦ g ∘ Y. Hence the orthogonal group O(H) of H acts isometrically and transitively on St(p, H). More generally, if T : V −→ W is a linear isometric immersion, then the map T̂ : St(p, V) −→ St(p, W) is smooth and the following result holds [14].

Lemma 2.1. Let T : V −→ W be a linear isometric immersion. Then T̂ : (St(p, V), ⟨·, ·⟩) −→ (St(p, W), ⟨·, ·⟩) is a totally geodesic embedding.
In [14] the authors proved that (St(p, H), ⟨·, ·⟩) is Hopf-Rinow whenever H is separable. This means that any two points can be connected by a minimizing geodesic. The next result shows that the hypothesis of separability is not necessary.

Proposition 2.3. Any two points of (St(p, H), ⟨·, ·⟩) can be connected by a minimizing geodesic.
Proof. Let x, y ∈ St(p, H). By Ekeland's Theorem [11], there exists a sequence y_n converging to y such that there is a unique minimizing geodesic joining x and y_n, and lim_{n→∞} d(x, y_n) = d(x, y). Now, any geodesic joining x and y_n belongs to a finite dimensional Stiefel manifold [12,14]. Hence there exists a separable Hilbert space W such that x, y and all the y_n lie in St(p, W), as well as the minimizing geodesics joining x and y_n. By [14] there exists a minimizing geodesic in St(p, W) joining x and y. Since St(p, W) is totally geodesic in St(p, H), the unique minimizing geodesic joining x and y_n in St(p, H) is also the unique minimizing geodesic in St(p, W). Now, keeping in mind that d(x, y_n) is the same in St(p, H) and in St(p, W), we get that the minimizing geodesic in St(p, W) is a minimizing geodesic in St(p, H), concluding the proof.
The orthogonal group O(p) acts freely and isometrically on (St(p, H), ⟨·, ·⟩), and the quotient Gr(p, H) = St(p, H)/O(p) is the Grassmann manifold of the p-dimensional subspaces of H. It can be endowed with a Riemannian structure, that we also denote by ⟨·, ·⟩, such that the natural projection π : St(p, H) −→ Gr(p, H) is a Riemannian submersion.

Proposition 2.4. (Gr(p, H), ⟨·, ·⟩) is a complete Hilbert manifold and any two points can be connected by a minimizing geodesic.

Proof. Let x ∈ Gr(p, H). Consider the geodesic ball B(x, ε) = {y ∈ Gr(p, H) : d(x, y) ≤ ε}, where d is the distance function defined by the Riemannian metric. Then B(x, ε) is a complete metric space for ε small (see [11]). Let {x_n} be a Cauchy sequence in Gr(p, H). Then there exists n_0 such that x_n ∈ B(x_{n_0}, ε) for every n ≥ n_0. Since Gr(p, H) is homogeneous, B(x_{n_0}, ε) is a complete metric space as well. Hence the sequence admits a convergent subsequence and, being Cauchy, it converges. We can now apply the results in [14] and the argument in Proposition 2.3 to get the result.
We can also consider the so-called canonical metric on the Stiefel manifold St(p, H). For Y ∈ St(p, H) set L_Y = Id_H − ½ Y ∘ Y^t and define, for V, W ∈ T_Y St(p, H), g_Y(V, W) = ⟨L_Y ∘ V, W⟩ = Tr(V^t ∘ L_Y ∘ W). Keeping in mind the orthogonal splitting H = Im Y ⊕ Ker Y^t, note that L_Y is a symmetric and positive definite operator of H. The isomorphism L_Y induces an endomorphism L̂_Y of L(R^p, H) by setting L̂_Y(V) = L_Y ∘ V. This is a continuous and invertible endomorphism whose inverse is given by L̂_Y^{-1}(V) = (Id_H + Y ∘ Y^t) ∘ V; this proves that g_Y is a nondegenerate bilinear form. We shall prove that g_Y defines a Riemannian metric on St(p, H). Firstly, g_Y is a symmetric bilinear form, since L_Y is a symmetric operator.
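The formula for the inverse stated above can be checked in one line, using Y^t ∘ Y = Id_{R^p}, which gives (Y ∘ Y^t)² = Y ∘ Y^t:

```latex
\Big(\mathrm{Id}_H-\tfrac12\,Y\circ Y^{t}\Big)\circ\Big(\mathrm{Id}_H+Y\circ Y^{t}\Big)
 =\mathrm{Id}_H+Y\circ Y^{t}-\tfrac12\,Y\circ Y^{t}-\tfrac12\,\big(Y\circ Y^{t}\big)^{2}
 =\mathrm{Id}_H .
```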
Secondly, for V ∈ T_Y St(p, H) we have g_Y(V, V) = ⟨V, V⟩ − ½ Tr(V^t ∘ Y ∘ Y^t ∘ V) ≤ ⟨V, V⟩, where the last inequality follows from the fact that Tr(V^t ∘ Y ∘ Y^t ∘ V) = Tr((Y^t ∘ V)^t ∘ (Y^t ∘ V)) ≥ 0. Moreover, if e_1, . . . , e_p is an orthonormal basis of R^p, and we denote by P_{Im Y} the orthogonal projection onto Im Y, then Tr(V^t ∘ Y ∘ Y^t ∘ V) = Σ_{i=1}^p ‖P_{Im Y}(V(e_i))‖² ≤ Σ_{i=1}^p ‖V(e_i)‖² = ⟨V, V⟩, where the inequality follows from the fact that Y is a linear isometric immersion of R^p in H. Therefore ½ ⟨V, V⟩ ≤ g_Y(V, V) ≤ ⟨V, V⟩, which also means that (St(p, H), g) is a complete Hilbert manifold, since (St(p, H), ⟨·, ·⟩) is.
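A quick numerical sanity check of these bounds on a random finite dimensional example (a sketch only; the dimensions are chosen arbitrarily, and the canonical metric is written as g_Y(V, V) = Tr(V^t ∘ (Id − ½ Y ∘ Y^t) ∘ V) as above):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 8, 3
Y, _ = np.linalg.qr(rng.normal(size=(n, p)))   # a point of St(p, R^n)
A = rng.normal(size=(n, p))
V = A - 0.5 * Y @ (Y.T @ A + A.T @ Y)          # tangent: Y^t V + V^t Y = 0
assert np.allclose(Y.T @ V + V.T @ Y, 0.0, atol=1e-10)
euclidean = np.trace(V.T @ V)                                # <V, V>
canonical = np.trace(V.T @ (np.eye(n) - 0.5 * Y @ Y.T) @ V)  # g_Y(V, V)
print(0.5 * euclidean <= canonical <= euclidean, round(canonical / euclidean, 3))
```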
Moreover the following result holds.

Lemma 2.2. If T : H −→ H is an isometry, then the induced map T̂ is an isometry of (St(p, H), g). Therefore, if G acts isometrically on H, then G acts isometrically on (St(p, H), g) by setting (g, Y) ↦ g ∘ Y; in particular O(H) acts isometrically and transitively on (St(p, H), g).

Proof. We only check the first part of the Lemma. Since T ∘ T^t = T^t ∘ T = Id_H, we have L_{T∘Y} = Id_H − ½ (T ∘ Y) ∘ (T ∘ Y)^t = T ∘ L_Y ∘ T^t, hence g_{T̂(Y)}(T̂(V), T̂(W)) = Tr((T ∘ V)^t ∘ T ∘ L_Y ∘ T^t ∘ T ∘ W) = Tr(V^t ∘ L_Y ∘ W) = g_Y(V, W).

The sectional curvature of (St(p, H), g) has been computed in [8,12]. Hence, by the O'Neill formula, see [8], the sectional curvature of (St(p, H), g) is nonnegative and nonconstant whenever p ≥ 2 and H is finite dimensional. The O(p) action is also isometric with respect to the canonical metric, so it induces a Riemannian structure on Gr(p, H) such that π : (St(p, H), g) −→ Gr(p, H) is a Riemannian submersion. We point out that the metric induced on Gr(p, H) is the same as the metric induced by the euclidean metric. Indeed, the horizontal space at Y with respect to both metrics is {V ∈ T_Y St(p, H) : Y^t ∘ V = 0}, and g_Y(V, W) = ⟨V, W⟩ for horizontal V and W. We will denote again by g this Riemannian metric on Gr(p, H). As before, O(H) acts transitively and isometrically on Gr(p, H), and so (Gr(p, H), g) is a complete Hilbert manifold. The following Lemma is just Lemma 2.1 for the canonical metric and the proof is similar to the one given in [14]. For the sake of completeness we give a sketch of the proof.
We identify W with L(W ) and consider the splitting V = W ⊕ W ⊥ . Let Proof. We shall prove the result for (St(p, H), g). The proof for (Gr(p, H), g) is similar. By Theorem 2.1 in [12], we get that any geodesic is contained in a finite dimensional Stiefel manifold. Hence, using the arguments developed in [14], it can be proved that St(p, H) is Hopf-Rinow whenever H is separable.
Moreover, Proposition 2.3 works in this context and so (St(p, H), g) is Hopf-Rinow.
Corollary 2.1. (St(p, H), g) is a complete Hilbert manifold with nonnegative sectional curvature.

Proof. Let Y ∈ St(p, H) and let V, W ∈ T_Y St(p, H). It is easy to check that V_0 = span(Y(R^p), V(R^p), W(R^p)) is a finite dimensional subspace such that Y ∈ St(p, V_0) and V, W ∈ T_Y St(p, V_0). Hence, by Lemma 2.3, the sectional curvature K_Y(V, W) is nonnegative and the result follows.

Remark 2.2. Theorem 3 in [14] can be proved in our context. Let Y ∈ St(p, H) be such that Y(R^p) = W. As we saw before, the geodesic symmetry σ_W commutes with the projection π, so the corresponding diagram is commutative. Therefore (dσ_W)_W = −Id|_{T_W Gr(p, H)}.
2.3. Complex Stiefel and Grassmannian manifolds.
In this subsection we briefly discuss complex Stiefel and Grassmannian manifolds. Let H be a complex Hilbert space and let L(C^p, H) be the set of complex linear maps, which is a complex Hilbert space with respect to the Hermitian Hilbert product h(X, Y) = Tr(Y^* ∘ X). Moreover, h(·, ·) = ⟨·, ·⟩ − iω(·, ·), where ⟨·, ·⟩ defines a real Hilbert structure on L(C^p, H) and ω is a symplectic form, i.e., ω : L(C^p, H) × L(C^p, H) −→ R is skew-symmetric and the natural map associated to ω, namely ω̃ : L(C^p, H) −→ L(C^p, H)^* defined by setting ω̃(L) = ω(L, ·), is an isomorphism. If we denote by J the multiplication by i in H, then J defines an almost complex structure on L(C^p, H), that we also denote by J, such that ω(·, ·) = ⟨J·, ·⟩. Hence ω is a Kähler form on L(C^p, H) [18]. The complex Stiefel manifold can be viewed as St_C(p, H) = {x : C^p −→ H : x^* ∘ x = Id_{C^p}} and its tangent space is given by T_Y St_C(p, H) = {V ∈ L(C^p, H) : Y^* ∘ V is skew-hermitian}. If we restrict ⟨·, ·⟩ to St_C(p, H), then (St_C(p, H), ⟨·, ·⟩) is a complete Hilbert manifold. The Lie group U(p) acts isometrically on L(C^p, H) by setting A · φ = φ ∘ A^*, and it preserves St_C(p, H), on which it acts freely. Moreover, one can check that ω and J descend to the quotient Gr_C(p, H) = St_C(p, H)/U(p), where π : St_C(p, H) −→ Gr_C(p, H) is the natural projection. Hence ω and J on L(C^p, H) induce a Kähler structure on Gr_C(p, H). Note also that U(H) commutes with the U(p) action, and so it acts by holomorphic isometries and transitively on Gr_C(p, H). Therefore Gr_C(p, H) is a complete Kähler manifold, and if a group G acts isometrically on H, then it acts isometrically on L(C^p, H) by setting (g, Y) ↦ g ∘ Y, and so it acts by holomorphic isometries on Gr_C(p, H). The following Lemma can be proved as Lemma 2.3 and Corollary 2.1, keeping in mind that Gr_C(p, H) has positive holomorphic sectional curvature whenever H is finite dimensional. Indeed, 2/p ≤ K_Y(X, J(X)) ≤ 2 for every Y ∈ Gr_C(p, H) and every X ∈ T_Y Gr_C(p, H) (see [29]).

Lemma 2.4. Let L : W −→ V be a complex linear isometric immersion. Then L̂ : Gr_C(p, W) ֒→ Gr_C(p, V) is totally geodesic. Moreover, Gr_C(p, H) is a complete Kähler Hilbert manifold with positive holomorphic sectional curvature.
We conclude this section by pointing out that Gr_C(p, H) is simply connected. Indeed, let γ : [0, 1] −→ Gr_C(p, H) be a loop. Then there exists an ε > 0 such that for every t ∈ [0, 1], exp_{γ(t)} is a diffeomorphism from the ball of radius ε in T_{γ(t)} Gr_C(p, H) onto the geodesic ball around γ(t) of radius ε. Then there exists a broken closed geodesic c starting from γ(0), homotopically equivalent to γ, with the following property: there exists a partition 0 = t_0 < t_1 < . . . < t_n = 1 such that c|_{[t_i, t_{i+1}]} is the unique minimal geodesic joining c(t_i) and c(t_{i+1}), with length less than ε, for i = 0, . . . , n − 1. Hence, if we take W = span(c(t_0), . . . , c(t_n), ċ(t_0), . . . , ċ(t_n)), by the above Lemma it is easy to prove that c(t) ∈ Gr_C(p, W). This means that c, and so γ, is homotopically equivalent to a constant loop, since Gr_C(p, W) is simply connected whenever W has finite dimension [15].
Proof of Theorem 1.1
We know that if H acts linearly and by isometries on H, then it also acts isometrically on St(p, H) by setting (g, Y) ↦ g ∘ Y. The following fact holds.

Lemma 3.1. If H acts linearly, isometrically and discontinuously on the unit sphere of H, then H acts isometrically and properly discontinuously on St(p, H).

Proof. Since H acts freely on the unit sphere of H, it also acts freely on St(p, H). We prove that H acts properly discontinuously on St(p, H) by applying Proposition 2.2.
Let a_n be a sequence of mutually distinct elements of H and let x ∈ St(p, H). If a_n(x) converged to y ∈ St(p, H), then (a_n x)(v) would converge to y(v) for every v ∈ R^p. However, if ‖v‖ = 1, then both (a_n x)(v) = a_n(x(v)) and y(v) belong to the unit sphere of H, contradicting the fact that H acts discontinuously on the unit sphere. Therefore H acts properly discontinuously on St(p, H).
Now we shall prove a similar result for the Grassmannian manifolds.
Lemma 3.2. Let G be a torsionfree group acting isometrically and properly discontinuously on the unit sphere of H. Then G acts isometrically and properly discontinuously on (Gr(p, H), ⟨·, ·⟩).
Proof. Firstly we prove that G acts freely on Gr(p, H). Indeed, if g(σ) = σ for some g ≠ e, then Z would act properly discontinuously on the unit sphere of σ, by setting (n, v) ↦ g^n(v), which is not possible since σ is finite dimensional. Let σ ∈ Gr(p, H), let a_n be a sequence in G and suppose a_n(σ) converges to σ_0. Let x ∈ St(p, H) be such that x(R^p) = σ and let y ∈ St(p, H) be such that y(R^p) = σ_0. Since π : St(p, H) −→ Gr(p, H) is a fiber bundle, there exists a sequence k_n ∈ O(p) such that a_n ∘ x ∘ k_n converges to y ∈ St(p, H). We may assume that k_n → k. Therefore a_n(x ∘ k) converges to y, which is a contradiction by Lemma 3.1.
Let G be a set. It is well known, see [25], that l_2(G) = {f : G −→ R : Σ_{g∈G} |f(g)|² < ∞} is a Hilbert space, and a Hilbert basis is given by the functions e_h(g) = δ_{hg}, h, g ∈ G. Moreover, for every bijective map φ : G −→ G, the map f ↦ f ∘ φ is a linear isometry of l_2(G). If G is a group, then G acts isometrically on l_2(G) by setting g · f = f ∘ R_g, where R_g is the right translation.
The following result is proved in [4]: if G is a torsionfree group, then G acts linearly, isometrically and properly discontinuously on the unit sphere of l_2(G).
Therefore, by Lemma 3.1, if G acts properly discontinuously on the unit sphere of H, then it acts properly discontinuously on (St(p, H), g). Since (St(p, H), g) is contractible (see [10]) and has nonnegative sectional curvature, applying Proposition 2.2 we obtain a complete Hilbert manifold with nonnegative and nonconstant sectional curvature whose fundamental group is isomorphic to H = Z_{p_1} ⊕ · · · ⊕ Z_{p_k} ⊕ G, where G is a torsionfree group and p_i ≠ p_j whenever i ≠ j. This proves the first part of Theorem 1.2.
Let G be a set and consider l^C_2(G) = {f : G −→ C : Σ_{g∈G} |f(g)|² < ∞}. Then l^C_2(G) is a complex Hilbert space and a Hilbert basis is given by the functions e_h(g) = δ_{hg}, h, g ∈ G. Moreover, if G is a group, then G acts isometrically on l^C_2(G) by setting g · f = f ∘ R_g. If G is a torsionfree group, it can be proved as in [4] that G acts properly discontinuously on the unit sphere of l^C_2(G), and so by the arguments developed in Section 2.3, G acts by holomorphic isometries on Gr_C(p, H).
Moreover, one can check that Lemma 3.2 holds in this context. Summing up, we have proved that G acts by holomorphic isometries and properly discontinuously on Gr_C(p, l^C_2(G)), and the quotient Gr_C(p, l^C_2(G))/G inherits the structure of a Kähler manifold such that the covering map π : Gr_C(p, l^C_2(G)) −→ Gr_C(p, l^C_2(G))/G is holomorphic. Since Gr_C(p, l^C_2(G)) is simply connected, by Lemma 2.4 and Proposition 2.2 we get the last part of Theorem 1.2.
Homogeneous Hilbert manifolds of constant sectional curvature
Let (M, g) be a complete Riemannian manifold of constant sectional curvature K. It is well-known, see [20,28], that M is isometric to M̃/Γ, where M̃ is the complete simply connected Riemannian manifold with constant sectional curvature K, Γ is a group acting isometrically and properly discontinuously on M̃, and the natural map π : M̃ −→ M̃/Γ is a Riemannian covering map. In this section we investigate infinite dimensional homogeneous Riemannian manifolds of constant sectional curvature. Our main result is the extension of a result of Wolf [27,28] to the infinite dimensional context. We begin with the following Lemma, which is well-known in the finite dimensional context.
Given an isometry f of M, its displacement function is δ_f(x) = d(x, f(x)). We will say that f is a Clifford translation if δ_f is constant.

Lemma 5.2. Let M be a Hopf-Rinow Hilbert manifold and let f be a Clifford translation of M. Then f leaves invariant a geodesic through each point of M.

Proof. Let x ∈ M and let γ be a minimizing geodesic joining x and f(x). This geodesic exists since M is Hopf-Rinow [20]. Since f is a Clifford translation, δ_f(γ(t)) = δ_f(x) for every t, and so the curve formed by γ and f ∘ γ is a geodesic and the result follows.
Remark 5.1. The above Lemma works for any Hilbert manifold satisfying the Hopf-Rinow Theorem.
As in the finite dimensional case, the following Lemmata hold [28]: M̃/Γ is homogeneous if and only if every element of Γ is a Clifford translation of M̃ (Lemma 5.3), and every Clifford translation of the hyperbolic space is trivial, while the Clifford translations of a Hilbert space are the ordinary translations (Lemma 5.4). We can now classify complete homogeneous Hilbert manifolds of constant sectional curvature. Naturally we can assume that the sectional curvature is 0 or ±1.

Theorem 5.1. Let M = M̃/Γ be a homogeneous complete Hilbert manifold of constant sectional curvature K ≤ 0. If K < 0, then M is isometric to the hyperbolic space. If K = 0, then Γ is a group of ordinary translations {v_i ∈ H, i ∈ I} such that, for every finite dimensional subspace σ ⊆ H, the translations of Γ lying in σ are generated by finitely many linearly independent vectors.

Proof. By Lemmata 5.3 and 5.4, if M̃/Γ is homogeneous and K < 0, then Γ = {e} and so M is isometric to the hyperbolic space. If K = 0, then Γ must contain just ordinary translations. Hence we may write Γ = {v_i ∈ H, i ∈ I}. Let J ⊂ I be a finite subset of I and let σ = span(v_k : k ∈ J). Let J̃ = {v ∈ Γ : v ∈ σ}. Then J̃ acts freely and properly discontinuously on the finite dimensional subspace σ. Therefore, see [28], there exist v'_1, . . . , v'_k ∈ J̃ such that J̃ = span_Z(v'_1, . . . , v'_k), and the theorem follows.

Theorem 5.2. Let S(H) be the unit sphere of an infinite dimensional Hilbert space H. Let Γ be a group acting isometrically and properly discontinuously on S(H). Then S(H)/Γ is homogeneous if and only if H is a Hilbert space over F, where F is one of the fields R, C and Q (quaternions), Γ is a finite multiplicative group of elements of norm 1 in F which is not contained in a proper subfield F_1, R ⊆ F_1 ⊊ F, of F, and Γ acts on S(H) by F-scalar multiplication of vectors. Conversely, all such manifolds are homogeneous manifolds of constant sectional curvature 1.
Proof. Let T be a Clifford translation. By Lemma 5.2, T leaves a geodesic γ invariant in S(H). Now, it is well-known that any geodesic in S(H) is closed and spans a two dimensional vector space. Hence T must act as a Clifford translation on the unit circle, and so it has finite order n [28]. Moreover, T is a Clifford translation on the unit sphere of any subspace spanned by x, T(x), . . . , T^{n−1}(x), and so it acts by multiplication on it. Now, setting W = span(x, T(x), . . . , T^{n−1}(x)), we have H = W ⊕ W^⊥ and we can iterate the above argument on W^⊥. However, the displacement function of T must be constant, and so T must act by multiplication on the whole of H; the Theorem then follows directly from a theorem of Wolf [27,28].
We conclude this section by giving a properly discontinuous Z^n action on the infinite dimensional hyperbolic space.
It is well-known that H^n can also be described as {(x_1, . . . , x_n, t) ∈ R^{n+1} : t > 0, x_1² + · · · + x_n² − t² = −1}. The Riemannian metric on H^n, that we also denote by g, is the restriction of the Minkowski metric of R^{n+1} to H^n. The isometry group Iso(H^n, g) = O(n, 1)^+ is the set of the isometries with respect to the Minkowski metric which preserve H^n. Therefore, we have an injective homomorphism Z^n ֒→ O(n, 1) such that Z^n acts linearly, isometrically and properly discontinuously on the hyperbolic space H^n. | 2013-09-16T15:33:21.000Z | 2013-09-16T00:00:00.000 | {
"year": 2014,
"sha1": "f7b6d42e5e6f7e0b3b49fa19dbf95927605f0931",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1309.4006",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f7b6d42e5e6f7e0b3b49fa19dbf95927605f0931",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
236390994 | pes2o/s2orc | v3-fos-license | Cooperation During Orthodontic Treatment of Patients with I Phase and II Phase Orthodontic Treatment
Cooperation during orthodontic treatment of patients with I phase and II phase orthodontic treatment. Int. J. Odontostomat., 15(2):526-531, 2021. ABSTRACT: To evaluate differences in cooperation of adolescent patients in active orthodontic treatment between those who received one phased treatment (no prior interceptive or early treatment) and two phased treatment (prior interceptive or early treatment and subsequent corrective treatment). A prospective cohort study was carried out in 132 patients, 10 to 17 years old, undergoing orthodontic treatment at CES University Dental Clinics and in 9 private practices in Medellín, Colombia; two groups of 66 patients were defined: one that received two phased treatment and one that received one phased treatment. The Orthodontic Patient Cooperation Scale (OPCS) was applied to all individuals every three months during the first year of treatment in order to assess cooperation. Statistical differences between the two groups were assessed using the SPSS® software program. Significantly greater cooperation (M = 4.6) was observed in patients who had received two phased treatment compared with those who were only subjected to one phased treatment (M = 2.3). Patient cooperation during orthodontic treatment does not seem to be negatively affected by two phased treatment and, on the contrary, two phased treatment seems to have a positive impact when compared with one phased treatment. The most important factors found to influence cooperation were correlated with attitude, interest and commitment to treatment, and patient and parental motivation. The OPCS scale proved to be useful for evaluating cooperation and making comparisons with other studies.
INTRODUCTION
The success of conventional orthodontic treatment depends on various biological, biomechanical and psychosocial factors (Albino et al., 2000; Bos et al., 2005). Different studies have evaluated the influence of patient cooperation and have concluded that it is an important factor that can affect treatment outcome (Daniels et al., 2009; Carvajal & Sierra, 2013).
Albino et al. (2000) as well as Sinha & Nanda (2000) reported that adequate patient cooperation ranges between 40 % and 60 % and is influenced by different aspects such as patient and parental motivation, compliance, adequate use of appliances, and treatment length (Albino et al.; Sinha & Nanda). Regarding the latter, it has been suggested that longer treatment periods could be associated with a decrease in cooperation (Mavreas & Athanasiou, 2008; Abu Alhaija et al., 2010). However, said reports refer to the duration of conventional orthodontic treatment and not to previous interceptive orthodontic treatment or two-phase treatments. In contrast, Gross et al. (1985) reported that patients who initiated with interceptive treatment presented high levels of cooperation initially, but cooperation decreased by between 20 and 90 % during the corrective phase. Other authors, such as Slakter et al. (1980), found that poor cooperation prolonged treatment times; moreover, Skidmore et al. (2006) reported early treatment termination or suspension due to lack of commitment, motivation and cooperation.
Although a significant percentage of patients undergo two phased treatment, and it could be inferred that the resulting increase in treatment time could affect cooperation during corrective treatment, there are no reports in the literature that have evaluated its impact. The purpose of this study was therefore to evaluate differences in cooperation between patients who received two phased treatment and those who received only one phased treatment.
Design.
A non-probabilistic sample of 132 patients between 10 and 17 years old requiring orthodontic treatment, who consulted between 2014 and 2015 at CES University Dental Clinics and 9 private practices in Medellín, Colombia, was selected. The sample was divided into two groups of 66: an exposed group consisting of patients who had received two phased treatment, and a non-exposed group that included patients who initiated one phased treatment. Patients who missed three or more consecutive appointments during treatment were excluded.
Patient cooperation was assessed with the Orthodontic Patient Cooperation Scale (OPCS) (Table I) (Amado et al., 2008), consisting of a 10-item Likert-type scale to be completed by the orthodontist during the first year of treatment at 3, 6, 9 and 12 months.
The sample size was calculated with a 95 % confidence level, 80 % power, a risk of non-cooperation of 25 % in the unexposed group (patients who received one phased treatment) and a risk of non-cooperation of 50 % in the exposed group (patients who received two phased treatment).
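The stated design parameters reproduce a group size close to the one used, as the following sketch suggests; it assumes a standard two-proportion comparison via Cohen's h, since the exact procedure the authors used is not reported.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Risk of non-cooperation: 25 % (one phased) versus 50 % (two phased).
h = proportion_effectsize(0.50, 0.25)      # Cohen's h for two proportions
n = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80,
                                 ratio=1.0, alternative='two-sided')
print(f"Cohen's h = {h:.3f}; required sample size per group = {n:.0f}")
```

This yields roughly 57-58 patients per group, consistent with the 66 per group actually enrolled.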
The study design was previously approved by the CES University Institutional Ethics Committee and was in compliance with Colombian legal Resolution 008430, which typifies the ethical regulations for the performance of clinical studies in humans.
Table I. Scale used for scoring cooperation in adolescent orthodontic patients (Orthodontic Patient Cooperation Scale). Please read the following list of behaviors, keeping in mind the patient named on the cover sheet accompanying this questionnaire. For each item, decide to what extent the statement describes the patient's behavior, then circle the response that most closely reflects your estimate of the patient's behavior. Each item is rated: Always / Frequently / Sometimes / Rarely / Never.

1. This patient keeps appointments and is prompt.
2. This patient has distorted wires and/or loose bands.
3. The parent(s) of this patient is (are) observed to be interested and involved in treatment.
4. This patient speaks of family problems or a poor relationship with parent(s), or demonstrates such problems in interactions with parent(s) which I have observed.
5. This patient acts enthusiastic and interested in treatment.
6. This patient's behavior is sullen, hostile, belligerent, or rude.
7. This patient cooperates in the use of headgear and/or elastics.
8. This patient complains about treatment procedures.
9. This patient demonstrates excellent oral hygiene.
10. This patient complains about having to wear braces.

Statistical analysis. The psychometric properties of the OPCS were verified by means of the Rasch model in order to perform data analysis (Rojas et al., 2019). A univariate analysis was carried out using
the Stata v 12.1 ® program (College station, Texas), considering central tendency measures, and dispersion according to the category of the variables. For the bivariate analysis, a comparison was made between the variables of exposure, outcome and adjustment, with the linear model of repeated measures, determining mean cooperation for each category of the independent variables, with a confidence interval of 95 % and their association through Eta statistic. For the multivariate analysis, a linear regression of repeated measures was performed considering the exposed group as intersubject effect and sex, age, socio-economic level, type of malocclusion as co-variables; intra-subject effect (dependent variable) measures were performed at defined times.
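For readers who want to reproduce a comparable analysis, the repeated-measures design described above can be approximated with a mixed-effects model (random intercept per patient). The sketch below is illustrative only: the study itself was run in Stata, and all file and variable names here are hypothetical.

```python
# Illustrative sketch; not the authors' code. Assumes a long-format table
# with one row per patient per time point (3, 6, 9, 12 months).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("opcs_scores.csv")  # hypothetical file

# Random intercept per patient captures the within-subject correlation
# of the repeated OPCS measurements.
model = smf.mixedlm(
    "cooperation ~ group + sex + age + ses + malocclusion + C(month)",
    data=df,
    groups=df["patient_id"],
)
print(model.fit().summary())
```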
RESULTS
With regard to the sociodemographic and clinical characteristics of the sample, patients had an average age between 14 and 15 years (SD = 1.55 - 1.9). Twenty-five percent belonged to Status 4 socio-economic level (upper middle class) and 17 % to Status 3 (middle class); approximately 60 % were female (Table II). Table III presents the estimated average range of cooperation for each treatment period. Cooperation of individuals with two-phase treatment was higher (M = 4.6, SD = 6.3) than that of those who received only one-phase treatment (M = 2.3, SD = 2.7), with similarly significant differences in each time period.
The associations found between cooperation and the characterizing variables of the sample, such as socio-economic level, sex, and malocclusion, revealed significant differences depending on treatment history: those with two-phase treatment showed an average cooperation of 5.1 - 8.1, while those who received one-phase treatment exhibited an average between 2.4 and 3.4. After the initial evaluation period, increased cooperation was observed in both groups (Fig. 1).
Cooperation was expressed differently in males: those who received only one-phase treatment presented an average ranging between 1 and 3, while in the group that had two-phase treatment cooperation ranged between 6 and 9 (Fig. 2). Cooperation according to socio-economic level was similar, with an average ranging between 3 and 9 for both groups, although greater cooperation was evident in patients with two-phase treatment (Fig. 3).
Regarding differences in cooperation between malocclusion types, similar values were observed for both groups, except during the final treatment period, when increased cooperation was found in those with two-phase treatment. In contrast, individuals with Class III malocclusion exhibited less cooperation overall than those who had one-phase treatment (Fig. 4).
DISCUSSION
The purpose of this study was to assess the effect of one-phase and two-phase treatment on cooperation. The evaluation of cooperation yielded positive results for both groups; average cooperation of patients with two-phase treatment was higher, coinciding with Lee et al. (2008) and Hsieh et al. (2005). These results, on the other hand, contrast with those of Carvajal & Sierra, who found that cooperation decreased in patients undergoing two-phase treatment of longer duration. The findings of this study could be explained by the fact that, in current clinical practice, socio-economic and cultural conditions have led patients and their parents to accept two-phase treatment (Amado et al.).
Regarding sex, girls from both groups showed an average cooperation between 3.1 and 6.8 during the first year of treatment, in contrast with boys, who presented lower averages in both groups, with scores between 1.5 and 6.2. This agrees with Daniels et al., who reported significant differences between boys and girls, which could be attributed psychosocially to the fact that girls mature earlier and tend to be more aware and self-conscious about their appearance due to social stereotypes (Daniels et al.). In this study, this assessment depended on the commitment of participants and their parents, taking into account that girls outnumbered boys (Mtaya et al., 2009). Moreover, a detailed analysis indicated that both boys and girls who had two-phase treatment were more cooperative than their same-sex counterparts who had not.
The most important factors that influenced cooperation coincided with those presented in previous studies (Mehra et al., 1998; Hsieh et al.; Tsomos et al., 2014): attitude, interest and commitment, motivation, compliance, proper use of appliances, oral hygiene and duration of treatment.
An analysis of treatment periods indicated increased cooperation during the first six months and a decrease after nine months in both groups, possibly due to a perception that no relevant changes in appearance were observable, decreased parental supervision, and prolonged treatment times. In contrast, cooperation increased again after 12 months, which is likely associated with the fact that this is the period when patients begin to observe changes in their appearance.
In summary, the results of this study indicate a higher cooperation rate in patients who received two-phase treatment compared to those who received only one-phase treatment, which seems to be associated with patient participation during the interceptive stage, improved communication, motivation and preparation for the corrective stage.
CONCLUSIONS
-Patients with two-phase treatment were statistically more cooperative than patients who received only one-phase treatment.
-Differences were evident between sexes, revealing increased cooperation in girls.
-The most important factors found to influence cooperation were attitude, interest and commitment to treatment, and patient and parental motivation.
-The OPCS scale proved to be useful for evaluating cooperation and making comparisons with other studies.
ACKNOWLEDGMENTS
The authors would like to thank the professionals who participated in evaluating the study patients for the development of the research. | 2021-07-27T00:04:57.407Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "769cf3cb2a0811dbdbaf47bb4802c8d48dd64087",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.cl/pdf/ijodontos/v15n2/0718-381X-ijodontos-15-02-526.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5874a31e0da76968fe915bdec0c672f1fea80611",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
260435456 | pes2o/s2orc | v3-fos-license | You Are How (and Where) You Search? Comparative Analysis of Web Search Behaviour Using Web Tracking Data
We conduct a comparative analysis of desktop web search behaviour of users from Germany (n=558) and Switzerland (n=563) based on a combination of web tracking and survey data. We find that web search accounts for 13% of all desktop browsing, with the share being higher in Switzerland than in Germany. We find that in over 50% of cases users clicked on the first search result, with over 97% of all clicks being made on the first page of search outputs. Most users rely on Google when conducting searches, and users preferences for other engines are related to their demographics. We also test relationships between user demographics and daily number of searches, average share of search activities among tracked events by user as well as the tendency to click on higher- or lower-ranked results. We find differences in such relationships between the two countries that highlights the importance of comparative research in this domain. Further, we observe differences in the temporal patterns of web search use between women and men, marking the necessity of disaggregating data by gender in observational studies regarding online information behaviour.
Introduction
Web search engines are ubiquitous nowadays and act as major information gatekeepers in high-choice media environments. Google alone handled around 6.9 billion queries per day in 2020 (Petrov 2019), with an average user of Google.com turning to the site 18.15 times per day as of April 2021 (Alexa 2021). When Google experienced a 5-minute outage in 2013, global web traffic dropped by 40% (Svetlik 2013). The numbers are staggering, especially given that Google is just one of the search engines, though the dominant one in most markets. Furthermore, search engines are highly trusted by their users: according to the Edelman Trust Barometer (Edelman 2021), in 2020 search engines were reported to be the most trusted information source globally.
Given the importance of search engines for shaping public opinion, it is crucial to understand users' web search behaviours. Yet, our knowledge in this context remains limited and primarily relies on two types of data: eye-tracking data (Schultheiß, Sünkler, and Lewandowski 2018) and search engine transaction log data (Jansen and Spink 2006; Weber and Jaimes 2011). Both of these data sources have their limitations: eye-tracking studies typically rely on small user samples and can hardly be generalized to broader populations. On the contrary, log-based studies capture the behaviour of large groups of users, but on the aggregate level, thus limiting possibilities for inferring the impact of users' individual characteristics on how they search for information. Additionally, log-based studies cannot reliably infer the connection between search result ranking and user behaviour, because researchers cannot, in retrospect, identify how search results were ranked and presented to individual users due to temporal changes in the results and the effects of search personalization (Hannak et al. 2013; Kliman-Silver et al. 2015) and randomization (Makhortykh, Urman, and Ulloa 2020; Urman, Makhortykh, and Ulloa 2021).
In the present study, we address these limitations by relying on a type of data source that, to the best of our knowledge, has not been used in the context of web search behaviour. We utilize the combination of web tracking data (Christner et al. 2021) with demographic data about individual users acquired via survey to explore users' web search behaviour. Web tracking data includes information on user desktop-based browsing behaviour along with the actual HTMLs of the browsed content. By acquiring HTMLs of pages viewed by the users, we can infer the exact composition and ranking of web search results users were exposed to and, consequently, find out which of these results they clicked on.
Using the combination of web tracking and survey data collected in Germany and Switzerland in spring 2020, we aim to address several gaps in the existing scholarship on web search behaviour. First, we scrutinize the effect of individual demographic characteristics on search behaviour using a large sample of users. Second, unlike earlier large-scale (i.e., log-based) search behaviour studies, which were focused on single-country populations (usually, the US), our study offers a comparative perspective and goes beyond the US context. Third, we examine the user clicking behavior in relation to web search results ranking in real-life conditions -in contrast to eye-tracking studies that typically rely on smaller samples and are carried out in lab settings.
Specifically, we address the following research questions: 1) how frequently do users with different demographic characteristics and socio-economic status use search engines?; 2) what are the temporal patterns of web search use and do they differ by demographics? 3) are there demographic or socio-economic status-based differences in the choice of specific search engines (i.e., Google/Bing/other)?; 4) how does the rank of a search result relate to the clicking behaviour of users with different demographics? We also examine country-level differences in relation to each of the four questions.
Related work
Studies on web search behaviour to date have relied on either of the two data source types: eye-tracking and search engine transaction log data.
Eye-tracking-based studies are typically conducted on smaller samples, usually not demographically representative ones, and within lab settings. The advantage of such studies is that they allow examining user attention patterns in the context of web search and, for instance, exploring the relation between the ranking of search results and users' clicking behaviours. In one of the earliest studies (Granka, Joachims, and Gay 2004), the authors examined attention and clicking patterns of web search engine users based on a student sample (n=36), and found that top-ranked results receive disproportionately more attention and clicks than lower-ranked ones. This finding was corroborated in numerous further studies (e.g., Schultheiß, Sünkler, and Lewandowski 2018; Joachims et al. 2007; Guan and Cutrell 2007).
Eye-tracking studies have also investigated the impact of additional factors on search result selection. For instance, two studies, one using a small (n=18) sample of users of diverse ages and occupations and one using a student sample (n=22) from the US, found that clicking decisions are influenced not only by ranking but also by the perceived relevance of search results. A replication of the latter study (Schultheiß, Sünkler, and Lewandowski 2018), conducted circa 10 years after the original one on a student sample (n=28) in Germany, found similar effects, thus indicating the stability of the observed effects across time and different national contexts.
Despite providing important insights into user search behaviour, eye-tracking studies are subject to a number of limitations, in particular their limited scalability. While some potential solutions for scaling have been offered in recent years, e.g., eye-tracking via webcam devices (Papoutsaki, Laskey, and Huang 2017), their precision remains lower than that of more conventional lab-based dedicated eye-trackers (Holmqvist 2011). Due to the scalability problem, eye-tracking studies are based on small samples which are not demographically representative and often consist of student samples recruited in the US. This leads to a limited generalizability of eye-tracking-based findings: it is unclear whether users with different demographics search the web in similar ways, and whether there are country-level differences in how they do it.
Transaction logs-based studies, on the contrary, allow examining web search behaviour on a large scale. One of the earliest studies utilizing transaction log data (Silverstein et al. 1999) was conducted in 1999. Based on circa 1 billion search queries entered into the AltaVista search engine over a period of 6 weeks, the authors found that users tend to type in short queries and rarely navigate beyond the first page of search results. Similar findings were reported by the authors of a 1-week-long study based on the Korean search engine Naver (Park, Ho Lee, and Jin Bae 2005).
Log data has also been utilized to examine temporal aspects of web search (Zhang, Jansen, and Spink 2009) and the patterns of search query usage (Weber and Jaimes 2011). Such studies allow inferring real-life web search usage patterns and are based on large data samples, as contrasted with eye-tracking-based lab studies. However, log-based studies also have several limitations. First, due to the difficulty of obtaining search log data owned by proprietary companies, most transaction logs-based studies focus on single search engines. This undermines the generalizability of their findings, since usage patterns can be affected by differences in search engine interfaces and/or differences in the demographics of their users. Even studies such as (Jansen and Spink 2006) that analyze log data from multiple search engines cannot match the users across these engines, which prevents them from examining if the same users utilize multiple different engines and, if so, whether and how their behavior differs depending on the engine.
The absence of reliable demographic data about the users is another limitation of logs-based studies. Such data is sometimes available on users' gender and age, but not on other variables such as education or income level, which can only be inferred by the researchers (e.g., (Weber and Jaimes 2011)); however, even such inference-based studies are rare. Third, log-based data is inherently noisy, because search requests might be executed not only by human users but also by bots, and it is difficult to differentiate between organic and automated requests (Jiang, Pei, and Li 2013). Finally, transaction log data does not allow tracing the position of the search result a user clicked on, and the only ranking-related parameter available is the number of the search result page on which a user selected a result.
The aforementioned limitations of both approaches can be addressed by utilizing web tracking data that includes full HTMLs of the pages browsed by users and is combined with survey data. Unlike eye-tracking, this approach is scalable and allows observing user behavior in real-life circumstances, not in a lab setting. Unlike with transaction logs, with tracking data it is possible to reliably know users' demographics (from the survey), observe user behavior across multiple search engines, make sure that the data comes from real users and not bots or machines and, finally, through matching the data from the scraped web search HTML with the URLs subsequently clicked by the user, infer the exact position of the result a user clicked on. Thus, this approach allows combining the strengths of both approaches previously utilized to measure web search behavior, while overcoming their limitations.
Data
To collect data for our study, we recruited a sample of Internet-using participants in the age range of 18-75 years from Germany and German-speaking Switzerland. The recruitment was conducted via the market research company Demoscope in early March 2020 using online access panels with 200,000 members (Germany) and 35,000 members (Switzerland). Participants were randomly selected in accordance with quotas regarding gender, age, and education to construct a representative sample of the German and the German-speaking Swiss population. For Germany, the region of residence (West vs. East) was used as an additional sampling criterion.
The selected participants were invited to participate in the survey, which was completed by 1,952 participants in Germany and 1,297 in Switzerland. As a requirement to take part in the survey, participants were asked whether they agreed to participate in the online tracking study using a browser extension that records their online behavior. While agreement to be tracked was required to partake in the survey, participants were informed that they could opt out from being tracked at any time.
After agreeing to participate in online tracking, each participant received a link to the website where extensions (i.e., plugins) for desktop versions of Chrome and Firefox browsers could be downloaded and installed. The extensions were designed specifically for the project and based on the screen-scraping principle, namely capturing HTMLs of web content appearing in the browser, where the extension was installed (Christner et al. 2021). The captured HTML content together with the URL address of the page from which it was captured were sent to the remote server, where data were encrypted and stored.
To protect participant privacy, the extensions were supplemented by a "hard" denylist (i.e., a list of websites whose content was not captured and visits to which were not recorded; this included insurance companies, medical services, pornography websites, bank websites, messengers, and e-mail services) and a "soft" denylist (i.e., a list of websites whose content was not captured but visits to which were recorded; the list included commercial websites). Participants were also provided with the possibility to switch the browser extension to 'private mode', in which no HTML content was captured, so that they could browse privately if they felt the need to.
Out of the original sample of participants expressing agreement to being tracked, 587 (Germany) and 601 (Switzerland) participants had successfully registered at least one website visit by the end of the tracking period (March 17 to May 26, 2020). The present sample consists only of those who registered at least one web search: 563 participants in Switzerland and 558 in Germany. The participants were asked about their age, gender, education and income. The reported levels of education were collapsed into three subgroups to ensure comparability between the two countries: obligatory school only, full secondary education, tertiary education. Participants also reported their monthly income (according to pre-defined income breaks, different for Switzerland and Germany due to the differences in overall income levels between the two countries). The demographic distributions in the samples are as follows: self-reported gender: CH - 43.8% female, DE - 44.8% female; age: CH - mean=43.8, median=42; DE - mean=49.3, median=51; education: CH - 3.6% obligatory education, 54.9% full secondary education, 41.6% tertiary education; DE - 12.9%, 51.3%, 35.8% respectively; income: CH - 7.2% not reported, 37.7% below 3999 CHF, 37.1% between 4000 and 6999 CHF, 17.9% above 7000 CHF; DE - 3.1% not reported, 48.4% below 1999 EUR, 44.8% from 2000 to 4999 EUR, 3.8% above 5000 EUR.
Methods
To filter out only web search visits from the overall tracking data, we first used URL-based filtering by domain. This step was based on a list of search domains constructed by us from lists of engines commonly utilized by European users, as indicated by sources such as (Edelman 2021), and included the following engines: Google, AOL, Bing, DuckDuckGo (DDG), Ecosia, Gigablast, Metager, Qwant, Swisscows, Yahoo, Yandex. Then, we also filtered the data by subdomains and URL parts that would point to a service from a search engine company other than web search (e.g., Google Photos in the case of Google). Then, we calculated the share of visits to image search, video search and news search among all search traffic. In total, we recorded 348,018 user visits to text, image and video search across all engines combined. The results demonstrated that visits to these services are infrequent: image search accounted for 0.2% of search engine traffic and video search for 0.06%. Thus, we focused on text search only, as it accounted for over 99.7% of all search traffic.
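A minimal sketch of this filtering step is given below. The domain list is abbreviated, the path and query heuristics are assumptions for illustration, and this is not the authors' actual code.

```python
# Hypothetical sketch of the two-step URL filtering described above.
from urllib.parse import urlparse

SEARCH_DOMAINS = {"google", "bing", "duckduckgo", "ecosia", "yahoo", "yandex"}  # abbreviated
NON_SEARCH_PATHS = ("/maps", "/photos", "/mail", "/translate")  # example non-search services

def is_text_search_visit(url: str) -> bool:
    parsed = urlparse(url)
    # Step 1: keep only visits whose domain belongs to a search engine
    if not any(part in SEARCH_DOMAINS for part in parsed.netloc.lower().split(".")):
        return False
    # Step 2: drop subdomains/paths pointing to other services of the same company
    if any(parsed.path.startswith(p) for p in NON_SEARCH_PATHS):
        return False
    # Step 3: drop image/video/news verticals (e.g., Google's "tbm" parameter)
    if any(v in parsed.query for v in ("tbm=isch", "tbm=vid", "tbm=nws")):
        return False
    return True

print(is_text_search_visit("https://www.google.com/search?q=example"))  # True
```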
After filtering out text search results we merged the tracking data about participants' web search visits with their demographic data (self-reported gender and age) and socioeconomic data (self-reported income and level of education) obtained via the survey. This merged data was used in the next steps of the analysis. Each step was performed separately for the German and the Swiss subsamples, with comparisons drawn between the two when reporting the results.
RQ1: frequency of search use. To establish how frequently users with different demographic characteristics utilize web search, we computed descriptive statistics on the average proportion of visits to web search engines among the overall number of visits tracked and the average number of search queries executed daily by users from different demographic (age and gender) groups. Then, we tested the association between users' demographic and socio-economic characteristics and the frequency of their web search usage via a generalized linear model. We used the average number of searches executed by each user as a dependent variable, and users' characteristics as predictors. In this and the other regression models described below, we controlled for users' overall web activity as expressed by the total number of web pages each user browsed throughout the tracking period.
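Schematically, such a model could be specified as follows (variable names are hypothetical, and since the paper does not state the family or link function, the default Gaussian family is used purely for illustration):

```python
# Hypothetical sketch of the RQ1 model: average daily searches regressed on
# demographics and socio-economic status, controlling for overall activity.
import pandas as pd
import statsmodels.formula.api as smf

users = pd.read_csv("users.csv")  # hypothetical: one row per participant

model = smf.glm(
    "avg_daily_searches ~ gender + age + education + income + total_pages_visited",
    data=users,
).fit()
print(model.summary())
```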
RQ2: temporal patterns of web search. To examine temporal patterns of web search and their differences by country and demographics, we have calculated the frequency of web search use by day of the week and time of day (morning = 6am to 12pm; afternoon = 12pm to 6pm; evening = 6pm to midnight; night = midnight to 6am). We then compared the results in terms of the patterns that emerged.
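A minimal sketch of this bucketing (hypothetical column names, not the authors' code):

```python
# Hypothetical sketch of the weekday x time-of-day aggregation used for RQ2.
import pandas as pd

visits = pd.read_csv("search_visits.csv", parse_dates=["timestamp"])  # hypothetical file

def time_of_day(hour: int) -> str:
    if 6 <= hour < 12:
        return "morning"
    if 12 <= hour < 18:
        return "afternoon"
    if 18 <= hour < 24:
        return "evening"
    return "night"  # midnight to 6am

visits["weekday"] = visits["timestamp"].dt.day_name()
visits["period"] = visits["timestamp"].dt.hour.map(time_of_day)

# Searches per user in each weekday x period cell, then averaged over users
per_user = visits.groupby(["user_id", "weekday", "period"]).size()
print(per_user.groupby(["weekday", "period"]).mean())
```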
RQ3: search engine preferences. To assess the differences in the choices of specific search engines by demographics, we calculated descriptive statistics across engines. First, we calculated the share of each search engine in overall web search traffic in our sample. Then, for the search engines that accounted for at least 1% of search traffic in either of the two (German and Swiss) samples, we calculated the share of users in each demographic (gender; age) group that used the engine at least once during the observation period.
Then, among those users who used an engine at least once, we calculated the average share of search traffic each user (disaggregated by gender and age groups) devoted to a specific engine, to assess the strength of users' preferences towards specific engines. As Google, unsurprisingly, emerged as a dominant engine in all of the cases, we decided to examine whether demographic and socio-economic characteristics of users are associated with their likelihood to "ditch" Google in favor of one of the other engines. For that, we relied on regression analysis using zero-inflated Poisson models due to the nature of the data (i.e., most users recorded 0 visits to engines other than Google). As a dependent variable, we used participants' share of visits to engines other than Google in their overall search traffic (varying from 0 to 100), and as predictors we used socio-economic and demographic characteristics, controlling for the total number of web pages visited.
RQ4: result ranking and clicking behavior. To establish the association between result ranking and clicking behavior, we extracted the links of organic text results from the HTML files corresponding to pages browsed by the users for Google and Bing. We focused on these two engines due to their relative prevalence in users' searches (see Results, subsection 1). We extracted the links in the same order as they appeared in users' browsers. Then, we matched this data to the URLs accessed by users after each visit to a search engine to infer which of the URLs displayed in the search results a user clicked on. Based on this inference, we calculated summary statistics on the share of clicks different search pages and differently ranked results received. Then, to infer whether certain demographic or socio-economic characteristics contribute to users' likelihood to click on higher- or lower-ranked results, we performed regression analysis using the average ranking of the Google search results a user clicked on as a dependent variable, their demographic and socio-economic characteristics as predictors, and the total number of web pages they visited as a control. We focused here exclusively on Google due to its dominance in users' web search; running the analysis on Google, unlike on the other engines, allowed us to preserve a number of participants high enough to be representative of the general sample.
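A minimal sketch of this rank-inference step is shown below. The HTML selector is an assumption (result page markup changes frequently), and the function names are hypothetical; the parsing rules actually used in the study may differ.

```python
# Hypothetical sketch: extract ranked organic result links from a saved
# results page and look up the rank of the URL the user visited next.
from typing import Optional
from bs4 import BeautifulSoup

def extract_result_links(serp_html: str) -> list:
    soup = BeautifulSoup(serp_html, "html.parser")
    links = []
    # Assumed selector: Google's organic results have historically been
    # wrapped in divs with class "g"; real selectors may differ per engine.
    for anchor in soup.select("div.g a[href]"):
        href = anchor["href"]
        if href.startswith("http"):
            links.append(href)
    return links

def clicked_rank(serp_html: str, next_url: str) -> Optional[int]:
    """Return the 1-based rank of the clicked result, or None if no match."""
    for rank, href in enumerate(extract_result_links(serp_html), start=1):
        if href == next_url:
            return rank
    return None
```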
Frequency of search engine use by demographics
On average, participants used text search 8.8 times per day (see Table 1). There were no major differences in the number of average daily searches between the subsamples from the two countries: in Switzerland, users engaged in web search on average 8.76 times per day, and in Germany 8.84 times per day. However, there were differences in the ratio of web search visits to the overall number of visits tracked. In the overall sample, each user on average devoted 13% of their total tracked browsing to web search. In the German sample this number was 10.4%, while in the Swiss one it was 15.5%. Thus, though the users from both countries executed similar numbers of searches on a daily basis, in the Swiss sample web search accounted for a higher share of total internet browsing. This is in line with the discrepancy in the average number of pages browsed in total in each sample: 2,322.1 pages in Switzerland versus 3,833.7 in Germany.
In Table 1 we present summary statistics for the average share of search in web browsing per user and average number of searches executed daily per user disaggregated by demographic groups with respect to gender and age. Though there are apparent gender-based discrepancies with regard to the average number of searches executed daily per user in Germany and with regard to the share of search in overall browsing in Switzerland, in regression analysis gender is not a significant predictor in either sample. The only significant predictor of either of the aforementioned variables is age. In Switzerland, it is negatively associated with both the average daily number of searches and the share of search in total browsing. In Germany, the observed relationship is significant only for the former variable but not for the latter. Detailed regression outputs are presented in Tables 2 and 3.
Temporal patterns of web search
The analysis of temporal patterns of web search reveals major differences with regard to when users of different genders tend to use search engines. The average numbers of searches executed per user on each weekday and in each time-of-day period are presented in Fig. 1.
In both Switzerland and Germany, men's search patterns are stable throughout the week, with the most active search times being morning and afternoon. In the evenings, searching is less prevalent than in the mornings and afternoons, followed by a major drop in searching at night. Women's search patterns, however, are different. In Germany, they are, similarly to men's, consistent throughout the week, but women tend to search considerably less than men in the mornings and afternoons, slightly less in the evenings, and with the same intensity at night. The differences in German women's search activity by time of day are much less drastic than in men's. In Switzerland, on the other hand, time-of-day-based differences in women's search activity are similar to those observed for men. However, Swiss women's searches in the mornings and afternoons are unevenly distributed throughout the week: from Monday to Thursday women actively use web search in the afternoons, but from Friday to Sunday their afternoon search activity is reduced compared to the first part of the week. Morning search activity among Swiss women decreases progressively from the start to the end of the week.
The observed differences between women and men in both countries are most pronounced among the younger (18-45 years old) part of the populations, suggesting that they might have to do with the life circumstances of younger women such as, potentially, child-bearing and child-caring duties.
Table 4: Share of searches within a specific search engine among all searches from all users by country subsample. The numbers are reported only for specific search engines for which the share is above 1% in one of the subsamples. The share of all other search engines is aggregated as "Other".
However, based on the available data, it is hardly possible to verify this interpretation. Regardless of the reasons behind the observed discrepancies, our findings once again highlight the necessity of disaggregating data about information behaviour by gender (Perez 2019) in order to grasp the behavioral patterns of all parts of the population properly.
Usage of specific search engines
Google largely dominates search traffic in both country subsamples, being slightly more prevalent in Switzerland than in Germany, as demonstrated in Table 4. In Germany, it is followed by Bing, with the latter accounting for 7.4% of all traffic, substantially lower than Google but more than 3 times higher than all other engines in either of the country subsamples. In Switzerland, Bing is less prominent, with Ecosia being the second most popular engine that accounts for % of search traffic. The only other engine that received at least 1% of the traffic in either of the samples is DuckDuckGo (DDG), with 1.3% of traffic in Switzerland. These findings are in line with the reports made by companies monitoring global search traffic at the country level, such as Statcounter (Statcounter 2020).
In Table 5 we report the share of participants who used each search engine at least once, disaggregated by demographics. In light of the already stated observations about the share of search traffic, it comes as no surprise that Google was used at least once by almost all participants. The second most popular engine by the share of users who turned to it at least once is Bing. It was used at least once by 17.7% of German participants and 10.7% of Swiss participants, with the share of men turning to Bing being higher than the share of women in both cases. The patterns of Bing usage by age groups, however, vary between the two countries, with its prevalence being higher among older users in Germany and among younger users in Switzerland. The other engines were used at least once by a marginal share of users (less than 5% across both subsamples and all demographic groups). We observe no consistent gender/age patterns with regard to Ecosia and DuckDuckGo usage across the two countries.
In Table 7 we report average shares of search traffic via a given search engine per user among participants who used the engine at least once, and in Table 6 the share of users who used more than one search engine. This way we can assess how "partisan" participants are in terms of their search engine preferences -i.e., whether participants from different demographic groups tend to search within one search engine almost exclusively or engage in search across different engines, and whether this partisanship varies from one engine to another.
In both samples, around a quarter of all participants used more than one search engine; the share of such participants tends to be higher among men and older users in both cases (Table 6). We observe that Google tends to be the default search engine for all demographic groups in both countries, with participants who used it at least once directing around 90% of their search traffic there. Participants were less "partisan" in their usage of other search engines, with differences in the levels of "partisanship" between Bing on the one hand and Ecosia and DuckDuckGo on the other. Though Bing was used at least once by a larger proportion of participants than the other engines (Table 5), most demographic groups tend to direct only around 30-40% of their search traffic through it, indicating the absence of a strong preference for this engine. Those who used Ecosia or DuckDuckGo tend to be more "partisan" in their preference for these engines, directing 50-60% of their searches through them, though the strength of this preference is clearly much lower than that of Google users.
In Tables 8 and 9 we report the results of regression analysis conducted to establish which, if any, demographic and socio-economic characteristics are associated with a higher likelihood of using non-dominant (that is, not Google) search engines.
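For reference, the zero-inflated Poisson specification described in the Methods section could be sketched as follows (hypothetical variable names; statsmodels is one library implementing such models, and the authors' actual estimation setup may differ):

```python
# Hypothetical sketch of the models behind Tables 8-9: share of non-Google
# searches (0-100) per user, rounded to integer counts for the Poisson part.
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

users = pd.read_csv("users.csv")  # hypothetical: one row per participant

y = users["share_non_google"].round()  # approximation: shares as counts in 0..100
X = sm.add_constant(users[["female", "age", "education", "income",
                           "total_pages_visited"]])

# Same covariates in the count part and the zero-inflation (logit) part.
result = ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit").fit()
print(result.summary())
```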
In both countries, we observe no relationship between gender or age and one's likelihood to use a search engine other than Google (zero model outputs). However, in Switzerland income is negatively associated with the usage of Bing, while its association with the usage of Ecosia and DuckDuckGo is positive. In Germany, income is positively associated only with the usage of Ecosia, and no other demographic or socio-economic factors are significantly associated with the usage of specific search engines. In Switzerland, the usage of Ecosia is also negatively associated with the participants' level of education.
Table 6: Share of users in each demographic group who used more than one search engine.
When it comes to the prevalence of use of each of the search engines among the participants who used each engine at least once (count model outputs), a number of significant relationships emerge. Gender (being female) is positively associated with the usage of Ecosia in Switzerland and the usage of Bing in Germany, and negatively associated with the usage of DuckDuckGo in Germany. In both countries, there is a positive association between age and the usage of Bing and a negative association between age and the usage of Ecosia. The relationship between the usage of DuckDuckGo and age is significant (and negative) only in Switzerland. Education level has significant relationships only with the usage of Ecosia and Bing in both countries: for Bing, the relationship is negative in both cases, and for Ecosia it is positive in Germany and negative in Switzerland. Finally, income in Switzerland is negatively related to the usage of Bing and DuckDuckGo and positively to the usage of Ecosia. In Germany, the directionality of the relationship is the same for DuckDuckGo but reversed for Ecosia and Bing.
Overall, these observations suggest that there are country-level differences in users' preferences towards different search engines, including in the relation between such preferences and users' demographics and socio-economic status. Importantly, income was significant more often than the other variables, suggesting that participants with different economic status tend to prefer different engines.
Ranking of search results and user clicking behavior
We examined users' clicking behavior on Google and Bing, the two engines visited most frequently by the users in our sample. We found that on both engines participants clicked disproportionately more on top results. On Google, 97.11% of all clicks were associated with the results displayed on the first page; on Bing this number was even higher at 99.49%.
Even within the first page, users' clicks are distributed unequally. On Google, 51.3% of all clicks were associated with the very first result, followed by 15.68% of clicks on the second result, 9.23% on the third, 5.93% on the fourth, 4.21% on the fifth. As such, top-5 search results accounted for over 86% of all clicks on Google. On Bing, the corresponding distribution of clicks for the top-5 results is as follows: 52.18%; 19.72%; 9.24%; 7.35%; 3.57%. Thus, on Bing top-5 results received around 92% of all clicks. We did not observe major differences in the users' clicking behavior between the two engines, suggesting that it does not depend on the engine, at least when both engines have a similar (i.e., results presented in a ranked list) interface.
We found no significant relationship between (Google) users' likelihood of clicking on higher- or lower-ranked results and their demographic or socio-economic characteristics in Germany (see Table 10). In Switzerland, only age emerged as a significant variable, though its effect size is small. This suggests that the relationship between search result ranking and users' clicking behavior is largely independent of users' demographic and socio-economic characteristics. However, there seems to be a difference in the behavior of users from the two countries, as in the Swiss subsample the tendency to click on higher-ranked results is even more pronounced than in the German one. The mean of the average ranking of the results clicked per user is 2.49 for Switzerland (median = 2.3), while in Germany the corresponding values are 3.13 and 2.55.
Limitations
Our study has two major limitations. First, we examine only desktop-based browsing behaviour, because mobile-based tracking is notoriously complex to implement, especially in a way that would make mobile data comparable with the desktop data (Christner et al. 2021). This is an important limitation given that mobile devices account for roughly half of global internet traffic. Also, given the possibility that users' desktop- and mobile-based behaviours might differ, our findings have to be interpreted only in the context of desktop browsing. We suggest that future work focusing specifically on mobile browsing is necessary to gain insight into the differences between desktop- and mobile-based searching behaviours. Another limitation is that our data collection happened to take place in the spring of 2020 (the beginning of the COVID-19 pandemic), so the observed patterns may partially reflect pandemic-specific circumstances.
Discussion
Our analysis shows substantial differences across countries in all four aspects of web search behaviour we examined. This highlights the need for more comparative research on web search behaviour. Since behaviours of users from Germany and Switzerland, two geographically and, in part, culturally proximate countries, are vastly different, it is reasonable to assume that the behaviours of those in other countries are different as well. Thus, like with other online phenomena (Krishnan, Teo, and Lymm 2017;Urman 2019;Mahmood, Bagchi, and Ford 2004), the context of the country in which web-based behaviour is studied needs to be accounted for, and generalizations from single-country samples to the global populations should be avoided.
The fact that web search accounts for a rather high share of desktop browsing (13%) highlights the importance of search engines for the public. Similarly, the fact that the top-5 results attract around 90% of all clicks and that more than 97% of search visits do not go beyond page 1 of search outputs underscores the influence search rankings have on user information consumption. Given that search engines' retrieval and ranking algorithms are usually obscure and the outputs are heavily affected by personalization (Hannak et al. 2013; Kliman-Silver et al. 2015; Robertson, Lazer, and Wilson 2018) and randomization (Makhortykh, Urman, and Ulloa 2020; Urman, Makhortykh, and Ulloa 2021), algorithmic auditing studies, with a particular focus on the ranking of top search results, are necessary to understand how specific factors affect the search outputs and the quality of information consumed by users.
As web search algorithms tend to optimize the results based on user behaviour (Agichtein, Brill, and Dumais 2006), demographic and socio-economic status-based differences in the ways users search the web can have important implications for the selection of information that users access. For instance, if a search engine, especially one without explicit personalization such as DuckDuckGo, is used disproportionately more by users with certain characteristics (i.e., male users and/or those with lower income, see Tables 8 and 9), the search results will be tailored to the preferences of users with such characteristics. In extreme cases, this can lead to systematic biases in search results, e.g., the perpetuation of the "male gaze" in results concerning the representation of women (Noble 2018).
Table 9: Zero-inflated model outputs, predicting the usage of specific most popular search engines except Google, Germany. (*** p < 0.001; ** p < 0.01; * p < 0.05)
Table 10: Regression model output; the dependent variable is the average ranking of the Google search results clicked on by the user. | 2021-05-12T01:16:36.546Z | 2021-05-11T00:00:00.000 | {
"year": 2021,
"sha1": "12e0613b32cf5cc2d8d58cd2bc773ef3f2b4a178",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "12e0613b32cf5cc2d8d58cd2bc773ef3f2b4a178",
"s2fieldsofstudy": [
"Computer Science",
"Sociology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
229160882 | pes2o/s2orc | v3-fos-license | Rational and Irrational Dynamics of Automobile Demand in Turkey
The automobile, invented for the purpose of carrying passengers and freight, has become an indispensable part of daily life as sales have increased. However, due to various factors, automobile demand has fluctuated over time. This study discusses rational and irrational dynamics affecting the demand for automobiles in Turkey and aims to analyze the effect of macroeconomic variables on automobile demand using the ARDL approach. The findings show that the most important variables affecting the demand for automobiles in Turkey are unemployment and car prices, and, in the short term, inflation.
Introduction
Historically, the first cars were invented and developed in the late 19th century in France and Germany. Despite a lack of consensus, the 1901 Mercedes is often classified as the first modern car. The first diesel-powered car was introduced to the market by Citroen in 1933. The major change to the market came in 1908, when Henry Ford introduced the first mass-produced car. Mass production allowed economies of scale, which made a moderate price of $650 possible.
Diesel cars gained popularity, especially in Europe. After the Volkswagen emission scandal of 2015, diesel cars started losing market share. Environmental concerns have also pushed countries to announce bans on gasoline-powered and diesel-powered cars by 2030-2050. While state constraints affect producers' decisions to produce, economic and social factors such as environmentalism affect consumers' purchasing decisions. Therefore, both production and consumption preferences are being reshaped in the automobile market. Consequently, the automotive market is nowadays trying to shift from diesel- and gasoline-powered automobiles to hybrid and electric automobiles.
Nonrational factors in Turkey include the perception of the automobile as an investment with a positive return and as a liquid substitute for other financial instruments. The return in an investment sense is lower than that of financial instruments; however, since automobile prices are directly linked to euro rates, an automobile provides a natural hedge against price increases. Conspicuous consumption is the second nonrational factor. The consumption of individuals is based not just on needs but also on wants. An automobile might be a necessary need for a consumer, but it transforms into a want if the automobile purchased is above the corresponding income level. One reason is peer comparison, the urge to compare oneself to one's peers and purchase an automobile of a similar segment.
The aim of this study is to discuss the nonrational factors which are crucial in automobile purchasing decisions and to analyze the effects of rational factors on Turkish automobile sales between 2005 and 2018. The sections of the study are as follows. Section 2 provides information about the Turkish automobile market. Section 3 discusses the nonrational and rational dynamics which may have an impact on automobile sales; nonrational dynamics are hard to quantify but can range from insignificant to somewhat significant. Section 4 summarizes the studies that analyze the determinants of automobile demand. Section 5 describes the data related to the rational dynamics used in the empirical study and presents the empirical framework. Section 6 is devoted to the presentation and discussion of the results. The last section concludes.
Automobile Market in Turkey
The Turkish automotive market is attractive in terms of sales and market share. In 2005 the Turkish market was the 8th largest in terms of sales. In 2018, despite a fall in sales, the Turkish market was still the 8th largest after Germany, the UK, France, Italy, Spain, Belgium and Poland. Out of 16,110,807 automobile sales in Europe in 2018, Turkey had a market share of 3.02% (ODD, 2019). In 1960, per capita income in terms of 1995 purchasing power parity (PPP) was 2,500 USD in Turkey. In 2002, per capita income had increased to 6,100 USD. Vehicles per thousand population in 1960 was only four. The expected automobile ownership per thousand people in Turkey for 2017 was 212 (Piskin, 2017, p. 10). This expectation was still not met in 2018: according to TÜİK (2019), automobile ownership per thousand population in 2018 was 151. Ownership in the European Union is 573 automobiles per thousand population (ACEA, 2017). With an average growth of 7.7%, automobile ownership increased to 96 per thousand population. Still, a long time is needed to reach the European Union average.
Based on projections by Dargay, Gately and Sommer (2007), per capita income would increase to 14,000 USD by 2030, while vehicles per thousand population would increase to 377, with an average growth rate of 5%. Despite this increase, automobile ownership would remain the lowest among OECD North American and European countries. These projections imply that although Turkey's automobile ownership will grow by 5% per year between 2002 and 2030, compared to an OECD growth of 0.6%, the gap will narrow but remain far from the target of matching OECD automobile ownership averages. Turkey's vehicles per thousand population would still be 52.9% of the OECD average.
The total domestic market has been fluctuating around 500,000 to 600,000 automobiles for several years. According to the Turkish Automotive Distributors Association (ODD), in 2018 passenger automobile sales went down by 32.71% compared to the previous year, to 486,321 units. In the previous year, sales were 722,759. The 10-year average for annual sales was 579,268 units (ODD, 2019).
Nonrational and Rational Dynamics
Automobile sales are affected by nonrational factors as well as macroeconomic variables. Nonrational dynamics are factors based on behavioral aspects of decision making; rational dynamics, in contrast, are based on financial and economic factors.
Nonrational Dynamics
The automobile, which was invented for the purpose of carrying passengers and freight, has gained many different features and equipment with the development of technology, as needs differentiated over time. It has been demanded by people from almost all walks of society. The development of production technology and the possibility of personalization have increased the demand for differentiated products that make individuals feel special, rather than for mass-produced ones.
The automobile is not merely a normal commodity. The limited production of differentiated products by different brands has led to the classification of automobiles into segments. In 2005, Turkish market automobile sales had the following distribution by segment: A segment 1%, B segment 47%, C segment 37%, D segment 12%, E segment 2% and F segment 1%. In 2018 there were minor changes in market shares for all but the B and C segments. From a 47% market share in 2005, the B segment dropped to 26.4%. At the same time, the C segment increased its market share from 37% to 56.7%. Some of this shift may be due to economic factors, while the rest may be due to conspicuous consumption and peer comparison.
When individuals demand automobiles, they consider many factors beyond merely meeting the need for transportation. One of the nonrational factors affecting demand is conspicuous consumption, as described by Veblen. As Veblen (1994, p. 29) argues, property becomes evidence of the power and standing of its owner relative to the rest of the group. Conspicuous consumption is a tool used to influence people, to send messages through possessions, to be accepted in society and to be included in the upper classes.
In demanding goods and services, there is a need to emulate wealthy people and to establish privileges and social connections (Karoui & Khemakhem, 2018, p. 2). Duesenberry (1949) conceptualized the motive behind this behaviour as "keeping up with the Joneses". The same concept applies to automobile demand. Therefore, individuals do not consider only their own needs when demanding a good; they also pay attention to how their possessions are perceived by their environment.
Another nonrational factor affecting the demand for automobiles in Turkey is that consumers buy automobiles for investment purposes. The logic behind treating an automobile purchase as an investment in Turkey can be difficult to grasp at first. According to statistics issued by Fleeteurope (2017), the average depreciation rate of automobile prices globally ranges between 29.4% and 53.6%. It is important to note that depreciation percentages can mask differences in performance in terms of actual money lost. In Turkey, the depreciation rate was 29.5%, the lowest after China. Certain buyers in Turkey purchase automobiles believing that depreciation will not be very high, since automobile prices are directly linked mainly to euro rates. Stated differently, European automobile buyers' highest automobile-related expense is depreciation: on average, a four-year-old automobile loses 50% of its value or more. In contrast, the depreciation rate in Turkey is much lower. New automobile prices go up in line with the increase in the euro, and when they do, secondhand automobile prices track new automobile prices. These opposing factors lower the depreciation cost for most automobile buyers. Most of the time, the secondhand price of an automobile would be higher than the original purchase price, making customers believe they do not lose money when purchasing a new automobile. Furthermore, the depreciation of the Turkish Lira against the euro increases automobile prices every year. In practice, a buyer uses an automobile for 4 years and sells it at a higher price than the purchase price.
Rational Dynamics
The effect of rational factors in determining automobile demand is greater than that of nonrational factors. The most important factor in effective demand for automobiles in Turkey is perhaps tax. In Turkey, there is a value added tax (VAT) and a special (private) consumption tax. First, the special consumption tax is calculated on the basic price; then the special consumption tax and the basic price are added together, and the value added tax is applied to the new total. This is a pyramid taxing system, since one tax is based on another tax (Tepav, 2013). In addition, the government classifies automobiles according to their engine sizes: the private consumption tax for automobiles with smaller engines is between 30% and 60%; if the engine size is between 1,600cc and 2,000cc, the tax is between 100% and 110%; larger-engine automobiles of 2,000cc and above are taxed at 160% (Official Gazette, 2018). If the sale price of the car is 100,000 TL and the engine size is less than 1,600cc, with VAT of 18% and a private consumption tax (PCT) of 30%, the total tax is 34,811 TL: 34.8% of the sales price is paid in taxes. If the sales price goes up to 150,000 TL, with VAT of 18% and a PCT of 35%, the share of total taxes goes up to 37.2%. The dramatic increase in taxes starts with a bigger engine size. If the engine size is between 1,600-2,000cc and the car price is 150,000 TL, VAT is 18% and the PCT is 100%, so 57.6% of the sale price consists of taxes. Finally, when the engine size is above 2,000cc and our sample price of 150,000 TL is considered, VAT is 18% and the PCT is 160%; in this case, the total taxes are 67.4% of the sales price.
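Because the PCT is levied on the base price and the VAT is then levied on the base price plus the PCT, the tax share of the final sale price follows directly from the two rates. A small sketch reproducing the worked examples above (rates hardcoded for illustration):

```python
# Pyramid taxation: PCT applies to the base price, VAT applies to base + PCT,
# so final_price = base * (1 + pct) * (1 + vat), and the tax share of the
# final price is 1 - 1 / ((1 + pct) * (1 + vat)).
def tax_share(pct: float, vat: float = 0.18) -> float:
    return 1 - 1 / ((1 + pct) * (1 + vat))

print(f"{tax_share(0.30):.1%}")  # <1,600cc example: ~34.8% of the sale price
print(f"{tax_share(0.35):.1%}")  # <1,600cc, higher price band: ~37.2%
print(f"{tax_share(1.00):.1%}")  # 1,600-2,000cc: ~57.6%
print(f"{tax_share(1.60):.1%}")  # >2,000cc: ~67.4%
```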
As the examples clearly show, taxation is of the utmost importance for automobile demand in the market. Tax revenues originating from new automobile sales are a great source of income for the Turkish government.
In Turkey, where about 65% of the automobiles sold are imported, the exchange rate is an important determinant of demand. An increase in exchange rates against the Turkish Lira directly affects automobile prices. In developed countries, exchange rate changes are largely absorbed by automobile companies, and only a small percentage is passed through to automobile prices. For instance, despite a 15% increase in the euro against the USD, the price of German automobiles in the USA might only go up by 2-3%. In Turkey, some brands list prices based on the euro, which changes every day; this means that there is no time lag between a Turkish Lira depreciation and a price increase. Other brands list prices in Turkish Lira and reflect the effects of the depreciation in installments over a couple of months. If the Turkish Lira loses value against the euro by 15%, the prices of all automobiles would increase by 10-15% within a couple of months.
Unemployment has negative consequences both economically and socially. High unemployment rates lead to reduced purchasing power and an uneven distribution of income. While unemployed individuals are less likely to buy new automobiles, this may not be true for the whole economy in general; unemployment among the low-income group often affects the demand for automobiles negatively. Moreover, there is a two-way relationship between unemployment and automobile demand: on the one hand, unemployment affects automobile sales; on the other hand, with reduced demand for automobiles, manufacturers employ fewer people.
Many people take into account credit accessibility and interest rates when making an automobile purchase decision. People who do not have the income to buy an automobile and cannot find credit will postpone the purchase decision. In particular, people with limited liquidity respond more strongly to interest rates and changes in maturity (Attanasio et al., 2008, p. 433). In times of high interest rates, people often give up on the automobiles they want to buy or prefer a lower model.
Literature Review
Most of the studies on automobiles are empirical studies trying to identify the factors that influence the demand for automobiles in various countries. One study used data from 80 countries to analyze the relationship of household expenditures, degree of urbanization, population density and the Gini coefficient for income inequality with passenger car ownership. The results indicate that automobile ownership is positively correlated with the level of urbanization and household expenditures, but negatively correlated with unequal income distribution and population density (Jetin, 2015). A study covering the years 2000-2010 compared automobile ownership to per capita income at an annual level; the income elasticity calculated was 1.75, which means that a 1% increase in GDP per capita leads to a larger (1.75%) increase in car ownership (Duruiz and Erdem, 2015). A similar study was carried out by Dargay and Sommer (2007). Another study, on Belarus, found a medium-level relationship between auto sales and the exchange rate, GDP and interest rates.

The relation between passenger car sales and GDP was studied by Babatsou and Zervus (2011), who found a 0.95 linear correlation between the two variables: GDP growth clearly increases auto sales. If household expenditure increases, the likelihood of purchasing an automobile decreases; on the other hand, a general increase in national income raises the number of potential customers, which leads to a rise in demand for automobiles (Alper and Mumcu, 2005). A study undertaken in Turkey found that the profession of the household head, the disposable annual income of the family and the monthly expenses were statistically significant in determining automobile sales (Akay and Tümsel, 2015). One study used artificial neural networks to predict forthcoming automotive sales, using monthly data between January 2007 and June 2011; gross domestic product, the real sector confidence index, investment expenditures, the consumer confidence index and the USD exchange rate were used as determinants of automobile demand. According to the results of the regression analysis, net disposable income moves together with automobile sales, while sales move in the opposite direction of unemployment, inflation and fuel prices.
An econometric study of the German market (Zeng, Schmitz and Madlener, 2018) concluded that GDP and government incentives are important macroeconomic factors, while price, gasoline consumption, quality and facelifts of the cars are strong predictors of auto sales at the microeconomic level. Another study analysed 13 EU countries from January 1999 to August 2010; its results showed that automobile sales have a direct relationship with trade volume, interest rates and industrial production (Erdem and Nazlioğlu, 2013).
One study concentrated on the relationship between fuel prices and demand for automobiles. It showed that increases in gasoline prices significantly reduced demand for automobiles, but declines had no significant effect (Kilian and Sims, 2006). A final study examined the relation between auto loans and auto sales; empirically, the results showed that auto loans play a very important role in the purchasing process (Eken and Çiçek, 2009).
Data and Methodology
This study investigates the relationship between automobile sales and independent economic variables. Monthly data were used covering the period from January 2005 to December 2018. All variables except inflation and unemployment were transformed into log form so that all the data are stationary. The automobile price index was formed by taking the average prices of gasoline and diesel vehicles and weighting them according to the number of sales. All variables used in this study are given in the appendix.
Our focus in this study is the passenger automobile market. The difficulty of analyzing this market is due to the lack of past data on fleet automobiles and automobiles purchased through operational leasing. The dynamics of private purchasers, operational leasers and fleet purchasers are not based on the same factors. The deciding factors for companies buying fleet cars include depreciation regulation, tax advantages and the tax deductibility of expenses. For private buyers, the price of the automobile, the CPI, unemployment and automobile loan rates become more significant. Due to the complexity of the tax system and daily changing fuel prices, tax rates and fuel prices are excluded from the analysis. Our study analyzes the several factors that might affect automobile sales using the ARDL method.
Autoregressive distributed lag (ARDL) cointegration analysis
In order to examine the relationship between automobile sales, automobile prices, the CPI, loan interest and unemployment, the linear natural logarithm equation is specified as follows:

$$\ln Sales_t = \beta_0 + \beta_1 \ln Price_t + \beta_2 \ln CPI_t + \beta_3\, Interest_t + \beta_4\, Unemployment_t + \varepsilon_t \qquad (1)$$

Pesaran and Shin (1995) and Pesaran et al. (2001) introduced a method of testing for cointegration called the "Autoregressive Distributed Lag" (ARDL) approach. The ARDL estimates both the long-run and short-run relationships simultaneously in an automobile demand model. In the ARDL bounds analysis, the variables of the model are allowed to possess mixed orders of integration (Pesaran et al., 2001):

$$\Delta \ln Sales_t = \alpha_0 + \sum_{i=1}^{p} \alpha_{1i}\, \Delta \ln Sales_{t-i} + \sum_{i=0}^{q} \alpha_{2i}\, \Delta \ln Price_{t-i} + \sum_{i=0}^{q} \alpha_{3i}\, \Delta \ln CPI_{t-i} + \sum_{i=0}^{q} \alpha_{4i}\, \Delta Interest_{t-i} + \sum_{i=0}^{q} \alpha_{5i}\, \Delta Unemployment_{t-i} + \delta_1 \ln Sales_{t-1} + \delta_2 \ln Price_{t-1} + \delta_3 \ln CPI_{t-1} + \delta_4\, Interest_{t-1} + \delta_5\, Unemployment_{t-1} + \varepsilon_t \qquad (2)$$

where $\Delta$ and $\varepsilon_t$ are the first difference operator and the white noise term, respectively. The ARDL method estimates regressions to obtain the optimal lag length for each variable. The vector error correction model used to analyze the relationships between variables is formulated as follows:

$$\Delta \ln Sales_t = \alpha_0 + \sum_{i=1}^{p} \alpha_{1i}\, \Delta \ln Sales_{t-i} + \sum_{i=0}^{q} \alpha_{2i}\, \Delta \ln Price_{t-i} + \sum_{i=0}^{q} \alpha_{3i}\, \Delta \ln CPI_{t-i} + \sum_{i=0}^{q} \alpha_{4i}\, \Delta Interest_{t-i} + \sum_{i=0}^{q} \alpha_{5i}\, \Delta Unemployment_{t-i} + \lambda\, ECT_{t-1} + \varepsilon_t \qquad (3)$$
where the residuals $\varepsilon_t$ are independently and normally distributed with zero mean and constant variance. $ECT_{t-1}$ is the error correction term, and its coefficient $\lambda$ indicates the speed of adjustment to the equilibrium level after a shock; how quickly the variables approach the equilibrium is also an outcome of this parameter. Pesaran (1997) and Pesaran et al. (2001) argued that it is important to ascertain the constancy of the long-run multipliers by testing the error correction model for the stability of its parameters.
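As an illustration of how such an error-correction regression can be set up in practice, the sketch below builds the design matrix of lagged levels and lagged differences and fits it by ordinary least squares with statsmodels. The DataFrame `df` and its column names are assumptions, and a full ARDL analysis would additionally select the optimal lag order per variable (e.g. by AIC) and apply the Pesaran et al. bounds critical values.

```python
import pandas as pd
import statsmodels.api as sm

def ecm_frame(df: pd.DataFrame, y: str, xs: list, lags: int = 1) -> pd.DataFrame:
    """Build the dependent difference plus lagged levels (long run)
    and lagged differences (short run) for an error-correction regression."""
    out = pd.DataFrame(index=df.index)
    out["d_" + y] = df[y].diff()
    for v in [y] + xs:
        out[v + "_lag1"] = df[v].shift(1)                 # levels for the long run
        for i in range(1, lags + 1):
            out[f"d_{v}_lag{i}"] = df[v].diff().shift(i)  # short-run dynamics
    return out.dropna()

data = ecm_frame(df, "lnsales", ["lnprice", "lncpi", "interest", "unemployment"])
y = data.pop("d_lnsales")
fit = sm.OLS(y, sm.add_constant(data)).fit()
print(fit.summary())  # a joint F-test on the lagged levels mimics the bounds test
```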
Unit Root Test
Firstly, the order of integration of the variables is examined. The ADF (Augmented Dickey-Fuller) unit root test developed by Dickey and Fuller (1981) was used to test the order of integration of each variable. The ARDL bounds test is based on the assumption that the variables are I(0) or I(1). Stationarity tests are performed at levels and then at first differences to determine the presence of unit roots and the order of integration of all variables. The results indicate that Unemployment is stationary at level, I(0), while lnCpi, Interest, lnSales and lnPrice are stationary at first difference, I(1). It is thus confirmed that all variables are either I(0) or I(1). We also used the Akaike information criterion (AIC) to determine the optimal number of lags.

[Table: ADF unit root test results (only a fragment is recoverable). Unemployment: -3.5917***, -2.7451, -2.6053*, -2.5937, I(0). * Significant at the 10% level. ** Significant at the 5% level. *** Significant at the 1% level.]
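A minimal Python sketch of this screening step, using the ADF implementation from statsmodels, is shown below; the helper function and the 5% threshold are our own choices, and `series` stands for any study variable as a pandas Series.

```python
from statsmodels.tsa.stattools import adfuller

def integration_order(series, alpha=0.05, max_order=2):
    """Return the differencing order d at which the ADF test first rejects a unit root."""
    s = series.dropna()
    for d in range(max_order + 1):
        _, pvalue, *_ = adfuller(s, autolag="AIC")  # lag length chosen by AIC, as in the paper
        if pvalue < alpha:
            return d                                # 0 -> I(0), 1 -> I(1), ...
        s = s.diff().dropna()
    raise ValueError("series appears integrated of order > max_order")
```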
Co-Integration Analysis
The ARDL approach to co-integration is preferred over conventional co-integration techniques such as Engle and Granger (1987) and Allan and Hansen (1996). The overall F- and t-statistics are used to determine the presence of a long-run relationship. The results indicate that in all specifications the F-statistic is greater than the upper critical value (CV) at the 5% and 1% significance levels. This study therefore rejects the null hypothesis of no co-integration, indicating that there is a long-run equilibrium relationship between automobile sales and the other regressors. The long-run elasticity results are also displayed in Table 3. The estimated coefficients of the long-run relationship are significant for lnprice and interest, but not for lncpi and unemployment. The long-run impact of interest on automobile sales is negative, as expected. However, in the long run the effect of prices on sales is not as expected: lnprice and automobile sales move in the same direction. The short-run dynamics are generally consistent with the long-run findings, although CPI is significant in the short term whereas we cannot find a significant relationship in the long term. In the short term, an increase in automobile prices negatively affects sales volume, as expected; contrary to this finding, in the long run price increases do not affect automobile demand negatively. As expected, credit interest rates negatively affect automobile sales in both the short and the long run. The results also indicate an insignificant relationship between unemployment and auto sales in Turkey.
The coefficient on the lagged error-correction term is significant at the 1% level with the expected sign, which confirms the result of the bounds test for cointegration and indicates a rapid speed of adjustment to equilibrium: on average, the disequilibrium of the previous period is corrected by about 60% in the following period.
To ensure that the model passes the stability test, CUSUM and CUSUMSQ tests are applied to the residuals of the error correction model (Brown et al., 1975). The results of the tests are shown in Figure 1, where the lines indicate the limits at the 5% significance level. The figures suggest that the parameters are stable: the cumulative sum of squared residuals moves within the 5% critical limits.

Figure 1: Plots of the CUSUM and CUSUMSQ tests for parameter stability.
Conclusion
The automotive market is vital for the Turkish economy, both in terms of the size of the local market and its manufacturing and export capability. The main goal of this study was to dissect the factors influential in automobile purchasing behaviour in the Turkish market. We believe that there are nonrational as well as rational factors determining automobile sales over the years. Nonrational behaviour is discussed under two topics. The first is conspicuous consumption: automobile buyers push their budget limits in order to afford a higher-segment car or a more expensive car in the same segment. This type of behaviour needs to be analyzed further to generate data that can be used in a study. The second nonrational behaviour is the overshadowed depreciation effect. In developed countries, where currency movements are less volatile, customers lose almost half the value of their cars to depreciation. In Turkey, over four years, the depreciation of the car and the depreciation of the TL against the Euro counterbalance each other, creating a less dramatic cost of buying a new automobile.
The rational factors selected for our study were consumer loan interest rates, unemployment rates, the consumer price index and, finally, the car price index. Both the short-term and long-term effects of these variables were analyzed.
The results of the study showed that there was a negative relationship between the loan interest rates and the car sales both in the short term and the long term. One study has found a similar relationship between interest rates and car sales.
Unemployment rates had no significant effect on automobile sales in either the short term or the long term. The reason might be the family structure in Turkey, where an unemployed person is supported financially by the family. A second reason might be that unemployed people belong to the middle-lower or lower income class, where they do not have any purchasing power for cars; this is supported by the low rate of car ownership in Turkey. In our research we did not come across a study analyzing the relationship between unemployment and car sales.

The inflation rate (CPI) is significant in the short run and has a negative relation with car sales, while in the long run the results were insignificant. Thus, short-term inflation is negatively related to car sales: the higher the inflation, the lower the sales. This relation disappears in the long term, suggesting that automobile buyers' income levels adjust to inflation and purchases do not stop in the long run.

Finally, car prices are found to be significant by our model in both the short term and the long term. However, while there is a negative relation between prices and car sales in the short term, the situation reverses itself and the relation becomes positive in the long term. The short-term negative relation needs no further discussion; the long-term relation, on the other hand, needs further reasoning. As explained in section two, car prices and Euro rates are directly linked, and customers' income levels are adjusted in the long run after a currency shock. In other words, the TL loses value first and automobile prices go up, lowering the demand for cars in the short run; since the depreciation of the TL does not continue indefinitely, buyers' incomes catch up with the price changes and, in the long run, prices and car sales move in the same direction. One study has found a medium-level negative relationship between exchange rates and car sales. Another study found a negative relation between car prices and automobile sales, with the relation changing from one segment to another.
Our findings show some similarities with other academic studies; however, the Turkish automobile market has a specific character whereby a short-term relationship between a variable and car sales might be reduced to an insignificant level, or even reverse itself, in the long run.
The results of this study allow us to make some suggestions for automobile manufacturers and retailers. Unemployment and the consumer price index are uncontrollable factors for the automotive sector. Car prices can be controlled by the sector, but eventually foreign exchange rate changes have to be integrated into the price. The results show that in the short term car prices and car sales have a negative relation, though the relationship reverses direction in the long run. Abrupt price changes should be avoided for as long as possible, until a reasonable period passes for inflation to catch up with the consumer's perception.
The results also show that car loan interest rates are significant. The automotive sector may use this fact either by establishing a finance company to offer attractive rates on car loans or by building more competitive relations with commercial banks.
"year": 2020,
"sha1": "85ef63a762f5141c7a7bba97eabc8ac3263f7f03",
"oa_license": null,
"oa_url": "https://dergipark.org.tr/en/download/article-file/1178492",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "85ef63a762f5141c7a7bba97eabc8ac3263f7f03",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
Similarity measures for vocal-based drum sample retrieval using deep convolutional auto-encoders
The expressive nature of the voice provides a powerful medium for communicating sonic ideas, motivating recent research on methods for query by vocalisation. Meanwhile, deep learning methods have demonstrated state-of-the-art results for matching vocal imitations to imitated sounds, yet little is known about how well learned features represent the perceptual similarity between vocalisations and queried sounds. In this paper, we address this question using similarity ratings between vocal imitations and imitated drum sounds. We use a linear mixed effect regression model to show how features learned by convolutional auto-encoders (CAEs) perform as predictors for perceptual similarity between sounds. Our experiments show that CAEs outperform three baseline feature sets (spectrogram-based representations, MFCCs, and temporal features) at predicting the subjective similarity ratings. We also investigate how the size and shape of the encoded layer affects the predictive power of the learned features. The results show that preservation of temporal information is more important than spectral resolution for this application.
INTRODUCTION AND RELATED WORK
Searching for audio samples is a core part of the electronic music making process, yet it is a time-consuming task and a key area for future technological development [1]. This task typically involves browsing lists of badly labelled files, relying on filenames such as 'big kick' or 'hi-hat22'. Such methods for browsing sound libraries limit users' ability to efficiently find the sounds they are looking for. Meanwhile, the voice provides an attractive medium for effectively communicating sonic ideas [2,3], as it can be used to express timbral, tonal and dynamic temporal variations [4]. Moreover, previous research demonstrates that musicians are able to accurately vocalise important acoustic features of musical sounds [5,6].
Query by vocalisation (QBV) is the process of searching for sounds based on vocalised examples of the desired sound. Typically, QBV systems extract audio features from a vocalisation, which can then be compared to the features of sounds in a sample library (to return class labels or a ranked list of sounds). Initial approaches to QBV used heuristic based features [7,8]. Morphological features describing the high-level temporal evolution of sounds have also been applied to QBV [9], however drum sounds generally have similar high-level temporal morphology (i.e. rise-fall), so these types of features are less applicable here.
Recent work has shown that features learned using stacked autoencoders (SAEs) outperform heuristic descriptors such as MFCCs (Mel-frequency cepstral coefficients) for QBV tasks. SAEs utilise a deep learning structure where multiple layers learn an efficient representation to encode the input. These have been applied in 2 QBV scenarios: supervised learning, using the features to train a classifier [10]; and unsupervised search, based on distance between sounds in a Euclidean feature space [11,12]. Furthermore, in [13] the authors present a QBV system based on convolutional neural networks (CNNs) implemented in a semi-Siamese network structure. In this case the convolutional layers are trained to learn feature representations from constant-Q spectrograms of vocal imitations and the imitated sounds. The CNN is followed by fully connected layers to match input vocalisations to audio samples, requiring each sample in a sound library to be compared to a vocal query. The system shows promising results for matching vocal imitations to the imitated sounds; however, in the general case QBV systems require efficient, deployable querying. Using this method, a single query on a dataset with N data samples requires N forward-pass computations of the network, which is computationally demanding compared, for example, with nearest-neighbour search in a feature vector space.
Whilst both SAE and CNN approaches show promising performance in terms of retrieving an imitated sound from a set of audio samples, none of the above mentioned QBV methods consider the perceptual similarity between the query and retrieved sounds. Central to the evaluation of these approaches is the assumption that the target sound is indeed the sound that was imitated, and the task is to match the imitations and imitated sounds accordingly. However, we consider a use case in which the query is not necessarily an imitation of a sound in the database, and investigate which feature representations correlate well with the perceptual similarity between an imitation and a set of audio samples.
In this paper we evaluate the performance of both heuristic and learned features for QBV of drum sounds. An overview of our approach is illustrated in Fig. 1. We present a set of convolutional auto-encoders (CAEs) trained on a dataset of ∼ 33k audio samples and ∼ 6k vocalisations. These are used to extract features from 420 vocal imitations of 30 drum sounds. The feature sets are evaluated using perceptual similarity ratings between the vocal imitations and the imitated drum sounds. We include 4 types of features: (1) a spectrogram-based representation from [14], which the authors show to correlate strongly with perceptual similarity between drum sounds; (2) MFCCs; (3) temporal descriptors; (4) encoded representations from the CAEs. We compare 11 CAEs, which differ in both the size of the encoded feature tensor and the shape of the encoded layer in the temporal and spectral dimensions.

Fig. 1: Overview of the complete work flow. All audio (training and test data) is preprocessed to create 128x128 barkgram representations. The trained CAE is used to extract features from the test data. Euclidean distance between each imitation and its imitated sound is then computed, and fitted with the rating data to an LMER model. Performance of the 14 feature sets (3 baselines and 11 CAE networks) is measured by 1) AIC for model fit, and 2) the proportion of imitated sounds that have significantly negative slopes for rating ∼ distance.
PROBLEM DEFINITION
The task is to establish which audio features best correlate with perceptual similarity between real drum sounds (the imitated sounds) and vocal imitations of drum sounds (the imitations). Specifically, we are interested in i) how heuristic descriptors perform compared to learned features using CAEs, and ii) the importance of temporal vs. spectral dimensions and the size of the encoded tensors from the CAEs. We limit the problem to a set of 30 drum sounds: 6 from each of 5 classes (kick, snare, cymbal, hi-hat, tom-tom), and consider only the similarity between imitations and within-class sounds (e.g. between the imitation of a snare and the actual snare sounds).
Baseline Methods
We use 3 baseline methods. The first (PK08) is a spectrogram-based measure of similarity from [14]. This has been shown to correlate highly with perceptual similarity ratings between within-class drum sounds, and we are interested in how well it transfers to our application. In summary, similarity between 2 sounds is measured as the Euclidean distance between their vectorised barkgrams, constructed from a spectrogram with the following parameters: 93ms window; 87.5% overlap; Bark scale (72 bins); loudness in dB and scaled using Terhardt's ear model [15]. The barkgrams are time-aligned, and where 2 sounds are not of the same length the shorter is zero padded to the length of the longer one. For the second method (MFCC) we calculate the first 13 MFCCs for each sound (excluding MFCC 0) with first and second order derivatives, using a 93ms time window and 87.5% overlap. The mean and variance of each MFCC and its derivatives are calculated for each sound, yielding 78 features. The third method (TEMP) is a set of 5 temporal features: log attack time (LAT); temporal centroid (TC); LAT/TC ratio; temporal crest factor (TCF); and duration. We calculate LAT and TC as per the definitions in [14]. TCF is calculated over the entire (rectified) time domain signal, and is the maximum value divided by the root mean square.
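As a concrete illustration of the MFCC baseline, a minimal Python sketch using librosa is given below. The translation of the 93 ms window and 87.5% overlap into frame parameters, and the sample rate, are our assumptions rather than the authors' code.

```python
import numpy as np
import librosa

def mfcc_features(y: np.ndarray, sr: int = 22050) -> np.ndarray:
    """Mean/variance of 13 MFCCs (excluding MFCC 0) plus 1st/2nd derivatives."""
    n_fft = int(0.093 * sr)        # 93 ms analysis window (assumed mapping)
    hop = n_fft // 8               # 87.5% overlap
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=14,
                                n_fft=n_fft, hop_length=hop)[1:]  # drop MFCC 0
    d1 = librosa.feature.delta(mfcc)
    d2 = librosa.feature.delta(mfcc, order=2)
    feats = np.vstack([mfcc, d1, d2])                               # 39 tracks
    return np.concatenate([feats.mean(axis=1), feats.var(axis=1)])  # 78 features
```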
Model Architecture
The basic architecture is a CAE with four 2D convolution layers in its encoder/decoder. Each convolutional layer is followed by batch normalisation and ReLU activation layers. To avoid checkerboard artefacts caused by deconvolution layers [16], we apply upsampling prior to each decoding convolutional layer; as such, each decoding deconvolution layer is an upsampling layer followed by a 2D convolution layer with (1, 1) stride. We vary the kernel size of the first and last layers while using fixed (10, 10) kernels for the other convolution layers. The encoding layers have [8, 16, 24, 32] kernels (layers 1-4, respectively), which is mirrored in the decoder, i.e., [32, 24, 16, 8]. A single-channel convolution layer is used for the output layer. The activation of the last layer of the encoder is flattened and taken as the feature vector for a given test sample.
The kernel size and stride of the convolution (or upsampling) layers are varied in order to compare the shape (i.e. square, wide, tall) and size of the encoded representation, respectively. Details for 11 variants of the above model are given in Table 1.
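A minimal Keras sketch of one such CAE variant is given below. The filter counts, (10, 10) kernels and upsample-then-convolve decoder follow the description above, while the uniform (2, 2) strides shown here are just one illustrative configuration from the family in Table 1.

```python
from tensorflow import keras
from tensorflow.keras import layers

def conv_block(x, filters, kernel, strides):
    x = layers.Conv2D(filters, kernel, strides=strides, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

inp = keras.Input(shape=(128, 128, 1))               # bark bins x frames
x = conv_block(inp, 8, (10, 10), (2, 2))
x = conv_block(x, 16, (10, 10), (2, 2))
x = conv_block(x, 24, (10, 10), (2, 2))
encoded = conv_block(x, 32, (10, 10), (2, 2))        # 8 x 8 x 32 bottleneck

x = encoded
for filters in (32, 24, 16, 8):                      # mirrored decoder
    x = layers.UpSampling2D((2, 2))(x)               # upsample, then stride-1 conv,
    x = conv_block(x, filters, (10, 10), (1, 1))     # to avoid checkerboard artefacts
out = layers.Conv2D(1, (10, 10), padding="same")(x)  # single-channel reconstruction

cae = keras.Model(inp, out)
cae.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")
encoder = keras.Model(inp, layers.Flatten()(encoded))  # flattened feature extractor
```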
Training Data and Pre-processing
The network is designed to learn a broad range of vocal and percussion related sounds including i) short, percussive/non-percussive and pitched/unpitched sounds, and ii) non-verbal vocalisations. The training dataset is made up of 24,294 percussion sounds, 4,884 sound effects and 4,523 single note instrument samples. In addition, we include 4,429 vocal imitations of instruments, synthesisers and everyday sounds from [17], and 1,387 vocal imitations of 72 short synthesised sounds from [6]. This results in a dataset of ∼ 39k sounds, of which ∼ 6k are vocal imitations.
For each sound in the training set we compute the barkgrams from spectrograms with a 93 ms time window and 87.5% overlap, using 128 Bark bins. As with the PK08 baseline, the magnitudes are scaled (in dB) using Terhardt's ear model curves [15]. To achieve a fixed size representation for all sounds, we either zero-pad or truncate the barkgrams to 128 frames (≈ 1.5 seconds).
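The sketch below illustrates one way this preprocessing could be implemented in Python. It is an assumption-laden reconstruction: it uses librosa for the STFT, Traunmüller's formula for the Bark mapping and the standard published approximation of Terhardt's ear-model weighting, none of which are taken from the authors' code.

```python
import numpy as np
import librosa

def barkgram(y, sr=22050, n_bands=128, n_frames=128):
    """128-band, 128-frame bark-scaled dB spectrogram with ear-model weighting."""
    n_fft = int(0.093 * sr)                                  # 93 ms window
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=n_fft // 8))
    freqs = librosa.fft_frequencies(sr=sr, n_fft=n_fft)
    khz = np.maximum(freqs, 20.0) / 1000.0
    terhardt_db = (-3.64 * khz ** -0.8                       # threshold-in-quiet
                   + 6.5 * np.exp(-0.6 * (khz - 3.3) ** 2)   # approximation
                   - 1e-3 * khz ** 4)
    S = S * (10.0 ** (terhardt_db[:, None] / 20.0))          # ear-model weighting
    bark = 26.81 * freqs / (1960.0 + freqs) - 0.53           # Traunmüller mapping
    edges = np.linspace(bark.min(), bark.max(), n_bands + 1)
    bands = np.stack([S[(bark >= lo) & (bark < hi)].sum(axis=0)
                      for lo, hi in zip(edges[:-1], edges[1:])])
    gram = librosa.amplitude_to_db(bands + 1e-10)
    pad = max(0, n_frames - gram.shape[1])                   # zero-pad or truncate
    return np.pad(gram, ((0, 0), (0, pad)))[:, :n_frames]
```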
Training Procedure
The models are implemented using Keras [18] and Tensorflow [19]. Training and validation sets are a 70:30% split of the training data (Section 3.2.2). As the training dataset contains 5.5 times more audio samples than vocal imitations, and we are equally interested in learning both sound types, we specify a 50/50% split of audio samples and vocal imitations within each batch (128 data samples). The models are all fitted using the Adaptive Moment estimation (Adam) optimiser [20] with a learning rate of 0.001 and a mean squared error loss function. We use an early-stopping scheme, halting when there is no improvement in validation loss for 10 epochs. The best (i.e. lowest validation loss) model for each parameter setting is selected for the analysis.
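The following sketch shows how these training constraints could be wired together in Keras, reusing the `cae` model from the earlier sketch; the generator, the `steps_per_epoch` value and the array names are illustrative assumptions.

```python
import numpy as np
from tensorflow import keras

def balanced_batches(samples, imitations, batch_size=128):
    """Endless generator yielding 50/50 batches of audio samples and imitations."""
    half = batch_size // 2
    while True:
        a = samples[np.random.randint(len(samples), size=half)]
        b = imitations[np.random.randint(len(imitations), size=half)]
        batch = np.concatenate([a, b])
        yield batch, batch                       # auto-encoder: target == input

early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                           restore_best_weights=True)
cae.fit(balanced_batches(train_samples, train_imitations),
        steps_per_epoch=200, epochs=1000,
        validation_data=(x_val, x_val), callbacks=[early_stop])
```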
Test data
The 30 drum sounds were taken from the fxpansion BFD3 Core and 8BitKit sample libraries, which include a range of acoustic and electronic drum samples. Vocal imitations of each sound were recorded by 14 musicians (>5 years experience), giving 420 imitations. The recordings took place in an acoustically treated room at the Centre for Digital Music, Queen Mary University of London.
Perceptual similarity ratings between the imitations and each of the within-class drum sounds were collected from 63 listeners via a web based listening test, using a format based on the MUSHRA protocol for subjective assessment of audio quality [21]. Whilst the MUSHRA standard specifies the use of expert listeners, it has recently been shown that lay listeners can provide comparable results to experts for measuring audio quality [22]. Each listener was presented with 30 tests. For each test the listener was presented with a (randomly selected) vocal imitation and the 6 within-class drum sounds (one being the imitated sound). The listener then rated the similarity between the imitation and each drum sound (giving 6 similarity ratings per test), on a continuous scale from 'less similar' to 'more similar'.
Of the 30 test pages, 28 were unique and 2 were random duplicates. These were included for post-screening of the listeners, as recommended in the MUSHRA standard [21]. Listener reliability was assessed using the Spearman rank correlation between the two duplicate test pages for each listener. We considered reliable listeners as those who were able to replicate their responses for at least one of the duplicates with ρ >= 0.5, i.e. large positive correlation [23]. There were 51 reliable listeners, for whom ρ = 0.63/0.04 (mean/standard error), giving 9,126 responses from 1521 tests (excluding duplicates). We then computed Kendall's coefficient of concordance, W [24] on the ranked responses for each imitation. The mean/standard error of W = 0.61/0.01, indicating moderate to strong agreement amongst the reliable listeners [25].
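For reference, the two agreement statistics can be computed as in the following sketch; `page_a_ratings`/`page_b_ratings` are assumed arrays holding one listener's responses to a duplicated test page, and this minimal Kendall's W omits the correction for tied ranks.

```python
import numpy as np
from scipy.stats import spearmanr

def kendalls_w(ratings: np.ndarray) -> float:
    """Kendall's W for an (m listeners x n sounds) rating matrix, no tie correction."""
    m, n = ratings.shape
    ranks = ratings.argsort(axis=1).argsort(axis=1) + 1  # within-listener ranks
    s = ((ranks.sum(axis=0) - m * (n + 1) / 2) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

rho, _ = spearmanr(page_a_ratings, page_b_ratings)       # duplicate-page reliability
```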
Analysis of the ratings indicated that listeners were able to correctly identify the imitated sound with above chance accuracy (37% of cases, chance = 16%), and the imitated sound was rated first or second most similar to the imitation in 60% of tests. This indicates that although the imitations were often rated as being most similar to the imitated sounds, there are a considerable number of cases (up to 40%) where 2 of the 6 within-class sounds were rated more similar to the imitation than the imitated sound. This highlights the potential importance of perceptual similarity measures for tasks such as QBV, depending on whether the task is to identify and return an imitated sound, or to return the most similar sound. The 9126 similarity ratings are used as a ground truth from which to measure the performance of each of the feature sets.
Linear mixed effect regression modelling
For a given feature set, distance is measured between each of the 420 imitations and their respective 6 within-class sounds, giving 2520 distance values. We use Euclidean distance in keeping with the PK08 baseline method, and the distances for each feature set are normalised between 0-1. Linear mixed effect regression (LMER) models are then fitted for predicting the ratings from the distances. LMER is well suited to this task given that all listeners did not provide ratings for all imitations but only a randomly-selected set of 28 imitations (giving an unbalanced dataset). In addition, it allows us to include the dependencies between ratings for each listener and imitated sound.
Maximum likelihood parameters for the models are estimated using the lme4 package in R [26]. The general model is fitted with rating $y_{ijk}$ as the dependent variable for each rating $i$, random intercepts for each listener $k$, and fixed effects of distance $x_{ij}$ and imitated sound $j$, with an interaction term between distance and imitated sound. The model is given by:

$$y_{ijk} = \beta_0 + \beta_{1j}\, x_{ij} + \beta_{2j} + \gamma_k + \varepsilon_{ijk} \qquad (1)$$

where $\beta_{1j}$ is the slope of rating over distance for a given imitated sound $j$ (combining the main effect of distance with its interaction for sound $j$), $\beta_{2j}$ is the fixed effect of imitated sound $j$, and $\gamma_k$ is the random intercept for a given listener $k$. We note that model analysis showed heteroskedasticity in the residuals. Parameter estimates were therefore compared to those from robust models [27], and no major differences were found; as such, the non-robust models were used for the analysis. Wald 95% confidence intervals (CIs) were then calculated for the slope of each interaction ($\beta_{1j}$). For imitated sounds where the upper CI for $\beta_{1j} < 0$, we can infer the slope is significantly below 0 (α < 0.05), indicating that the feature set is a good predictor for the imitated sound in question.
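The authors fit this model with lme4 in R; a rough Python equivalent using statsmodels' MixedLM is sketched below, with a random intercept per listener and the distance-by-sound interaction supplying the per-sound slopes. The column names are assumptions.

```python
import statsmodels.formula.api as smf

# rating ~ distance * C(imitated_sound) with a random intercept per listener
model = smf.mixedlm("rating ~ distance * C(imitated_sound)",
                    data=df, groups=df["listener"])
result = model.fit()
print(result.summary())  # per-sound slopes come from the interaction terms
```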
The performance of each feature set is evaluated using two metrics: The percentage of imitated sounds for which β1j is significantly below 0 (accuracy); and Akaike's information criterion (AIC), which gives a measure of model fit (note: lower AIC = better model fit). An ideal feature set would have a significantly negative β1j (perfect predictor = -1.0) for all 30 imitated sounds, and be a good fit to the rating data given the model in Eq. 1.
RESULTS AND DISCUSSION
The results are given in Table 1. The encoded features from all CAEs outperform the baseline feature sets. The LMER model from the best performing feature set (11) gives fitted slopes for rating ∼ distance that are significantly less than 0 (α < 0.05) for 83.3% (25/30) of the imitated sounds, and has the lowest AIC. This shows the feature set is generally a good predictor of perceptual similarity between the vocal imitations and imitated sounds tested here, and has the best fitting LMER model.
Interestingly, preservation of the temporal resolution is more important than spectral resolution for our task: for CAEs wide in time and narrow in frequency (feature sets 8-11), performance improves as the size of the encoded layer decreases. This indicates redundancy in the spectral information: encoded shapes with spectral dimensions > 1 have an adverse effect on performance. The similarity ratings are only for sounds in the same class (e.g. kick, snare etc.), and we expect high spectral similarity within each class. As such, overall energy differences in time may be more salient than the spectral distribution, providing the cues used by listeners when giving the ratings. This hypothesis is supported by comparing the square and tall CAEs, where reducing the size of the time dimension decreases performance. However, there is also some redundancy in the temporal information, as can be seen by comparing feature sets 10 and 11. As a post-hoc analysis we tested variants of CAE 11 using smaller encoded kernel shapes, (1, 2) and (1, 1), and found a decrease in performance below (1, 4). This effect can also be seen in models 4-7, where performance decreases as width is reduced from 4 to 1.

Table 1: Details of the CAEs and results for 14 feature sets. CAEs differ in the kernel shape of L1 and L8, and the shape of the encoded layer (determined by strides). Results are given in terms of i) the LMER model fit (AIC), and ii) the percentage of imitated drum sounds for which the rating ∼ distance slope is significantly less than 0 (α < 0.05). Note: lower AIC = better model fit.

Fig. 2: Fitted rating ∼ distance slopes for each imitated sound using the best performing feature set (11). A negative slope indicates a decrease in perceptual similarity with an increase in distance, i.e. sounds for which the feature set performs well.
Regarding the baseline features, both TEMP and MFCC show similarly poor performance in terms of AIC (MFCC performs slightly better in terms of accuracy). This indicates that although the learned temporal features appear to be most important for our task, the 5 heuristic temporal features are not sufficient to capture the salient cues used by listeners. The benefits of learned features over MFCCs concur with previous work [10]; however, we see a greater disparity in performance here, which may be specific to the sounds used in the evaluation (in [10] a much wider range of sounds was used). The improved performance of PK08 compared to the other baselines indicates that this measure is somewhat transferable to vocalised drum sounds, although it still only achieves an accuracy of 53%.
Further analysis of the LMER model for the best performing feature set (11) shows the individual slopes for each drum sound (Fig. 2). Here we observe considerable variation between the imitated sounds. In particular, we note that the 5 sounds for which the upper CI crosses 0 (3 kicks and 2 toms) are all pitched (although they are not the only pitched sounds in the dataset; indeed, all the toms are pitched). This suggests that reducing the size of the encoded spectral shape to 1 may work best over all the drum sounds used here, although the predictions for some pitched sounds suffer as a result.
Finally, we note the slopes, although generally below 0, do not approach -1. Listener rating data is inherently noisy, and the concordance amongst listeners varies across the sounds. As such, there will clearly be a glass ceiling for performance, and a perfect model fit would not be useful for a real world application of the LMER model. Indeed, a perfect model fit is not desirable if one is interested in generalisability of the fitted LMER model.
CONCLUSIONS AND FUTURE WORK
In this paper we apply convolutional auto-encoders (CAEs) to query by vocalisation (QBV) for drum sound retrieval. We present a novel evaluation using perceptual similarity ratings between vocal imitations and the imitated drum sounds, providing insight into how learned features perform at predicting these ratings. Specifically, we compare CAEs that differ in both the size and shape of the encoded layer, in terms of the spectral and temporal dimensions. Our experiments show that CAEs outperform 3 sets of heuristic features by a considerable margin. Furthermore, we show that reducing the size of the encoded layer height (frequency) increases the predictive power of the learned features, yet reducing the width (time) has the opposite effect. This finding is partly unexpected given that drum sounds generally have a similar overall temporal envelope (attack followed by a decay), however understandable given that we compare within-class sounds (e.g. kick, snare etc.), which are also likely to share similar spectral distributions. For future work we would like to investigate more fine-grained morphological features to represent the temporal evolution that appears to be so important here. In addition we would like to investigate the generalisability of the best performing fitted LMER model to other QBV tasks, to determine how a model fitted on one set of sounds and similarity ratings performs given a larger sound library, as might be used in a typical music production environment.
"year": 2018,
"sha1": "eab422b69605e047597a743ebb657b2728ad7f8e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1802.05178",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "37d36a04ac8671c6f85bee4ad20efca8792cf0bd",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
Psychological distress resulting from the COVID-19 confinement is associated with unhealthy dietary changes in two Italian population-based cohorts
Purpose: To examine the relationship between psychological distress resulting from the COVID-19 lockdown and dietary changes.

Methods: Cross-sectional analysis from 2 retrospective Italian cohorts recruited from May to September 2020: (1) the Moli-LOCK cohort consists of 1401 participants from the Moli-sani Study (n = 24,325) who were administered a telephone-based questionnaire to assess lifestyles and psychological factors during confinement; (2) the ALT RISCOVID-19 is a web-based survey of 1340 individuals distributed throughout Italy who self-responded to the same questionnaire using Google® forms. Psychological distress was measured by assessments of depression (PHQ-9 and depressive items from the Screening Questionnaire for Disaster Mental Health, SQD-D), anxiety (GAD-7), stress (PSS-4), and post-traumatic stress disorder (SQD-P). Diet quality was assessed either as changes in consumption of ultra-processed foods (UPF) or adherence to the Mediterranean diet (MD).

Results: In ALT RISCOVID-19, increased UPF intake was directly associated with depression (both PHQ-9 and SQD-D; p < 0.0001), anxiety (p < 0.0001), stress (p = 0.001) and SQD-P (p = 0.001); similar results were obtained in the Moli-LOCK cohort except for perceived stress. When psychometric scales were analysed simultaneously, only depression (SQD-D) remained associated with UPF (both cohorts). In both cohorts, psychological distress poorly influenced changes toward an MD, except for depression (SQD-D), which was inversely associated in the ALT RISCOVID-19 participants (β = − 0.16; 95% CI − 0.26, − 0.06).

Conclusions: Psychological distress from the COVID-19 confinement is directly associated with unhealthy dietary modifications in two Italian cohorts. In view of possible future restrictive measures to contain the pandemic, public health actions are warranted to mitigate the impact of psychological distress on diet quality.
Introduction
The coronavirus disease 2019 (COVID-19) has aggressively spread across the globe since late 2019 and, as of September 10, 2021, more than 220 million confirmed cases and over 4 million deaths have been reported [1].
The COVID-19 pandemic led the Italian government to enact unprecedented restrictive measures nationwide between March 9 and May 3 2020, in order to limit the fast spread transmission of the disease and the subsequent overwhelming of hospitals and health care systems [2].
Moli-LOCK Study Investigators are listed in the Supplementary Appendix.
During the lockdown, Italian residents were required to stay at home and only essential needs/services were permitted, with huge limitations in terms of working activities that were converted into home working as far as possible. As a consequence, daily routine was dramatically disrupted with potential negative effects on mental health and dietary habits: the latter are highly susceptible to psychological wellbeing [3,4] and vice versa [5,6].
The negative effects of lockdown on psychological health were already documented by studies analysing the effect of confinement during the SARS and MERS outbreaks in 2003 and 2012 [7,8], and there is evidence that the psychological impact of quarantine is wide-ranging, substantial, can be long lasting [9] and may translate to severe distress eventually leading to significant psychological disorders [7,9].
More recently, a study on 6,882 individuals from 59 countries found that the COVID-19 pandemic has affected global mental health, with increased prevalence of depression and anxiety [14]. Consistently, findings from a national Irish cohort indicated that the COVID-19 quarantine was associated with stress and a significant increase in symptoms of depression and anxiety [15].
However, the impact of psychological distress induced by the COVID-19 lockdown on dietary changes during confinement has been poorly investigated. Available data from a national French survey indicated that depression and anxiety clustered with unfavourable nutritional changes or behaviours during the lockdown period [16]. Similarly, data from a web-based survey in France revealed that a negative change in mental health was strongly associated with adverse changes in nutrition [17].
We therefore aimed at examining the association between psychological distress induced by the confinement with concurrent changes in diet quality, by using data on 2,741 Italian men and women from two population-based cohorts recruited from May 2020 to September 2020.
The rationale of our study relies on the well-established association between mental health and diet quality reported in a number of epidemiological settings [18][19][20].
Study design and participants
The Moli-LOCK Study The Moli-LOCK Study was designed as an observational cohort study aiming to retrospectively investigate dietary, lifestyle and psychosocial changes that possibly occurred after Italy's lockdown resulting from the COVID-19 pandemic, that is in the period of time between March 9 2020 and May 3 2020. The population of the Moli-LOCK Study consists of a subgroup of men and women who had first been recruited in the larger Moli-sani Study cohort [21] in 2005-2010 (n = 24,325) and then re-examined in 2017-2020 (n = 2572). From May 2020 to September 2020, subjects were contacted by telephone by trained researchers to assess lifestyle, dietary and psychosocial changes during the confinement resulting from the COVID-19 pandemic. A total of 1,563 completed the questionnaire. As compared to the eligible sample who did not participate (n = 1009), individuals included in the study were slightly younger (66.4 ± 8.6 vs 67.5 ± 9.3; p value < 0.001 for analysed vs excluded) and had higher education (upper secondary school or higher = 66.2% vs 61.5%, respectively, p = 0.015) while no differences were found for sex (men = 43.7% vs 47.1%, respectively, p = 0.09) and presence of chronic diseases (cardiovascular disease = 7.4% vs 9.3%, p = 0.08; cancer = 9.4% vs 9.2%, p = 0.72, respectively). The Moli-LOCK study complies with the Declaration of Helsinki and was granted the approval of the Ethics Committee of the IRCCS Neuromed, Pozzilli (IS), Italy. Verbal informed consent during the telephone interview was obtained from all participants. After exclusion of participants with missing information on one or more psychometric scales and dietary data, we finally analysed 1401 subjects (Supplementary Fig. 1).
The ALT RISCOVID-19 Study ALT RISCOVID-19 is a cross-sectional web-based survey carried out among Italian adults aged ≥ 18 years, resident in Italy during the confinement.
Data were collected through a structured self-administered questionnaire created in Google® Forms (Google LLC, Menlo Park, CA, USA). All subjects aged ≥ 18 from the general population, residing in Italy during the Italian lockdown, with access to electronic devices and the Internet (e.g. personal computer, smartphone) and fluent in Italian were eligible.
Individuals were invited to participate in the survey via social media (Facebook® and Whatsapp®) and e-mail contacts and the data collection occurred between June and September 2020. A total of 2060 subjects throughout Italy completed the survey.
Before starting the questionnaire, participants were informed about the aims of the study and were ensured that all data would be used for research purposes only; participants were required to accept the data sharing and privacy policy before taking part in the study. To protect the confidentiality of the participants, their personal information and data were anonymous, according to the provisions of the General Data Protection Regulation (GDPR 679/2016).
The study was granted the approval of the Ethics Committee of the IRCCS Neuromed, Pozzilli (IS), Italy.
Participants with missing data on one or more psychometric scales and diet were excluded, and the analytic sample consisted of 1340 subjects (Supplementary Fig. 1).
Data collection
The ALT RISCOVID-19/Moli-LOCK questionnaire was constructed by the Department of Epidemiology and Prevention at the IRCCS Neuromed. The questionnaire was divided into modules including questions on sociodemographic characteristics, medical history, COVID-19 related aspects, dietary and lifestyle practices, psychological assessment and sources of information (Supplementary Appendix 1).
Psychological distress included assessment of depression, anxiety, stress and post-traumatic stress disorder (PTSD) that were respectively assessed by administration of validated versions of the Patients' Health Questionnaire (PHQ-9) [22], the General Anxiety Disorder (GAD-7) [23], the 4-item Perceived Stress Scale (PSS-4) [24] and the Italian version of the Screening Questionnaire for Disaster Mental Health (SQD) [25] that includes nine items to assess PTSD (SQD-P) and six items to screen for depression at the same time (SQD-D).
Dietary changes were evaluated through a 41-food-item questionnaire asking participants to indicate whether during lockdown their consumption of each food was increased, decreased or unchanged as compared to their usual intake just before the confinement (Supplementary Appendix 1).
Modifications in adherence to the MD were evaluated by computing a score (MD score) which scored foods according to their position in the Mediterranean diet pyramid: we assigned 1 point to increased consumption of highly recommended foods (e.g. fresh fruits, nuts and seeds, fresh vegetables) and minus 1 point if that consumption was decreased; conversely, decreased consumption of foods less frequent in the Mediterranean diet and for which it is recommended from low to moderate intake was given 1 point (e.g. red meat, white meat, milk and yogurt) and minus 1 point was assigned to increased intake. Unchanged intakes received 0 point (Supplementary Table 1). The MD score potentially ranged from -18 to 18 with higher values reflecting maximal switching to an MD.
Changes in UPF consumption were assessed through questions aimed at evaluating modifications possibly occurring during lockdown in the intake of 19 food items grouped according to the NOVA classification system based on the degree of food processing (Supplementary Table 2). Briefly, we categorized each food item into one of the following categories according to the extent and purpose of food processing: (1) unprocessed or minimally processed foods (e.g. fruits and vegetables, meat and fish); (2) processed culinary ingredients (e.g. butter, oils); (3) processed foods with salt, sugar, or oil (e.g. canned or bottled vegetables and legumes, canned fish); (4) UPF containing predominantly industrial substances and little or no whole food (e.g. carbonated drinks, processed meat, packaged snacks). For the purpose of the present analyses we used the fourth NOVA category.
We then computed an UPF score by assigning 1 point to increased intake, − 1 point to decreased intake while unchanged intakes received 0 points (Supplementary Table 3). The score potentially ranges from -19 to 19 with higher values indicating an increase in UPF consumption during confinement.
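A minimal sketch of the two change-score constructions is given below; the food lists are short stand-ins for the full item sets in Supplementary Tables 1 and 3, and the item names are invented for illustration.

```python
MD_RECOMMENDED = {"fresh_fruit", "nuts_seeds", "fresh_vegetables"}      # +1 if increased
MD_LIMITED = {"red_meat", "white_meat", "milk_yogurt"}                  # +1 if decreased
UPF_ITEMS = {"carbonated_drinks", "processed_meat", "packaged_snacks"}  # NOVA group 4

def md_score(answers: dict) -> int:
    """answers maps each food item to +1 (increased), -1 (decreased) or 0 (unchanged)."""
    return (sum(answers.get(f, 0) for f in MD_RECOMMENDED)
            - sum(answers.get(f, 0) for f in MD_LIMITED))

def upf_score(answers: dict) -> int:
    """Positive values indicate an increase in UPF consumption during confinement."""
    return sum(answers.get(f, 0) for f in UPF_ITEMS)
```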
Statistical analyses
Data are represented as number and percentage in parentheses (%) for categorical variables, or mean and standard deviation (± SD) for continuous variables.
We tested the association of psychological distress (used as the exposure variable) with MD and UPF (dependent variables) by using multivariable linear regression analysis. Each psychometric score was scaled by its standard deviation so that regression coefficients indicate the variation in diet quality for 1 standard deviation change (cohort specific) for each measure of psychological distress. Associations were obtained by using the following models: Model 1 (adjusted for age and sex); multivariable Model 2 that was further controlled for main sociodemographic factors, namely geographical area (for ALT RISCOVID-19 only), living area, educational level, household income, marital status, number of cohabitants, occupational class, history of chronic diseases, diagnosis of ≥ 1 diseases during confinement, use of psychoactive drugs before and during lockdown; multivariable Model 3 as in Model 2 and including all the psychometric scales simultaneously.
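As an illustration of the exposure scaling and the age- and sex-adjusted Model 1, the sketch below uses statsmodels OLS in place of the SAS procedures actually used by the authors; the DataFrame and column names are assumptions.

```python
import statsmodels.formula.api as smf

# Scale the exposure so the coefficient is per 1-SD (cohort-specific) change
df["phq9_z"] = df["phq9"] / df["phq9"].std()
model1 = smf.ols("upf_score ~ phq9_z + age + C(sex)", data=df).fit()
beta = model1.params["phq9_z"]
ci_low, ci_high = model1.conf_int().loc["phq9_z"]   # 95% confidence interval
```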
Missing data on covariates (educational level = 7; household income = 381; marital status = 23; occupational class = 32; number of cohabitants = 22; living areas = 45; history of chronic disease = 21; diagnosis of disease during lockdown = 14; use of psychoactive drugs during lockdown = 79) were handled using multiple imputation (SAS PROC MI, followed by PROC MIANALYZE) to maximize data availability for all variables, avoid bias introduced by not-at-random missing (MNAR) data patterns and achieve robust results over different simulations (n = 10 imputed datasets).
Statistical tests were two-sided, and Bonferroni correction for multiple comparisons was applied to each analysis according to the number of independent tests carried out (see Tables). Data analysis was generated using SAS/STAT software, version 9.4 of the SAS System for Windows©2009.
Results
The pooled sample of 2741 subjects had a mean age of 58.1 years (± 15.3) and included 40.8% men. The largest proportion lived in Southern Italy (75.0%), was well-educated (postgraduate education 45%), had high occupational class (43.2% professional/managerial) and prevalently lived in pairs (73.1%) (Table 1). MD score and UPF score were inversely correlated (Spearman correlation coefficient = −0.11; p < 0.0001, data not shown), and correlations among psychometric scales were also moderate to high (Supplementary Table 4).
Overall, we found a slight shift toward the MD (0.4 ± 2.2) and a mild decrease in UPF intake (−0.3 ± 4.0) during the lockdown (Table 1).
On the contrary, psychological distress was unlikely to be relevant to changes in MD, with the exception of depressive symptoms as assessed by the SQD-D that resulted inversely associated with modifications toward an MD (β = − 0.09, 95% CI − 0.15, − 0.03; Table 2, models 2 and Supplementary Fig. 2A). Results from multivariable-adjusted model 3, including all the psychometric scales simultaneously, showed that only depression as measured by the SQD-D was independently associated both with reduced adherence to an MD (β = − 0.16, 95% CI − 0.26, − 0.06) and an increase in the intake of UPF (β = 0.14, 95% CI 0.05, 0.24) ( Table 2, models 3).
A number of diet-related factors resulted associated with psychological distress as well (Tables 4 and 5). Analyses on the ALT RISCOVID-19 sample showed major behavioural changes associated with depressive symptoms (SQD-D) while other indicators of psychological distress did not independently correlate with changes in eating behaviours during lockdown. In particular, depressive symptoms resulting from a traumatic life event were positively associated with higher body weight during confinement, an increase in the number of daily meals and reduced water intake (Table 4). Analyses from the Moli-LOCK cohort provided similar results, with the SQD-D being more strongly associated with changes in eating behaviours, as reflected by increased body weight, lower consumption of local food and more frequent food supplement use (Table 5). In this Moli-LOCK cohort we also observed some behavioural changes associated with PSS-4 ( Table 5).
Discussion
We analysed the association between psychological distress and diet quality during the Italian nationwide confinement imposed from March 9 to May 3 2020 to contain the spread of the SARS-CoV-2 virus. Our findings from two Italian population-based cohorts indicated that higher levels of psychological distress experienced during lockdown were directly associated with unhealthy dietary changes in the same timeframe.
Diet quality was measured by an assessment of the consumption of either Mediterranean diet or UPF; in the ALT-RISCOVID-19 web survey, participants with higher psychological distress reported an increase in highly processed foods, which are usually rich in sugar, saturated fats and dietary cholesterol, and a concurrent lower adherence to an MD. Similarly, in the Moli-LOCK cohort, psychological distress was linked to an increased consumption of UPF.
When psychometric scales were analysed simultaneously, only current depressive symptoms, as measured by the SQD-D, remained associated with unfavourable dietary changes in both cohorts. This may be explained by the fact that, differently from the PHQ-9, the SQD-D is conceived to detect depressive symptoms immediately after the occurrence of a traumatic life event which is the case of the confinement resulting from the COVID-19 pandemic, and thus reinforcing the hypothesis that lockdown-induced depression could have had an impact on dietary changes.
Our findings align with those from a large study on UK individuals indicating that depressive symptoms during the COVID-19 lockdown were associated with an increased risk of experiencing any changes in eating behaviours possibly through emotional eating, even though this potential link could not be directly assessed [26]. Our a priori research hypothesis was that psychological distress during confinement, possibly resulting from disruption of daily life and social isolation, would negatively impact diet quality and not vice versa, due to the relatively short timeframe in which such association has been analysed. Mental health can be influenced by diet through several mechanisms, including chronic inflammation as follows: healthy diets are usually associated with lower inflammation [27,28] while raised inflammation has been associated with a broad range of psychiatric disorders [29]; thus, positive health-related behaviours might reduce the risk of adverse mental health also by a favourable modulation of the inflammatory pathway. Therefore, it is likely that mental health and dietary habits show a bidirectional relationship, especially over long time ranges. However, in our study the association of psychological distress and modification of diet quality has been investigated within a two-month period, which is a relatively short period of time for the inflammatory process possibly caused by unhealthy diets to have an effect on mental health.
The association between psychological health and diet is largely supported by previous investigations conducted outside the context of a pandemic [4-6, 18, 19], but only a few studies to date have analysed the association of psychological distress resulting from the COVID-19 lockdown with diet quality during the same timeframe.
An early investigation on 5545 Spanish adults conducted 2 weeks after the nationwide lockdown highlighted that a healthy and balanced diet was predictive of reduced levels of anxiety symptoms and depression [30]. A national survey on 42,000 Brazilian adults found that people with previous diagnosis of depression were at higher risk of unhealthy behaviours during the COVID-19 pandemic quarantine, including low frequency of fruit or vegetable consumption and elevated frequency of UPF intake [31].
A small-sized web-based survey on French adults highlighted that the lockdown led to a decrease in nutritional quality of diet on average, which could be partly explained by an increase of mood as a food choice motive [32]. In the NutriNet-Santé cohort study including nearly 40,000 French adults [16], higher scores for depression and anxiety clustered with unfavourable nutritional changes or behaviours during the lockdown period. More recent studies are concordant in showing that psychological distress experienced during the lockdown and dietary changes are highly correlated; indeed, a community-based cross-sectional study on Portuguese adults indicated that there was a significant indirect effect of the experienced psychosocial impact of COVID-19 pandemic on disordered eating behaviours mediated through psychological distress [33]. An online survey on a small sample of Italians provided evidence of the negative effects of isolation and lockdown on emotional wellbeing, and, relatedly, on eating behaviours [34]. Perceived worsened diet quality during the COVID-19 pandemic possibly led to higher odds for clustered mental ill-health, including anxiety and depressive symptoms in a large cohort of Swedish people [35].
A favourable association between psychological health and the MD has been previously reported in numerous cross-sectional [36][37][38] and longitudinal analyses [39,40] from population studies; consistently, a diet rich in UPF was unfavourably associated with mental health, being associated with an increased risk of incident depression [41,42], while the association with other psychological disorders, such as stress or anxiety, has not been extensively addressed, with few exceptions [43]. Our study is one of the first to provide evidence of an association of anxiety, stress and PTSD symptoms with a diet rich in UPF.
The role of psychological distress, including depression and anxiety, in the development and progression of cardiovascular disease is well established [44,45], as is that of unhealthy and unbalanced diets, which are associated with a variety of negative health outcomes [46,47].
Strengths and limitations
A major strength of this study is the use of two population-based cohorts to examine the impact of psychological stress resulting from the lockdown on modifications of diet quality. In addition, the analyses adjusted for a number of covariates, which at least in part limits confounding, and data were collected shortly after the end of the Italian lockdown.

Table 2. Association of psychological factors with changes in adherence to the Mediterranean diet (MD) and consumption of ultra-processed foods (UPF) during the Italian lockdown resulting from the COVID-19 pandemic (March 9-May 3, 2020) in the ALT RISCOVID-19 web-based survey (n = 1340), using data obtained from multiple imputation. Model 1: multivariable-adjusted linear regression including age and sex. Model 2: multivariable-adjusted linear regression including age, sex, geographical area, living area, educational level, household income, marital status, number of cohabitants, occupational class, history of chronic diseases, diagnosis of ≥ 1 diseases during lockdown and use of psychoactive drugs before or during lockdown.

However, our results should be interpreted in light of some limitations. First, the ALT RISCOVID-19 is a web-based survey, with potential selection bias and self-reported information that may lead to misreporting. However, our analyses also rely on data collected within the Moli-LOCK study, which includes participants from a well-established population-based prospective cohort who were interviewed by telephone, and this limits misreporting. Both cohorts used retrospective data; thus, recall bias cannot be excluded, and changes in dietary intake were self-reported rather than assessed objectively through the administration of dietary questionnaires before and after lockdown. Another limitation is the lack of assessment of some personality variables of the participants, such as psychological resilience, which clearly influences the psychological distress a person may experience in a situation of confinement such as this.
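As an illustration of how the adjusted models described in the Table 2 legend could be specified, the sketch below fits Model 1 and Model 2 with statsmodels on a single imputed dataset; the file name and all column names are hypothetical, and in practice the estimates from the multiply imputed datasets would be pooled (e.g. via Rubin's rules).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names; the study's actual variable coding is not public.
df = pd.read_csv("alt_riscovid19_imputed_1.csv")  # one of the multiply imputed datasets

# Model 1: psychometric score adjusted for age and sex only.
model1 = smf.ols("md_adherence_change ~ phq9_score + age + sex", data=df).fit()

# Model 2: the full covariate set listed in the table legend.
covariates = ("age + sex + geographical_area + living_area + education + income"
              " + marital_status + n_cohabitants + occupational_class"
              " + chronic_disease + new_diagnosis_lockdown + psychoactive_drugs")
model2 = smf.ols(f"md_adherence_change ~ phq9_score + {covariates}", data=df).fit()

# Beta and 95% CI for the psychometric scale of interest.
print(model2.params["phq9_score"], model2.conf_int().loc["phq9_score"].tolist())
```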
Finally, this study was conceived to test the hypothesis that lockdown-induced psychological distress could be unfavourably associated with changes in diet quality during confinement; however, the direction of the association cannot be established, owing to our cross-sectional design.
It is possible that mental distress results in dietary changes [48]; however, robust longitudinal data in recent years have strengthened the evidence that healthy diets may reduce the risk of depressive symptoms [49], while Western-like diets and processed foods are independent predictors of depression and anxiety [47,50].
Whatever the cause-effect relation, the present findings show that, during the confinement resulting from the COVID-19 outbreak, psychological distress and diet quality were closely related.

Table 3. Association of psychological factors with changes in adherence to the Mediterranean diet (MD) and consumption of ultra-processed foods (UPF) during the Italian lockdown resulting from the COVID-19 pandemic (March 9-May 3, 2020) in the Moli-LOCK cohort (n = 1401), using data obtained from multiple imputation. Model 1: multivariable-adjusted linear regression including age and sex. Model 2: multivariable-adjusted linear regression including age, sex, living area, educational level, household income, marital status, number of cohabitants, occupational class, history of chronic diseases, diagnosis of ≥ 1 diseases during lockdown and use of psychoactive drugs before or during lockdown. Regression coefficient β with 95% confidence interval (95% CI) obtained from multivariable-adjusted linear regression Model 3, including all Model 2 covariates and all the psychometric scales simultaneously. Significant associations surviving Bonferroni correction for multiple testing (α = 0.0008) are highlighted in bold. Abbreviations: PHQ-9, Patient Health Questionnaire; GAD-7, General Anxiety Disorder scale; PSS-4, Perceived Stress Scale; SQD, Screening Questionnaire for Disaster Mental Health; SQD-P, post-traumatic stress disorder symptoms; SQD-D, depressive symptoms. | 2021-12-01T06:23:53.208Z | 2021-11-30T00:00:00.000 | {
"year": 2021,
"sha1": "324ae4ac8e7be7f8c1f9af722040797773df87a3",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00394-021-02752-4.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "c26d40d3dff70b20d0cbd98033463489695abcb2",
"s2fieldsofstudy": [
"Psychology",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219551457 | pes2o/s2orc | v3-fos-license | Hypophosphataemia after ferric carboxymaltose is unrelated to symptoms, intestinal inflammation or vitamin D status
Background Intravenous iron replacement is recommended for iron-deficient patients with inflammatory bowel disease (IBD), but may be associated with hypophosphataemia, predisposing to osteomalacia and fractures. This study aimed to evaluate the incidence and risk factors for hypophosphataemia following intravenous ferric carboxymaltose (FCM) in patients with IBD. Methods This prospective observational study of patients with and without IBD evaluated serum phosphate for 28 days following intravenous FCM, and assessed associations with symptoms, markers of inflammation and vitamin D status. Results Twenty-four patients with IBD (11 with Crohn’s disease [CD], 13 with ulcerative colitis [UC], mean age 45 years [range 19–90], 7 female), and 20 patients without IBD (mean age 56 [22–88] y, 11 female), were included. Overall, serum phosphate declined by a mean of 36% at Day 7, with a mean fall of 42% (SD 19%) at some time point over 28 days (p < 0.001). Twenty-four of 44 (55%) patients developed moderate to severe hypophosphataemia (serum phosphate < 0.6 mmol/L). No differences between patients with and without IBD were seen, but patients with CD had greater decline in phosphate than those with UC. There was no association between hypophosphataemia and symptomatic adverse events, faecal calprotectin, C-reactive protein, albumin, platelet count, 25(OH) vitamin D, or 1,25(di-OH) vitamin D. Serum phosphate < 1.05 mmol/L on Day 2 predicted susceptibility to moderate-severe hypophosphataemia (OR 7.0). Conclusions Hypophosphataemia following FCM is common, unrelated to symptomatic adverse events, baseline intestinal or systemic inflammation, or vitamin D status.
Background
Iron deficiency, with or without anaemia, is one of the commonest systemic complications in patients with inflammatory bowel disease (IBD), affecting between 13 and 90% of patients [1][2][3][4]. Iron deficiency anaemia in patients with IBD is considered a marker of disease activity, and is associated with a reduced quality of life, impaired cognition and social functioning, and an increased risk of hospitalisation [5][6][7]. Recognition and correction of iron deficiency independent of IBD activity is associated with an improvement in these parameters [7,8], and intravenous iron is recommended for patients with moderate to severe anaemia or intolerance to oral iron formulations [2,4,5,9].
Of particular relevance to patients with IBD, repeated iron infusions, malnutrition and vitamin D deficiency may aggravate the risk of hypophosphataemia [19]. Furthermore, intact fibroblast growth factor-23 (iFGF-23) is upregulated by systemic inflammation, which may, theoretically, be a further risk factor for hypophosphataemia in patients with IBD [20]. This study aimed to prospectively evaluate the incidence and severity of intravenous FCM-associated hypophosphataemia in patients with and without IBD, its association with symptoms, and to ascertain associated risk factors, including systemic and intestinal inflammation and vitamin D status, to enable potential preventative strategies.
Subjects
Consecutive non-pregnant iron-deficient patients with and without IBD attending a tertiary gastroenterology unit, who were deemed to require intravenous FCM as per clinician judgement, were invited to participate. Patients had evidence of iron deficiency based upon ferritin < 30 ng/ml or ferritin < 100 ng/ml with evidence of blood loss or inflammation. Non-IBD controls had iron deficiency secondary to occult or overt gastrointestinal bleeding (n = 16) or menstrual blood loss (n = 4).
Protocol and analytical assays
One gram FCM was administered to all patients. Baseline demographics and disease characteristics, haematological and biochemical indices, 25-hydroxy vitamin D and serum for analysis of iFGF-23, c-terminal FGF-23 (cFGF-23, the cleaved fragment of iFGF-23) and vitamin D binding protein (DBP), were collected. Faecal samples were analysed for calprotectin by fluorescence enzyme immunoassay (Phadia 100 EliA™ Calprotectin, Thermo Scientific, Scoresby, Australia). Clinical and biochemical assessment was repeated 2, 4, 7, 14 and 28 days after infusion, including direct questioning of gastrointestinal symptoms and adverse events, and serum stored for analysis of iFGF-23 and cFGF-23 at Days 2, 7 and 28.
Endpoints
The endpoints for this study included the mean reduction in serum phosphate from Day 0 to Day 7, the proportion of patients experiencing moderate (Grade 3 toxicity, serum phosphate < 0.6 mmol/L) to severe (Grade 4 toxicity, < 0.3 mmol/L) hypophosphataemia according to the Common Terminology Criteria for Adverse Events (CTCAE) [22] at any stage during the follow-up period, the difference in the rate of hypophosphataemia between patients with and without IBD, and the correlation between hypophosphataemia and symptomatic adverse events, also graded according to CTCAE. The association of hypophosphataemia with the degree of systemic (C-reactive protein) or intestinal (faecal calprotectin) inflammation, and with serum vitamin D status, was additionally evaluated.
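As a concrete rendering of the severity grading used for these endpoints, the sketch below maps a serum phosphate value onto a CTCAE-style grade. The Grade 3 (< 0.6 mmol/L) and Grade 4 (< 0.3 mmol/L) cut-offs are those stated above; the Grade 2 boundary and the lower limit of normal are assumptions for illustration, since laboratory reference ranges vary.

```python
def hypophosphataemia_grade(phosphate_mmol_l: float, lln: float = 1.0) -> int:
    """CTCAE-style hypophosphataemia grade from serum phosphate (mmol/L).

    Grade 3 (< 0.6) and Grade 4 (< 0.3) cut-offs follow the Endpoints
    section; the Grade 2 boundary (< 0.8) and the default lower limit
    of normal (lln) are assumptions, not values taken from the study.
    """
    if phosphate_mmol_l < 0.3:
        return 4  # severe
    if phosphate_mmol_l < 0.6:
        return 3  # moderate
    if phosphate_mmol_l < 0.8:
        return 2  # assumed boundary
    if phosphate_mmol_l < lln:
        return 1  # below the (assumed) lower limit of normal
    return 0      # within normal range

assert hypophosphataemia_grade(0.26) == 4  # lowest value observed in the study
```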
Statistical analyses
Results were analysed by SPSS v23 (IBM Corp) and Graphpad Prism v6 (GraphPad Software, Inc., California, USA) using paired and unpaired t-tests, Fisher's exact test, and multiple regression analyses as appropriate. A two-tailed p-value of < 0.05 was considered statistically significant for all associations.
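For illustration, a minimal sketch of the paired comparison used for the serum phosphate endpoints is shown below; the values are made-up sample data chosen only to mimic the reported mean fall, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Made-up paired phosphate values (mmol/L) for five patients; not study data.
baseline = np.array([1.10, 1.05, 1.20, 0.95, 1.15])
day7     = np.array([0.70, 0.66, 0.80, 0.58, 0.75])

t_stat, p_value = stats.ttest_rel(baseline, day7)  # two-tailed paired t-test
mean_fall_pct = 100 * (1 - day7 / baseline).mean()
print(f"mean fall at Day 7 = {mean_fall_pct:.0f}%, p = {p_value:.4f}")
```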
Ethical considerations
The protocol for this study was approved by the Office of Research and Ethics at Eastern Health (LR 17-2017, approved 17 March 2017), and the study was performed in accordance with Australian regulations and the principles of the Declaration of Helsinki 1964 and its later amendments. Written, informed consent was obtained from all participants included in this study.
Results
Twenty-three non-IBD controls and 27 patients with IBD were recruited. Three patients from each group were excluded due to loss to follow-up within 1 week following infusion, leaving 24 patients with IBD (11 with Crohn's disease [CD] and 13 with ulcerative colitis [UC]) and 20 non-IBD controls for analysis (Table 1). Characteristics of disease in patients with IBD are shown in Table 2.
Mean plasma haemoglobin improved similarly (p = 0.460) after 28 days in patients with (121 to 134 g/L) and without IBD (124 to 132 g/L).
Change in serum phosphate
Serum phosphate fell in 42 of 44 patients (95%) following FCM. The overall mean fall in serum phosphate across all patients and time periods was 0.47 mmol/L ± SD 0.24 (42 ± 19%) compared with baseline. The lowest serum phosphate recorded was 0.26 mmol/L, in a patient with UC at Day 7. Mean serum phosphate was 36% lower at Day 7, and remained lower at Day 28 compared with baseline (mean 0.95 vs 1.10 mmol/L, p = 0.001) (Fig. 1a). Three patients had serum phosphate below 0.6 mmol/L at Day 28 (Table 3). The time to lowest serum phosphate was 2 days in 1 patient, 4 days in 8 patients, 7 days in 22 patients, 14 days in 10 patients, and 28 days in 3 patients.
Serum phosphate fell below 0.6 mmol/L in 24 of 44 patients (55%), at similar rates in patients with and without IBD (Table 4).
Patients with CD had a significantly greater maximal reduction in serum phosphate than patients with UC (mean reduction 51% vs 32%, p = 0.029) (Fig. 1b).
In patients with IBD, there was no correlation between minimum serum phosphate and markers of inflammation (faecal calprotectin, C-reactive protein, albumin, platelet count) or baseline 25-hydroxy vitamin D (Fig. 2c-f).
A significant correlation between serum phosphate at Day 2 and the minimum phosphate during follow-up was observed (r = 0.67, p < 0.001). When serum phosphate at Day 2 was ≥ 1.05 mmol/L, the risk of Grade 3 or 4 hypophosphataemia during follow-up was 23%, compared with a 67% risk (odds ratio 7.0, 95% CI 1.6-32.0) when Day 2 phosphate was < 1.05 mmol/L.
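To make the threshold analysis concrete, the sketch below recomputes an odds ratio with a Wald 95% CI from a 2x2 split of the cohort. The counts used (21/31 events below the Day 2 threshold, 3/13 above) are illustrative values chosen to be consistent with the reported 67% and 23% risks, not figures taken from the study.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Wald 95% CI for a 2x2 table:
    a, b = events / non-events in the exposed group,
    c, d = events / non-events in the unexposed group."""
    or_ = (a / b) / (c / d)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Illustrative counts consistent with the reported risks:
# Day 2 phosphate < 1.05 mmol/L: 21 of 31 developed Grade 3-4 (68%)
# Day 2 phosphate >= 1.05 mmol/L: 3 of 13 developed Grade 3-4 (23%)
or_, (lo, hi) = odds_ratio_ci(a=21, b=10, c=3, d=10)
print(f"OR = {or_:.1f}, 95% CI {lo:.1f}-{hi:.1f}")  # ~7.0, ~1.6-31
```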
Discussion
The risk of hypophosphataemia in patients receiving intravenous FCM is increasingly recognised; however, the risk in patients with IBD compared with patients without IBD, and the associated predisposing factors, have not been previously reported. This prospective observational study demonstrated a mean phosphate reduction of 42% following FCM, similar in patients with and without IBD, with more than half the patients experiencing moderate to severe hypophosphataemia. Although patients with CD experienced hypophosphataemia more frequently than patients with UC, patients with IBD per se were not at greater risk than patients without. Importantly, neither the severity of inflammation (assessed by circulating or faecal markers) nor baseline vitamin D status predicted the risk of hypophosphataemia. Most cases of FCM-associated hypophosphataemia are asymptomatic. Indeed, our data show that delayed adverse events secondary to FCM have no relationship with hypophosphataemia: the 6 patients with IBD and 8 patients without IBD who experienced fatigue, arthralgia, myalgia, headache, dyspnoea or dizziness had phosphate levels similar to those of participants without these symptoms. Hence, hypophosphataemia is difficult to recognise unless serum phosphate is specifically measured. In the largest randomised clinical trial of patients with IBD who received FCM, mean serum phosphate was noted to fall by 38% from baseline (1.12 ± 0.22 mmol/L) to Week 2 (0.69 ± 0.24 mmol/L) [13]. Hypophosphataemia as an adverse event was reported in only 6 of 244 patients in that study, and in none in a follow-up maintenance study of 104 patients [12], presumably because most cases were asymptomatic.
Though most cases of hypophosphataemia appear transient, a minority of patients have a persistent reduction in serum phosphate for up to several months. In our current study, serum phosphate remained lower at Day 28 compared with baseline (mean 0.95 vs 1.10 mmol/L, p = 0.001), and in 3 patients remained below 0.6 mmol/L. In another recent study, 56.9% of 52 patients receiving FCM were noted to have moderate to severe hypophosphataemia (defined in that study as < 0.65 mmol/L) at 2 weeks, with 13.7% of patients continuing to have serum phosphate below this level at 6 weeks, and some for up to 6 months [17]. Retrospective studies have also reported hypophosphataemia for as long as 6 months following intravenous iron [16,23].
This persistent reduction in serum phosphate, particularly in the context of repeated iron infusions, may contribute to hypophosphataemic osteomalacia with fractures [18,24,25], the clinical recognition of which may be delayed owing to the non-specific nature of the symptoms reported by patients and often normal plain radiography. Given that patients with IBD may have nutritional deficiencies and reduced bone density, they may be particularly susceptible to these complications [19]. Furthermore, iFGF-23 is also known to be upregulated by systemic inflammation, potentially further predisposing to hypophosphataemia [20]. Nonetheless, baseline levels of iFGF-23 were similar in patients with and without IBD in the current study. Though not performed in this study, measurement of bone turnover markers following FCM in a future study may help to stratify the long-term risk of osteomalacia.
Eliciting risk factors for hypophosphataemia following FCM, especially in patients with IBD, is therefore crucial in order to develop potential preventative strategies. The absence of any association of systemic or intestinal inflammation, or of vitamin D components (25-hydroxy vitamin D, 1,25(di-OH) D, free or bioavailable 25-hydroxy vitamin D), with the risk of hypophosphataemia means that correction of these factors may not be the answer.
Interestingly, a significant correlation between DBP and the lowest serum phosphate was noted. DBP is a liver-derived α-globulin, structurally similar to albumin, which binds about 85-90% of circulating vitamin D metabolites [26]. DBP may control the availability of vitamin D metabolites, especially 25-hydroxy vitamin D, to tissues by allowing only the small free fraction to enter cells, passively through diffusion across cell membranes, or actively via interaction with the membrane glycoproteins megalin and cubilin [27]. Higher concentrations of DBP may directly reduce circulating 1,25 dihydroxy vitamin D [28]. In contrast, FGF-23 inhibits CYP27B1, the enzyme which 1α-hydroxylates 25-hydroxy vitamin D to 1,25 dihydroxy vitamin D. The potential relationship between DBP, FGF-23, 1,25 dihydroxy vitamin D and serum phosphate warrants further study. The mechanism of hypophosphataemia following intravenous iron administration, specifically after FCM, has been investigated in numerous studies [15,29,30]. Consistent with previous reports, the hormone iFGF-23, which primarily inhibits renal phosphate reabsorption, producing phosphate wasting, but also reduces circulating 1,25(di-OH) D and thus intestinal phosphate absorption [15,29,30], was demonstrated to have risen significantly by Day 2 in our study. Changes in C-terminal FGF-23 should generally be interpreted with some caution, as the assay detects both cFGF-23 fragments and the intact molecule. Given that cFGF-23 levels remained relatively stable, impaired intracellular degradation of iFGF-23 is the likely explanation for the observed changes in FGF-23 [15]. The timing of the peak of iFGF-23 is uncertain, with initial studies describing a peak at Day 1 [15] and other studies reporting a peak at Day 2 [29,30]. Days 1 and 2 have not both been sampled in a single study, and hence the precise trajectory of iFGF-23 in the first 2-3 days remains uncertain. The decline by Day 7 has been consistently reported previously [15,29,30].
Though patients with and without IBD did not differ in rates of hypophosphataemia, patients with CD had a higher risk of hypophosphataemia than those with UC. Subgroup analysis based upon the location of CD was not possible, since only 2 of 11 patients with CD had isolated colonic disease, with most having ileal or ileocolonic disease. Given that phosphate is absorbed in the small intestine, the difference between patients with CD and UC might be accounted for by a greater susceptibility to malabsorption of phosphate in patients with CD and requires further investigation. It is worth noting, however, that baseline phosphate was similar in patients with CD and UC (p = 0.34).
FCM is one of the most commonly prescribed formulations of intravenous iron worldwide, but emerging studies demonstrate a significantly higher risk of hypophosphataemia following FCM compared to other intravenous iron formulations such as iron dextran, iron isomaltoside, and ferumoxytol [15,17,31]. The reasons for this difference are unclear, but may be secondary to a differential effect on cleavage of iFGF23 in osteocytes by differing carbohydrate moieties [15]. Although apparently consistent, such serological effects need to be balanced against the relative clinical safety and the limited sequelae of FCM noted to date, particularly in comparison to other formulations such as iron dextran. Longer studies with more rigorous endpoints will enable a clearer distinction to be made.
The strength of this study lies in the uniform, prospective collection of data in patients with and without IBD, and its ability to clarify rates of hypophosphataemia as well as pertinent risk factors. Nonetheless, it must be acknowledged that fractional urinary phosphate excretion was not measured in patients. Previous studies have shown that an increase in phosphaturia is the primary mechanism for hypophosphataemia following intravenous iron, but whether this effect differs between patients with and without IBD, or is influenced by systemic inflammation, is an area for further study. Secondly, the duration of persistence of hypophosphataemia beyond 4-6 weeks and effect on bone turnover markers in this population remains undetermined, particularly in relation to the presence or type of IBD, and remains an avenue for further investigation.
Conclusions
Hypophosphataemia following FCM occurs at similar rates in patients with and without IBD, and is not influenced by inflammation, or vitamin D status. Alternative intravenous iron formulations associated with a lower risk of hypophosphataemia might be considered for iron replacement in such patients.
Additional file 1: Table S1. Delayed adverse events (from 1 h after infusion to 28 days follow-up). Figure S1. Correlation between minimum serum phosphate during follow-up and multiple markers in patients with IBD. | 2020-06-10T14:26:48.482Z | 2020-01-20T00:00:00.000 | {
"year": 2020,
"sha1": "c38daa5a846c0625231ac35952d9572456761747",
"oa_license": "CCBY",
"oa_url": "https://bmcgastroenterol.biomedcentral.com/track/pdf/10.1186/s12876-020-01298-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c38daa5a846c0625231ac35952d9572456761747",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
104400340 | pes2o/s2orc | v3-fos-license | Single stage extraction and recovery of hexavalent chromium using blended TOA-TOMAC in palm oil-based diluent via supported liquid membrane process
The diluent is an important component in the supported liquid membrane (SLM) process. The conventional diluents used in SLM are usually flammable, volatile, and toxic. To promote sustainable development, palm oil was incorporated in the SLM for the removal and recovery of chromium. SLM is a three-phase system in which an organic phase, containing the carrier in a diluent, is immobilized in a membrane support and set between a simulated chromium solution and a sodium hydroxide solution, which act as the feed and stripping phases, respectively. Both solutions were pumped into the membrane support cell in a recycled operation for about 5 hours. To monitor the changes in chromium ion concentration in both phases, the chromium ion concentration in the feed and stripping phases as a function of time was analysed using atomic absorption spectrometry (AAS). Several parameters, namely the type of stripping agent, the diluent composition and the carrier concentration, were investigated. Results showed that about 75 and 73% of chromium was extracted and recovered, respectively, at the best conditions of using sodium hydroxide, a palm oil-kerosene mixture (50:50) and TOA-TOMAC (0.20-0.20 M). Thus, palm oil is regarded as feasible as a greener diluent in the SLM process for chromium ion extraction.
Introduction
Chromium is widely used in various types of industry owing to its favourable magnetic properties, hardness and corrosion resistance. It is also known as an alloying material for steel, commonly used for surface coating and refractory material. Other applications of chromium are in the preservation of wood, leather tanning, synthetic manufacturing, industrial catalysts, and colour pigments for paints [1][2]. The most common oxidation states of chromium in aqueous solution are hexavalent chromium (Cr(VI)) and trivalent chromium (Cr(III)). However, owing to its high solubility and bioavailability, hexavalent chromium is more toxic than trivalent chromium, due to its ability to form various anionic species in aqueous solution, which are harmful, carcinogenic and mutagenic towards the environment [3]. The World Health Organization (WHO) and the USEPA have declared chromium one of the most toxic metals for the environment [4]. Thus, the removal of chromium from industrial effluents has become a necessary task in ensuring environmental safety. Previously, several conventional techniques have been employed to remove chromium from industrial waste and wastewater, such as precipitation [5], solvent extraction [6], reverse osmosis [7], and electrodialysis [8]. Every conventional method has its own advantages and limitations. For instance, precipitation achieves a high percentage of heavy metal removal with the addition of other chemicals; however, it generates sludge, which leads to extra disposal costs. Meanwhile, solvent extraction involves high consumption of chemicals, whilst reverse osmosis is a simple operation with effective removal of contaminants but requires an expensive monitoring system and high energy consumption. On the other hand, electrodialysis implies high capital and operating costs due to fouling and scaling of the membrane. However, these methods only remove chromium without recovering it from the wastewater. Hence, in view of the drawbacks of the conventional methods, liquid membrane (LM) technology has attracted much interest and appears to be an advanced technique, owing to its great potential in the separation of various organic compounds and metallic ions [9]. The main advantage of LM technology is removal and recovery in one single step. Therefore, the targeted solute can be simultaneously removed and recovered.
Principally, LM technology comprises three main configurations or designs, namely bulk liquid membrane (BLM), emulsion liquid membrane (ELM) and supported liquid membrane (SLM). In an LM system, there are three main phases: the feed, membrane and stripping phases. Theoretically, the mechanism of LM transport is that the targeted solute is transported from the feed to the stripping phase across the LM phase, which acts as a barrier [10]. Usually, the diffusion of the targeted solute is facilitated by a carrier in the LM phase. In BLM, a relatively thick layer of immiscible LM, containing the carrier in an organic diluent, separates the bulk feed and stripping phases [11]. In ELM, the LM phase contains the carrier as well as a surfactant in an organic diluent to form an emulsion [12]. Meanwhile, in SLM, the LM phase contains the carrier in an organic diluent and is placed in a microporous polymer membrane support [13].
Among these configurations, SLM is one of the simplest and most efficient, and it is easy to scale up. The main components in an SLM system are the carrier, the stripping agent and the diluent. The carrier is chosen based on selectivity for the targeted solute ion present in the feed phase and can be acidic, basic or solvating. Acidic carriers such as 2-hydroxy-5-nonylacetophenone oxime (LIX84I), 5,8-diethyl-7-hydroxy-dodecan-6-oxime (LIX63), and phosphorus derivatives (di-2-ethylhexyl phosphoric acid (D2EHPA) and di-2,4,4-trimethylpentyl phosphinic acid (Cyanex 272)) are generally used for extraction based on a cation exchange mechanism [14][15][16][17]. Meanwhile, basic carriers such as trioctylamine (TOA), tridodecylamine (TDA) and tri-n-octylmethylammonium chloride (TOMAC) undergo metal ion extraction through an anion exchange reaction [18][19][20]. The solvating or neutral carriers are basic in nature, hence extracting either neutral metal complexes or acids through solvate formation [21][22]. To date, the application of mixed carriers is gaining attention for improving metal ion extraction efficiency via a synergistic effect. Previously, Sulaiman and Othman [23] proved that about 83% of nickel ions were successfully extracted via a mixture system of 0.08 M LIX63 and 0.02 M D2EHPA, with a maximum synergistic enhancement factor, Rmax, of 29.56. This is also supported by Singh et al. [24], who reported a similar observation: more than 95% of uranium (VI) was recovered in 360 minutes of the SLM process using a binary mixture of 0.60 M PC88A and 0.15 M Cyanex 923 in dodecane. A good diluent should provide favourable characteristics such as high solubility and a high flash point [25]. So far, petroleum-based diluents such as kerosene, n-heptane and toluene have been employed for metal ion extraction in liquid membrane technology [26]. Nevertheless, this type of diluent is toxic, non-renewable, non-biodegradable, flammable and volatile in nature. The introduction of green organic diluents such as vegetable oils has attracted the attention of several researchers as a way to promote greener processes in the future [27]. According to Jusoh et al. [28], a 30/70 combination of kerosene to palm oil also leads to high separation of succinic acid from fermentation broth. Another observation, reported by Chakrabarty et al. [29], is that about 95% of mercury was extracted using coconut oil as the diluent in the SLM process. Besides, Othman et al. [30] also found that almost 100% of chromium was extracted and recovered using a diluent containing a kerosene-palm oil mixture with a ratio of 3:7. On the other hand, the stripping agent also plays a crucial role as a binder for the targeted solute at the membrane-stripping interface. Commonly, metallic ions extracted by basic carriers are suitably stripped using neutral or alkaline solutions, and vice versa, in order to create the chemical potential difference between the membrane and stripping phases [31].
In the present investigation, the removal and recovery of chromium (VI) using SLM was studied. The liquid membrane formulation placed in the membrane support comprised a blended carrier of TOA and TOMAC in a mixture of kerosene and palm oil as the organic diluent. Several parameters affecting the extraction and recovery performance of chromium (VI), such as the type of stripping agent, the composition of the diluent and the concentration of the carrier, were examined.
Reagents and materials
Potassium dichromate (K2Cr2O7), supplied by Sigma Aldrich in powdered form, was used as the source of hexavalent chromium, Cr(VI), in the simulated wastewater. Trioctylamine (TOA) and tri-n-octylmethylammonium chloride (TOMAC) as carriers were procured from Fluka. Kerosene and cooking palm oil as diluents were obtained from Merck and Mart, respectively. Meanwhile, sodium hydroxide and sulfuric acid as stripping agents were purchased from Merck. Besides, polyvinylidene fluoride (PVDF), ordered from Millipore, was used as the membrane support for the organic solution, with an average effective pore diameter of 0.22 μm, an average thickness of 125 μm and a porosity of 75%. All these materials were of analytical reagent grade and used directly as received from the manufacturer without further purification.
In fact, tertiary amines (TOA) and quaternary ammonium salts (TOMAC) are the most widely used ionic carriers in chromium ion extraction using the SLM process, owing to their high coordination ability and the stability of the complexes formed [32][33]. Besides, from the carrier-chromium interaction perspective, the more highly structured basic mixed carrier (TOA-TOMAC) is favoured owing to the high degree of solvation of the carriers with the chromium ion in the membrane, hence enhancing the stability of the carrier-chromium complexation [34].
SLM set up and extraction
The SLM rig set-up comprises a membrane cell, feed and strip vessels, a double-head peristaltic pump, flowmeters and tubing. There are three main phases involved in the SLM process: the feed, membrane and stripping phases. The feed phase contains the simulated chromium (VI) solution, prepared by mixing an appropriate quantity of potassium dichromate in deionized water, with a measured pH of 2. Subsequently, the organic liquid membrane was prepared by dissolving the corresponding volume of carriers in the organic diluent to obtain carrier solutions of different concentrations, ranging from 0.05 to 0.35 M. The PVDF membrane support (10 cm x 3.5 cm) was impregnated with the organic solution for 24 hours, left to drip for a few seconds, and then placed in the membrane cell as shown in figure 1. Meanwhile, the stripping solution (sodium hydroxide) of the desired concentration was prepared by dissolving an appropriate weight of pellets in deionized water. About 300 mL each of the feed and strip solutions were added to the feed and strip vessels, respectively. These solutions were pumped into the membrane cell in a recycled operation. Both aqueous feed and stripping solutions were magnetically stirred to avoid concentration polarization at the membrane interfaces and in the bulk of the solutions. Ten-mL samples of the feed and stripping solutions were periodically taken every 30 minutes for 5 hours to determine the changes in chromium ion concentration using atomic absorption spectrometry (AAS) at a wavelength of λ = 540 nm. A new membrane support was used for each experiment. The experiments were carried out at ambient temperature (25 ± 1°C), with standard deviations of chromium ion concentrations of less than ± 5%. The extraction and recovery percentages were calculated using equations (1) and (2), respectively:

Extraction (%) = ([Cr]f,0 − [Cr]f) / [Cr]f,0 × 100 (1)

Recovery (%) = [Cr]s / [Cr]f,0 × 100 (2)

where [Cr]f,0 represents the initial chromium concentration in the feed phase, [Cr]f indicates the final concentration of chromium in the feed phase and [Cr]s denotes the concentration of chromium in the stripping phase.
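A minimal sketch of how equations (1) and (2) are evaluated from the AAS readings is given below; the concentrations used are illustrative values chosen to reproduce the headline 75%/73% figures, not measurements from the study.

```python
def extraction_pct(cr_feed_initial: float, cr_feed_final: float) -> float:
    """Equation (1): percentage of Cr(VI) removed from the feed phase."""
    return (cr_feed_initial - cr_feed_final) / cr_feed_initial * 100

def recovery_pct(cr_feed_initial: float, cr_strip: float) -> float:
    """Equation (2): percentage of the initial Cr(VI) found in the stripping phase."""
    return cr_strip / cr_feed_initial * 100

# Illustrative AAS readings in mg/L:
print(extraction_pct(100.0, 25.0))  # 75.0
print(recovery_pct(100.0, 73.0))    # 73.0
```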
Determination of permeability value.
Membrane permeability is defined as the ability of a membrane to allow the desired solute to pass through. The permeability of the chromium ions transported from the feed to the stripping phase is determined using equation (3) [35]:

ln(c0/c) = (p A / V) t (3)

where c0 is the initial concentration of chromium ions in the feed phase, c is the concentration of chromium ions at a given time, p is the permeability value (cm s-1), A is the effective area of the membrane (cm2), V is the volume of the aqueous feed phase (cm3) and t is the time.
The liquid membrane loss per unit area was determined gravimetrically as LM loss = (m1 − m2)/A (4), where m0, m1 and m2 are the weights of the dry, wet, and used membrane support, respectively, and A is the effective area. The weight of the dry membrane is the weight of the membrane before use, while the weight of the wet membrane designates the weight of the membrane after impregnation with the organic liquid membrane. The weight of the used membrane refers to the weight of the membrane after the extraction process.
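In practice, the permeability p in equation (3) is obtained from the slope of ln(c0/c) against time, and the membrane loss from the weighings above. The sketch below does both; the sampling times, concentrations and masses are made-up illustrative values, while the membrane area (35 cm2) and feed volume (300 mL) follow the experimental set-up described earlier.

```python
import numpy as np

def permeability(times_s, conc_mg_l, area_cm2, volume_cm3):
    """Fit ln(c0/c) = (p*A/V)*t (equation (3)) and return p in cm/s."""
    c = np.asarray(conc_mg_l, dtype=float)
    y = np.log(c[0] / c)
    slope = np.polyfit(np.asarray(times_s, dtype=float), y, 1)[0]  # = p*A/V
    return slope * volume_cm3 / area_cm2

def membrane_loss(m_wet_g, m_used_g, area_cm2):
    """Equation (4): liquid membrane loss per unit area, (m1 - m2)/A, in g/cm2."""
    return (m_wet_g - m_used_g) / area_cm2

# Illustrative data: samples every 30 min over the first 2 h of a run.
t = [0, 1800, 3600, 5400, 7200]        # sampling times, s
c = [100.0, 92.0, 85.0, 78.5, 72.5]    # Cr(VI) in the feed, mg/L
print(permeability(t, c, area_cm2=35.0, volume_cm3=300.0))        # ~4e-4 cm/s
print(membrane_loss(m_wet_g=1.25, m_used_g=1.18, area_cm2=35.0))  # 2e-3 g/cm2
```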
Results and discussion
3.1. Transport mechanism of chromium (VI) ion extraction in SLM

Chromium is able to exist in aqueous solution as HCrO4−, CrO42−, HCr2O7− and Cr2O72−, depending on the pH of the solution and the total concentration of chromium. When the pH is lower than 0.5, i.e. strongly acidic, chromic acid (H2Cr2O7) is predominant. Meanwhile, Cr2O72− appears to be the main anion in an acidic aqueous phase. However, Cr2O72− is converted into HCrO4− in acidic aqueous solution when the concentration of chromium lies below (1.26-1.74) × 10−2 mol/L. Beyond this critical concentration, Cr(VI) is generally found as CrO42− [37]. As the concentrations used here lie below the range mentioned above, HCrO4− is the prevailing anion in the chromium solution used as the feed phase in this work. The mechanism for the simultaneous extraction and recovery of chromium ions using the SLM process is illustrated in figure 2. In this study, the blended TOA-TOMAC and NaOH act as the carrier and stripping agent, respectively. In the SLM transport process, the diffusion of chromium ions takes place via the following steps: a) both basic carriers, TOA (R3N) and TOMAC (R4N+Cl−), in the membrane phase are protonated by the stripping agent, NaOH, at the membrane-stripping interface (equation (5)); b) during extraction, the protonated carrier molecules in the membrane phase react chemically with the hydrochromate ions at the feed-membrane interface, forming chromium-carrier complexes (equation (6)); c) subsequently, these R3NNa+HCrO4− and R4N+HCrO4− complexes diffuse reversibly across the membrane phase from the feed-membrane interface to the membrane-stripping interface, where the stripping reaction with NaOH takes place (equations (7) and (8)).
3.2. Effect of stripping agent type

To elucidate the effect of the stripping agent type, two stripping agents, namely sodium hydroxide (NaOH) and sulphuric acid (H2SO4), were employed for the simultaneous extraction and recovery of chromium ions in the SLM process. The extraction and recovery performance of chromium ions after 5 hours of experiment, as a function of the stripping agent type, is tabulated in table 1. It can be clearly seen that NaOH showed the better performance, with 75 and 73% extraction and recovery, respectively. Conversely, H2SO4 provided only up to 42 and 10% extraction and recovery, respectively. As both carriers used are of the basic type, alkaline or neutral stripping agents are preferable for creating the chemical potential difference between the membrane and stripping phases [31]. Fundamentally, both ionization energy and electron affinity become higher going up a column of the periodic table. Since the oxygen atom is above the sulphur atom in the periodic table, it has a higher ionization energy as well as a higher electron affinity. Therefore, it has a high tendency to rapidly break the ionic bond with the sodium ion, hence enhancing the stripping efficiency. In addition, the lowered stripping efficiency for the sulfuric acid solution is probably due to competition between sulfate and chromium ions, which works in favour of forming a more stable carrier-chromium complex of larger solvation degree, allowing chromium to stay in the organic phase [34]. As a result, NaOH was employed as the stripping agent throughout this study.
Effect of diluent composition
The feasibility of palm oil as a substitute diluent for the SLM process was evaluated by studying the effect of various diluent compositions on the extraction and recovery performance of chromium ions, as shown in figure 3. Meanwhile, the variation of the permeability values, viscosity and liquid membrane loss with respect to the diluent composition is tabulated in table 2. According to figure 3, both the liquid membrane containing 100% kerosene and the mixture of palm oil and kerosene (50:50) achieved high percentages of chromium ion extraction (73%). Likewise, the recovery results showed that the liquid membrane with 100% kerosene provided the highest recovery (81%), followed by the one containing the palm oil-kerosene mixture (73%), within 5 hours of experiment. Besides, a change in the permeability value from 6.3 to 4.2 × 10−4 cm s−1 upon mixing with 50% palm oil was observed. The lower permeation of chromium ions across the membrane phase is caused by the increase in the liquid membrane viscosity from 32 to 41 cP. Theoretically, a liquid membrane with high viscosity tends to retard the permeation of chromium ions across the membrane phase, thereby preventing a substantial amount of chromium ions from being transported to the stripping phase. This is in line with Kumar et al. [37], who claimed that a high-viscosity solvent creates high resistance and hinders the mass transfer of solute ions in the membrane phase.
On the other hand, the liquid membrane containing 100% palm oil provided the lowest extraction (22%) and recovery (8%) performance. This is probably due to its viscosity being the highest (72 cP), which hinders the permeation of chromium ions across the membrane phase. The low permeation of chromium ions into the membrane phase is in accordance with Chakrabarty et al. [29], who indicated that the accumulation of a larger amount of oil phase on the membrane surface, as a consequence of a high-viscosity solvent, seems to block chromium ions from reacting with the carrier at the feed-membrane interface; hence, a longer time is needed for chromium ions to pass through the membrane phase. Besides, Chang et al. [38] also found that the non-polar nature of vegetable palm oil tends to interact weakly with polar compounds such as chromium, resulting in poor solubility of the solute ion in the organic membrane phase. As for the liquid membrane loss analysis, the liquid membrane containing only palm oil suffered a higher loss than both the palm oil-kerosene mixture and pure kerosene. This means the liquid membrane containing only palm oil is unstable and easily lost to the aqueous phase.
Thus, it can be inferred that the order of diluent compositions in terms of extraction and recovery efficiency is as follows: 100% kerosene > mixture of kerosene and palm oil (50:50) > 100% palm oil. However, as a way to promote green technology, the mixture of kerosene and palm oil (50:50) was employed for the subsequent investigations.
Effect of carrier concentration
The carrier has profound significance in the SLM process, acting as a shuttle transporting metal ions from the feed to the stripping phase through the membrane phase. Theoretically, the rate of metal ion permeation increases with a rise in carrier concentration [36]. Therefore, in order to understand the effect of carrier concentration on the extraction and recovery performance of chromium ions, equimolar concentrations of blended TOA-TOMAC were varied from 0.05 to 0.35 M, as shown in figure 4. At low carrier concentration, the extraction was limited, in agreement with [39], who found that a low quantity of carrier molecules in the membrane phase causes less transport of the targeted solute into the stripping phase. However, increasing the carrier concentration up to 0.20 M provides a higher number of carrier molecules available for the formation of the chromium-carrier complex at the feed-membrane interface. This behaviour led to a higher permeation rate of chromium ions, which almost doubled (4.2 × 10−4 cm s−1), as well as an improved extraction efficiency. Beyond 0.20 M, a sudden fall in extraction efficiency (26%) was observed. The reduction of the permeation rate to 3.1 × 10−4 cm s−1 can be attributed to the viscosity effect. This was evidenced by an increasing trend of viscosity from 36 to 53 cP as the carrier concentration was increased from 0.05 to 0.35 M. The viscous membrane phase seems to inhibit the diffusion of chromium ions across the membrane phase, thereby leading to extraction inefficiency. A similar result was reported by Rehman et al. [40], who indicated that, according to the Stokes-Einstein relation in equation (10), diffusivity is inversely proportional to viscosity. Thus, a lower viscosity yields higher solute ion diffusion through the membrane phase and vice versa.
D = KT / (6πnr) (10)

where D is the diffusivity, T is the absolute temperature, K is the Boltzmann constant, r is the ionic radius of the solute and n is the viscosity of the organic phase in cP.
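To illustrate the magnitude of this viscosity effect, the sketch below evaluates the Stokes-Einstein diffusivity for the two reported membrane viscosities; the temperature and the ionic radius assumed for HCrO4− are illustrative values, not quantities given in the paper.

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def stokes_einstein_d(temp_k, viscosity_cp, ionic_radius_m):
    """Diffusivity D = kT / (6*pi*eta*r), equation (10); eta converted from cP to Pa*s."""
    eta_pa_s = viscosity_cp * 1e-3
    return K_BOLTZMANN * temp_k / (6 * math.pi * eta_pa_s * ionic_radius_m)

r = 0.3e-9                                 # assumed ionic radius (~0.3 nm), illustrative
d_low = stokes_einstein_d(298.15, 36, r)   # membrane at 0.05 M carrier (36 cP)
d_high = stokes_einstein_d(298.15, 53, r)  # membrane at 0.35 M carrier (53 cP)
print(d_high / d_low)  # ~0.68: diffusivity drops ~32% as viscosity rises 36 -> 53 cP
```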
On the other hand, figure 4(b) illustrates the recovery performance of chromium ions as a function of carrier concentration. Notably, the recovery efficiency showed a trend similar to that of the extraction, which implies simultaneous extraction and stripping reactions during the SLM process. At a low concentration of 0.05 M, the recovery efficiency showed a slow and steady increase, up to 15% over 5 hours of experiment. A possible explanation for this is the insufficient number of carrier molecules present in the membrane phase, which retards the transport of chromium ions into the stripping phase. This behaviour was also observed by Sulaiman and Othman [3], who suggested that a low carrier concentration tends to reduce the mass transfer of chromium ions, so that the complexes accumulate at the feed-membrane interface without being transported into the stripping phase. Upon further increasing the carrier concentration up to 0.20 M, the recovery percentage increased significantly, up to 73% in 5 hours of experiment. This is due to the high number of carrier molecules, which facilitates the permeation of chromium complexes through the membrane phase. Basically, the basic carriers TOA and TOMAC, containing amine groups, have a high tendency to form hydrogen bonds with water molecules in the aqueous phase and are thus able to create water channels through the membrane pores. This behaviour promotes direct channelling between the feed and stripping phases. Over a longer experiment, the liquid membrane in the pores slowly oozes out into the aqueous phase, thus affecting the recovery efficiency. This is in accordance with Zhang et al. [35], who claimed that the lifetime of the SLM process relies not only on the porosity but also on the ability of the membrane to keep the pores free from water channels. Meanwhile, a further increase of the carrier concentration to 0.35 M resulted in only a steady increase of the recovery percentage, up to 29% in 5 hours of extraction time. The low recovery might be due to the highly viscous membrane phase being crowded with a high number of chromium-carrier complexes that are not stripped. It can also be noted that the liquid membrane loss approximately doubled (2.1 × 10−3 g/cm2). According to Huidong et al. [41], the liquid membrane is not completely insoluble in an aqueous solution, and a certain degree of solubility exists at the aqueous-membrane interface. An increase in carrier concentration increases the solubility of the carrier at the aqueous-membrane interface. As a result, this phenomenon reduces the interfacial tension between the two phases, and the aqueous solution is able to wash away the liquid membrane. Thus, the best condition was obtained at 0.20 M TOA-TOMAC, with 75 and 73% extraction and recovery, respectively.
Conclusion
Through this work, the SLM process is shown to be a promising method owing to its capability of extracting the hazardous hexavalent chromium complex from aqueous solution. Besides, the feasibility of palm oil as a substitute green diluent in the SLM process has been proven. This green process has high potential to be applied at the industrial level for the treatment of wastewater containing heavy metal ions. | 2019-04-10T13:12:51.811Z | 2018-12-24T00:00:00.000 | {
"year": 2018,
"sha1": "90dea1bf1a8ffe6376f6411f670113497a0a3dce",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/458/1/012030",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "d854d2c925b6c1941eb3d9b6da373ac0940e403c",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Physics",
"Chemistry"
]
} |
10533619 | pes2o/s2orc | v3-fos-license | Autonomous Driving in Reality with Reinforcement Learning and Image Translation
Supervised learning is widely used to train autonomous driving vehicles, but it requires large amounts of labeled data. Reinforcement learning can be trained without abundant labeled data, but we cannot train it in reality because training would involve many unpredictable accidents. Nevertheless, training an agent with good performance in a virtual environment is relatively much easier. Because of the huge difference between virtual and real environments, filling the gap between them is challenging. In this paper, we propose a novel framework combining reinforcement learning with an image semantic segmentation network to make the whole model adaptable to reality. The agent is trained in TORCS, a car racing simulator.
Introduction
In the artificial intelligence field, autonomous driving is a significant task closely related to computer vision. The intention of this task is to design a system that autonomously controls vehicles, performing actions such as steering, accelerating, and braking. Essentially, autonomous driving is a model that interacts with the environment. In this task, on one hand, computer vision techniques can help the system extract and analyze information from the driving scene; on the other hand, the task also requires many other techniques for decision making.
There are mainly two categories of methodologies for this task: supervised learning and reinforcement learning. Both face obstacles. When using supervised learning for autonomous driving, the most crucial problem is that training relies on a large amount of labeled data, which requires much human effort. It is also hard to develop an end-to-end model, because the ground truth for actions is not objectively determined and action labels include personal bias. Moreover, state-of-the-art methods that do not follow an end-to-end style are mostly very complex, involving many empirical rules. Therefore, reinforcement learning seems to fit this task better, because it can be trained without human-labeled training data. Reinforcement learning avoids the need for a large amount of labeled data and the potential shortcoming of human bias.
Nevertheless, reinforcement learning also meets many problems. The most fundamental one is that a reinforcement learning model cannot be trained in reality, because the training process would involve many collisions and other unpredictable situations. Consequently, most reinforcement learning models for this task are trained in virtual simulators. This approach to training also brings about problems of its own: the model's performance when applied in reality will largely depend on how realistic the simulator is. In other words, real driving scenes are more complicated than the virtual simulator, which also challenges the generalization ability of the model.
In this paper, our approach focuses on how to fill the gap between virtual and real. We want to solve mainly two problems: the difference between the virtual simulator and reality in terms of driving scenes, and the complexity and noise of real scenes. Therefore, we use a translation network to transform the virtual driving scene into a semantic segmentation image, and use these semantic images as the state input to the agent. When applying the model in reality, we perform semantic segmentation on the real driving scene and use the segmentation result as input to our model. We consider that this translation fills the gap between virtual and real. We also consider semantic segmentation to be an appropriate level of abstraction of the real driving scene, which reduces its complexity while still retaining most of the useful information, such as lanes and barriers.
Our framework has several advantages, as follows: • Compared with the huge demand for labeled data in state-of-the-art supervised learning, our framework does not rely on any labeled data.
• By training in a virtual environment and transferring to the real world, we do not need to confront the danger and enormous cost of failures.

• The input to the reinforcement learning agent is a semantic segmentation image. The semantic image contains less information than the original image, but includes most of the information the agent needs to take actions. In other words, the semantic image discards useless information from the original image.

Figure 1. The reinforcement learning agent is trained as follows: initially, a virtual image is produced by the simulator TORCS. Then this image goes through an image semantic segmentation network. Finally, the semantic representation of the current driving scene is fed into the reinforcement learning agent. In the reinforcement learning framework, the agent observes the semantic image and chooses an action. After the agent takes an action, the environment gives a reward to the agent to help it adjust its parameters, and provides an image of the next state to the semantic segmentation network.
Supervised Learning for Autonomous Driving
Supervised learning has been used in autonomous driving for decades. Existing work can be categorized into two major styles: perception-based approaches and end-to-end approaches. Perception-based approaches detect intermediate information to help the agent make decisions. In early years, they detected driving-relevant objects such as lanes, cars, and pedestrians. Recently, approaches like DeepDriving involve CNNs [2] to detect more direct information, such as distances between cars and lanes.
End-to-end approaches seem more direct: they aim to map image inputs directly to driving action predictions. ALVINN [16] shows an early attempt of this kind; it learns the direct mapping with a shallow neural network. In recent years, the shallow network has been replaced by more powerful deep neural networks such as CNNs. NVIDIA [5] recently presented an end-to-end system with deep learning methods [3].
However, whichever style of supervised learning approach is employed, the training process requires large quantities of labeled data, and model performance is highly dependent on the quality and quantity of that data. Besides, supervised models have limited generalization ability, because real driving environments are numerous and extend far beyond the training data.
Reinforcement Learning for Autonomous Driving
With a lot of variations, reinforcement learning has been a common technique for many scenarios such as computer games [15] and robot control [10,8]. Recently, plenty of work [1,19] has contributed to building autonomous driving systems with reliable safety. However, the high dimensionality of the state space and the non-trivially large action range of practical real-world driving environments challenge the training of reinforcement learning: it is time-consuming to obtain an optimal policy under such high complexity. With the power of deep neural networks and deep reinforcement learning [11,15,18,12,14], a great step forward in handling such complexity has been made recently. Nonetheless, both the deep Q-learning method [15] and policy gradient methods [12] require interaction between the agent and the environment to obtain feedback and rewards. Obviously, training agents of an autonomous vehicle in real-world scenes is unrealistic because of the huge cost of every wrong action.
In order to avoid damage to the real world, reinforcement learning with driving simulators and transfer learning models has appeared. The training process of reinforcement learning with a driving simulator is safe and fast. However, in order to drive autonomously in the real world, the driving agent must be able to take actions according to an intricate visual field, which is much different from the virtual images we get from a driving simulator. Models trained on virtual data and simulators cannot perform well on real-world data.

Figure 2. Examples of translation from the original output obtained from the simulator to our intended input for the reinforcement learning agent. The first column is the original scene displayed by TORCS, the second column is the corresponding first-person perspective scene obtained by hacking the source code, the third column is the semantic segmentation of the first-person perspective scene, and the fourth column is the grayscale semantic perspective.

Over the past decade, many models [17,9,20] have contributed to transferring reinforcement learning. These models either first train a model in a virtual environment and then fine-tune it in the real environment [17], or learn an alignment between virtual images and real images by finding representations that are shared between the two domains [21]. Models trained first on virtual data significantly reduce the training time in the real world, but they still have to train in the real world and cannot radically avoid the risk of real-world damage. The other models try to learn an alignment between virtual and real images; in reality, the real visual field is much more complicated and noisier than virtual images, and the challenge for these models is that, under certain conditions, a sufficiently good alignment between virtual and real images cannot be found.
A recent work [23] managed to train the reinforcement learning agent only in an environment created by the simulator TORCS [22]. This novel framework demonstrated that, after training in a simulator, the autonomous agent is able to realize collision-free flight. Nonetheless, that work needs a non-trivial training environment to achieve its goal.
Scene Parsing
Semantic image segmentation is one part of our model. It can be seen as a pixel-level prediction task. Building on deep convolutional neural networks and fully convolutional networks [13], many works have achieved good performance in image segmentation [4]. What we use in our framework for image segmentation is PSPNet [24]. It extends pixel-level features with a specially designed global pyramid pooling module, and it also proposes an optimization strategy with a deeply supervised loss. This work achieves state-of-the-art performance on various datasets.
Proposed Framework
Our goal is to develop an autonomous driving model trained entirely in a virtual environment that can be applied to real-world driving scenes with good performance. One of the major challenges is that the training environment is generated by a simulator, so it differs substantially from real-world scenes in appearance. To tackle this problem, we propose an image translation process that converts virtual images into semantic layouts resembling the semantic segmentation of the corresponding real-world images. This idea is inspired by the work of [23], which tries to fill the gap between virtual and real with synthesized images. Our framework contains two parts: the image translation process and the reinforcement learning. The image translation process translates the virtual driving scene into a semantic representation; we use PSPNet as the semantic segmentation network in our model. To obtain the required information from the TORCS simulator, such as the first-person perspective, we use some source-code modifications ("hacking techniques"). A sample of this translation process is presented in Figure 1. Finally, we train an autonomous driving car using reinforcement learning on the semantic layouts obtained from the translation network; in the reinforcement learning part, we use the asynchronous advantage actor-critic (A3C) algorithm. In this section, we present the image translation process and how reinforcement learning is applied to train an autonomous driving agent, with an end-to-end sketch of the pipeline given below.
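The end-to-end loop of the framework can be summarized in a few lines. This is a minimal sketch under our own naming assumptions: grab_first_person_frame, predict, act, and apply are illustrative placeholders, not APIs from the paper or from TORCS.

```python
import numpy as np

def to_gray_layout(class_map, n_classes=5):
    """Map per-pixel class indices to evenly spaced gray levels."""
    scale = 255.0 / max(n_classes - 1, 1)
    return (class_map.astype(np.float32) * scale).astype(np.uint8)

def pipeline_step(simulator, segmenter, agent):
    frame = simulator.grab_first_person_frame()   # hacked TORCS view
    class_map = segmenter.predict(frame)          # PSPNet per-pixel classes
    layout = to_gray_layout(class_map)            # gray-scale semantic layout
    action = agent.act(layout)                    # one of 9 discrete actions
    reward, done = simulator.apply(action)        # environment feedback
    return layout, action, reward, done
```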
Image Translation Process
In order to ensure that our autonomous model, trained entirely in a virtual environment, performs well in the real world, we have to fill the gap between the training environment generated by simulators and real-world scenes. From our point of view, the semantic segmentation of the real-world visual field contains enough information for the agent to take driving actions. Inspired by the work of [23], we adopt a translation process that translates the virtual image into a semantic segmentation image. This translation is based on the hypothesis that the semantic segmentation of the real-world visual field is similar to the segmentation of a virtual image.
The image translation process consists mainly of an image segmentation network.
First, we obtain the first-person-perspective driving scene from the TORCS simulator. Then we use PSPNet to translate the virtual driving image into a semantic segmentation image. The output of this part is the semantic layout, which is fed into the reinforcement learning agent.
There is an obvious obstacle in this part: since the appearance of virtual driving images differs from that of real-world images, we cannot directly apply a segmentation tool pretrained on real-world datasets like Cityscapes [7] to the virtual images, and there are no semantic annotations for TORCS virtual images. We tackle this problem with source-code modifications ("hacking techniques") that will be explained in the experiment section.

Figure 3. The convolutional neural network in PSPNet is used to obtain the feature map of the last convolutional layer. To harvest representations of different sub-regions, PSPNet then applies a pyramid parsing module, followed by upsampling and concatenation layers that form the final feature representation. Finally, a convolution layer takes the final feature representation as input and outputs the per-pixel prediction.

We use Asynchronous Advantage Actor-Critic (A3C) to train the autonomous-driving vehicle, as it gave the best performance compared with other reinforcement learning structures. A3C is a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent to optimize deep neural network controllers, running multiple copies of the agent in parallel in order to learn more efficiently.
Reinforcement Learning for Training Autonomous Driving
In A3C there is a global network and multiple worker agents, each with its own set of network parameters. Each agent interacts with its own copy of the environment at the same time as the other agents interact with theirs.
The experience of each agent is independent of that of the others, which speeds up training and improves performance: the overall experience available for training becomes more diverse. Critically, the agent uses the value estimate (the critic) to update the policy more intelligently than traditional policy gradient methods do.
More details on implementing the A3C algorithm can be found in [14]; a schematic of one worker is sketched below.
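The asynchronous structure, with one global network and several workers holding private parameter and environment copies, can be illustrated as follows. This follows the generic A3C recipe of [14] rather than the authors' exact implementation, and all object interfaces here are assumptions.

```python
def a3c_worker(global_net, make_env, local_net, t_max=20):
    """One asynchronous A3C worker (schematic, duck-typed interfaces)."""
    env = make_env()                # this worker's private environment copy
    obs = env.reset()
    while not global_net.training_done():
        local_net.load_state(global_net.get_state())   # sync with global
        rollout = []
        for _ in range(t_max):      # collect a short rollout
            action, value = local_net.act(obs)
            next_obs, reward, done = env.step(action)
            rollout.append((obs, action, reward, value))
            obs = env.reset() if done else next_obs
            if done:
                break
        # Advantages come from the critic's value estimates; the resulting
        # policy and value gradients are pushed to the shared global network.
        global_net.apply_gradients(local_net.gradients(rollout))
```

In the experiments below, 12 such workers run in parallel threads.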
In order to encourage the agent to drive faster and avoid collisions, we define the reward function as

r_t = v_t cos(α) + β · v_t + γ · dist(t)_center,

where v_t is the speed (in m/s) of the agent at time step t, α is the angle (in rad) between the agent's velocity and the tangent line of the track, and dist(t)_center is the distance between the center of the agent and the middle of the track. β and γ are constants determined at the beginning of training; we take β = 0.006 and γ = −0.025 in our training.
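A direct implementation of this reward, using the constants quoted above, is a one-liner. Note that the functional form is our reconstruction from the definitions of v_t, α, and dist(t)_center, since the displayed equation did not survive extraction.

```python
import math

BETA, GAMMA = 0.006, -0.025   # constants from the paper

def reward(v_t, alpha, dist_center):
    """Speed along the track is encouraged; deviation from the
    track center is penalized (GAMMA < 0). Form reconstructed."""
    return v_t * math.cos(alpha) + BETA * v_t + GAMMA * dist_center
```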
Experiments
We perform experiments to compare the performance of our method with that of existing autonomous driving methods on real-world driving data; this set of experiments aims to evaluate our model's performance in real-world driving scenes. We also perform experiments comparing our method with basic reinforcement learning without the image translation network on the TORCS simulator; this set of experiments aims to show the advantages our model brings to the reinforcement learning process.
Autonomous Driving with Image Translation Network and RL on Real-World Driving Data
In this experiment, we trained our proposed reinforcement learning model with the image translation process. We first trained the semantic segmentation network (PSPNet) and then applied it to generate semantic parsing images, which are fed into our A3C reinforcement learning agent to train a driving policy. Finally, we applied the trained agent to real-world driving data to evaluate its performance. When applying the agent to the real-world driving data, we also use the semantic segmentation network to obtain the semantic input to the agent. For comparison, we also trained another reinforcement learning method without the image translation network in the TORCS simulator. This model is the same as our proposed model except for the translation network; we call it Basic RL.
Dataset
The real-world driving data come from [6], collected with a detailed steering-angle annotation for each frame; there are in total around 45k images in the dataset. To train the image translation network, we use two datasets separately. We collected 2k images from the TORCS simulator and used source-code modifications to obtain semantic labels for these virtual images: guided by earlier work, we modified the TORCS source code to show or hide each category of objects. In detail, we compare the original image with the image rendered after hiding all trees to obtain the exact pixels covered by trees, and we do the same for the other object categories. Processing the collected images in this way yields their semantic labels. The other dataset we use is Cityscapes [7], which contains around 25k real-world images with semantic segmentation annotations.
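The hide-and-diff labeling trick can be expressed compactly. Here render_scene and the category list are assumptions about the modified TORCS interface; the logic, rendering the scene with one category hidden and marking every pixel that changed, is as described above.

```python
import numpy as np

CATEGORIES = ["tree", "building", "car", "road"]   # illustrative list

def semantic_label(render_scene, frame_id, tol=0):
    """Per-pixel label map built by hiding one category at a time."""
    full = render_scene(frame_id, hide=None)            # H x W x 3 array
    label = np.zeros(full.shape[:2], dtype=np.uint8)    # 0 = background
    for idx, cat in enumerate(CATEGORIES, start=1):
        without = render_scene(frame_id, hide=cat)
        diff = np.abs(full.astype(np.int16) - without.astype(np.int16))
        changed = np.any(diff > tol, axis=-1)           # pixels that changed
        label[changed & (label == 0)] = idx             # first match wins
    return label
```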
Image Translation Process
Both scene-segmentation components of our framework, the one translating virtual images and the one translating real-world images, are PSPNet.
We train the network that translates virtual images into semantic ones, used in the training stage, on the dataset we collected from TORCS. We do not simply use the ground-truth labels extracted from the simulator as the output of this stage, because we want to avoid the bias between PSPNet's results and the extracted ground truth. More specifically, we want the training stage in TORCS and the testing stage on real-world data to share more similarity by using the same segmentation tool for both.
We train the network that translates real-world images into semantic ones, used in the real-world testing stage, on the Cityscapes dataset.
We ran experiments to determine whether to use the original RGB semantic result as the output of this stage and feed it into the agent, or to translate the semantic result into a gray-scale image before feeding it to the agent. We finally chose to use the gray-scale images as the output of this stage; the comparison is presented in the results section.
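The RGB-to-gray conversion only needs a fixed color-to-level lookup. The palette below is a made-up example, since the paper does not list its color scheme, but it shows the intended transformation.

```python
import numpy as np

# Hypothetical palette: RGB color of a semantic class -> gray level.
PALETTE = {
    (128, 64, 128): 50,    # road
    (107, 142, 35): 120,   # vegetation
    (70, 70, 70): 180,     # building / obstacle
    (70, 130, 180): 230,   # sky
}

def rgb_to_gray_layout(semantic_rgb):
    gray = np.zeros(semantic_rgb.shape[:2], dtype=np.uint8)
    for color, level in PALETTE.items():
        mask = np.all(semantic_rgb == np.array(color), axis=-1)
        gray[mask] = level
    return gray
```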
Reinforcement Training
We use the following structure in our A3C algorithm: the actor network is a 4-layer convolutional network with ReLU as the activation function. Its input is 4 consecutive frames, and its output is one of 9 discrete actions ("go straight with acceleration", "go left with acceleration", "go right with acceleration", "go straight and brake", "go left and brake", "go right and brake", "go straight", "go left", and "go right"). We train our reinforcement agent with 12 asynchronous threads and the RMSProp optimizer, with an initial learning rate of 0.01, γ = 0.9, and ε = 0.1.
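The actor-critic network might look as follows in PyTorch. The text specifies only four convolutional layers, ReLU activations, four stacked input frames, and nine output actions, so the channel counts, kernel sizes, and strides below are illustrative guesses; the RMSProp line mirrors the stated hyperparameters under the assumption that γ and ε refer to the optimizer's smoothing constant and numerical-stability term.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """4-layer convolutional actor-critic; layer sizes are illustrative."""
    def __init__(self, n_actions=9, in_frames=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_frames, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.policy = nn.LazyLinear(n_actions)   # actor head: 9 actions
        self.value = nn.LazyLinear(1)            # critic head

    def forward(self, x):        # x: (batch, 4, H, W) gray-scale frames
        h = self.conv(x)
        return self.policy(h), self.value(h)

# net = ActorCritic()
# optimizer = torch.optim.RMSprop(net.parameters(), lr=0.01, alpha=0.9, eps=0.1)
```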
Evaluation on Real-World Dataset
The real-world driving dataset [6] provides steering-angle annotations per frame. However, the actions performed in the TORCS virtual environment contain only "going left", "going right", and "going straight", plus their combinations with "brake" and "acceleration". We therefore devise the mapping in Table 1 from the steering angle to the action space of our RL agent. With this mapping, we evaluate the accuracy of the action predictions of our model.
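Since Table 1 itself is not reproduced in the text, the threshold below is a hypothetical stand-in; the sketch only illustrates how a continuous steering angle can be binned into the agent's discrete action space (the brake/acceleration component cannot be recovered from the steering angle alone).

```python
def angle_to_action(angle_rad, thresh=0.1):
    """Bin a recorded steering angle into a discrete direction.
    The 0.1 rad threshold and the sign convention are assumptions."""
    if angle_rad > thresh:
        return "go left"
    if angle_rad < -thresh:
        return "go right"
    return "go straight"
```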
Result of Image Translation Process
We use PSPNet to translate virtual images and real-world images into semantic images. Figure 2 shows examples of the results of the translation process on virtual images. We tried two alternative outputs for this stage, either RGB images or gray-scale images; we ran experiments for both, compared the performance, and finally chose the gray-scale image. Figure 8 shows examples of the translation results on real-world driving images. Figure 6 shows the training process using RGB semantic images as the agent's observation, and Figure 7 shows the training process using gray-scale semantic images.
Testing on Real-World Driving Data
We extract 4 consecutive frames from the real-world driving data, parse them into semantic images with PSPNet, and feed these images into our agent to obtain predictions. Figure 5 shows the predictions our agent made for the corresponding inputs.
Testing on all frames of the driving dataset, we finally obtained an accuracy of 36.6%. The comparison with basic reinforcement learning and with a supervised model trained on the dataset is shown in Table 2.
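The frame-by-frame accuracy computation reduces to comparing the agent's predicted action with the mapped ground truth; a compact version, reusing the assumed helpers sketched earlier, is:

```python
def evaluate(agent, frames, angles, segment, window=4):
    """Fraction of frames where the agent's action matches the
    steering-angle-derived label (helpers as sketched above)."""
    correct = total = 0
    for t in range(window, len(frames)):
        obs = [segment(f) for f in frames[t - window:t]]  # 4 semantic frames
        if agent.act(obs) == angle_to_action(angles[t]):
            correct += 1
        total += 1
    return correct / max(total, 1)
```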
Examples of predictions and ground truth are shown in Table 3.
Conclusion and Future Work
Our proposed autonomous driving model transfers a reinforcement learning agent developed in a virtual environment to real-world tasks, using semantic segmentation as the tool to fill the gap between virtual and real. The results reveal, however, that its performance is limited by the quality of the segmentation; as segmentation techniques improve, our model will gain capacity. Future work can combine our proposed model with supervised models on real-world data to achieve better results. | 2018-01-13T06:55:23.000Z | 2018-01-13T00:00:00.000 | {
"year": 2018,
"sha1": "34b33ff86e17a2e10d07e4990c27b3746431e731",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "aa7e7637f3443a823ee799a560ab84103b0e9a7f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
231900669 | pes2o/s2orc | v3-fos-license | A New Instrument to Assess Children’s Understanding of Death: Psychometrical Properties of the EsCoMu Scale in a Sample of Spanish Children
The acquisition of the death concept in children may influence how these children cope with the losses that they will confront throughout their lives. At the present time, there is a lack of psychometric instruments in Spanish-speaking countries in order to evaluate the components of the death concept in children. The aim of this study was to create and validate a scale (EsCoMu-Escala sobre el Concepto de Muerte) in order to provide insight about the concept of death in children. The sample was formed by 358 children from ages 6 to 13 years. The final EsCoMu version has 27 items which serve to evaluate universality, irreversibility, non-functionality and causality. The results of the confirmatory factor analysis show an adequate fit index for the four-dimension model, reliability (α = 0.83) and validity evidence, specifically based on the children's age. In conclusion, EsCoMu is an instrument that shows adequate reliability and validity indices in order to assess the concept of death and its four components among children. Due to its simplicity, this instrument can be very useful if applied to the field of neurodevelopmental disorders.
Introduction
In Western societies, death is often considered a taboo subject. In the case of children, in addition to the social reticence to talk about death, the way the concepts of death and dying are learned is extremely important [1] and may influence how they cope with grief and loss. Previous research seems to indicate that children's grief manifestations are directly associated with the knowledge they have about death [2]. Also, the possibility of talking about death and understanding its meaning may help children overcome mistaken ideas or unnecessary fears that can have an impact on children's emotional life even in adulthood, interfering with the normal elaboration of the bereavement processes that they will have to deal with in the future [3]. Recent reviews also consider the concept of death a core aspect of communicating bad news to children and adolescents [4]. The concept of death is very complex, as it is influenced by variables such as social beliefs, cultural norms, emotions, biological development, cognitions or previous experiences with death [5,6].
Several studies highlight four core components related to the concept of death: universality, irreversibility, non-functionality and causality [7]. Universality implies that death is conceptualized as a natural phenomenon that applies to all living beings. Irreversibility is linked to the understanding that the dead cannot come back to life. Non-functionality includes the acknowledgement that, once a person has died, their bodily functions cease, as well as their internal and external actions. Lastly, causality implies the understanding of the possible internal or external factors which can cause the end of life. Other authors have proposed other dimensions, such as inevitability, personal mortality or unpredictability [8], but most of them agree on the model proposed by Speece and Brent [7].
There are multiple factors that can influence the understanding of the death concept, even though the scientific evidence from previous investigations is contradictory. Age is one of the most consistent variables, as the death concept becomes more defined the older the child is. However, each component (universality, irreversibility, non-functionality and causality) seems to follow a different pattern [9][10][11][12]. The child's previous experience with death or illness has also been positively linked to the understanding of the components of the death concept [9], although other studies did not find any differences [3,10,13]. The cognitive ability of the child has also been considered an important aspect, especially against the background of Jean Piaget's model of cognitive development. During the preoperational stage, there is a predominance of magical thinking and egocentricity, so it is harder for the child to understand the different aspects of the concept of death. However, De la Herrán et al. [14,15] noted that, as early as 3 years old, children are able to distinguish between life and its absence. Moreover, between 3 and 5 years old, children start to be curious about the signs of devitalization and the causes of death [16,17]. In subsequent stages, such as the concrete-operational stage (7-11 years), the child begins to understand logical operations and reversibility of thought [18], and can therefore develop a more mature understanding of death that includes components such as irreversibility, non-functionality and causality [19]. Different studies have also indicated the existence of two complementary approaches to death in children: the biological aspect and the meta-psychological, afterlife or religious conception of death [20]. Both conceptions of death seem to be influenced by culture [21,22].
Keeping in mind the complexity of the construct, it is necessary to have valid and reliable assessment instruments in order to evaluate the concept of death in children. Previous investigations have used open qualitative interviews, as well as other art-related approaches such as drawings, storytelling or play, to investigate the understanding of the death concept [3,9,23]. However, there is a lack of quantitative instruments showing adequate reliability and validity indices for assessing the concept of death; to our knowledge, a Spanish-language quantitative scale that meets such conditions has yet to be created.
One of the classic resources is the Death Concept Questionnaire [24], which is formed by two groups of 13 questions about death in people and animals. The factor analysis displayed four main factors: irreversibility, non-functionality, causality and inevitability of death and old age [9]. The rating of each item depends on the correctness of the reply (ranging from 0 to 3) and the sophistication of the child's explanation, finding an overall Cronbach's alpha of 0.77 [24]. The main limitations of the Death Concept Questionnaire are that each factor is composed of a small number of items and that the responses of the child need to be categorized.
In Spanish, we can find two qualitative instruments. First, there is the interview developed by Viñas et al. [25,26], named "Entrevista Estructurada del Concepto de Muerte-ECM" ("Structured Interview on Death Concept"), which evaluates universality, irreversibility and non-functionality through 11 closed questions with dichotomous answers. Another series of 17 questions is included to assess beliefs related to the afterlife and the child's personal experience with death, as well as three open questions where the child is asked to specify three causes of death (animal, person and own) and a final question about the definition of suicide [26]. In Mexico, Gutiérrez et al. [27] recently developed a qualitative interview in Spanish, with 14 items assessing universality, finality, non-functionality and causality. The main limitation of both instruments is that they do not report evidence of instrument validity, reliability or factorial structure.
In accordance with the above, it is essential to develop a new instrument in the Spanish language which allows us to assess the components of the death concept in children while having proper psychometric features. The main benefit of having an instrument of this kind is that it can be easily completed and quickly distributed among a large number of participants, without having to invest too much time codifying or correcting the results (as in interviews or other qualitative approaches). Also, the scores from the scale can be easily compared between studies and populations. Finally, we need to establish a reliable measure of the death concept among primary school children which will serve as a starting point to assess other populations of children, such as those with intellectual disabilities or other neurodevelopmental disorders.
Due to this, the main aim of this study is to develop and present the psychometric properties (factorial structure, reliability and validity) of a scale which is able to assess the acquisition of the components of the concept of death in primary school children (6-13 years). The factorial structure of the scale will be tested through confirmatory factor analysis, and reliability will be calculated through internal consistency analysis. The main validity evidence was assessed through the comparison of the scale between different age groups. We hypothesized that younger children would have lower scores in the four dimensions of the death concept in comparison with older children. In addition, we wanted to explore differences in the following variables: sex, existence of a previous loss and school setting.
Materials and Methods
The sample comprised 358 primary school students (6-13 years) from five schools in the Spanish provinces of Granada, Jaén and Cádiz (see Table 1).
The mean age of the children was 9.92 years (SD = 1.57). Different schools in the provinces mentioned above were contacted to select the participant sample. The inclusion criteria were: the school's willingness to participate, informed consent signed by the children's parents or legal guardians, and children between 6 and 13 years old. Data were collected on each student's sex, age and school setting. Given the foreseeable differences between children of different ages, they were grouped into four levels (see Table 2).
Instruments
Ad-hoc demographic data questionnaire: Age, sex and school setting (rural/urban/semi-urban) were considered. When the school setting was not fully rural or urban (e.g., towns located less than 10 km from the province capital), it was coded as semi-urban. A question asking whether the children had suffered a recent loss (yes/no) was also added.
Scale to assess the concept of death-EsCoMu (Escala sobre el Concepto de Muerte): This dichotomous scale was developed by four of the authors of this study, who are professionals and researchers specialized in the field of bereavement and end-of-life processes. The dichotomous rating (yes/no) was based on previous instruments and interviews used in the field [25,28]. The items were evaluated by a panel of experts, who established the items' relevance, adequacy and belonging to each of the four dimensions or theoretical components: irreversibility, universality, non-functionality and causality [29]. Each item was rated on a Likert scale ranging from 0 to 100 assessing its relevance (whether the item was significantly relevant for the dimension assessed) and adequacy (whether the item was appropriate for the proposed dimension of the concept of death). Experts could also include qualitative comments about the items. The initial version of the scale consisted of 38 items, of which 10 were eliminated because they showed mean values of less than 70 in one of the dimensions assessed (relevance and/or adequacy) or because experts identified them as difficult for children to understand (in the qualitative part of the survey). An additional item was removed because of its low factor loading in the exploratory analysis.
The definitive EsCoMu version is formed by 27 dichotomous items (yes/no answer), grouped into four dimensions: 6 for universality, 7 for irreversibility, 7 for non-functionality and 7 for causality (see Supplementary Material for the Spanish version of the scale). Each item is scored 1 (if the answer is correct) or 0 (if the answer is incorrect), with some items being reversed. The total score is calculated by adding all items from each component. The global Cronbach's alpha of the scale in this investigation was α = 0.83.
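Scoring the scale is mechanical (reverse the reversed items, then sum within each dimension), and the reported internal consistency follows from the standard Cronbach's alpha formula. The sketch below assumes one 0/1 column per item in a pandas DataFrame; the dimension-to-item mapping and column names are placeholders, not the actual item list.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of dichotomous (0/1) item columns."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def score_escomu(df: pd.DataFrame, dims: dict, reversed_items: set) -> pd.DataFrame:
    """dims maps a dimension name (e.g. 'universality') to its item columns."""
    items = df.copy()
    for col in reversed_items:          # reversed items: 1 becomes 0
        items[col] = 1 - items[col]
    scores = pd.DataFrame({d: items[cols].sum(axis=1) for d, cols in dims.items()})
    scores["total"] = scores.sum(axis=1)
    return scores
```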
Procedure
Two informative meetings were held to inform the management of each school participating in the study. After obtaining each school's permission, and with the help of the Parent Associations (AMPA-Asociación de Madres y Padres de Alumnos), an informative meeting with parents was held to explain the aim of the study. Secondly, each student was given an information sheet and an informed consent form to be signed by their parents. This consent was subsequently returned to the school teacher with each parent's or legal guardian's signature. The students whose parents did not fill in the informed consent performed a different activity unrelated to the study.
The assessment was performed in groups, in the students' regular classroom. In one session, the students filled in the sociodemographic data as well as the EsCoMu scale. The evaluation lasted around 20 min, and all students received similar guidelines. The evaluation was performed by a team of experts experienced in end-of-life processes.
At the beginning of the evaluation, all students were given the option of not participating in the activity if they did not want to, regardless of the parents' consent. Each participant's emotional state was evaluated during and after the assessment in order to provide them with emotional support if needed, but no participant reported issues in this regard.
Ethical Considerations
Prior to the data collection and the inclusion of participants in the study, both school management and parents were informed about the aim, purpose and confidentiality of the study. In every case, the evaluation was performed after obtaining both the school's authorization and the parent-signed informed consent for the children's participation. The present study was approved by the University of Granada's Ethics Committee on Human Research (Ref. 1056/CEIH/2020).
Data Analysis
For the descriptive analysis, the frequency of each answer was calculated for every item. To verify the factor structure, a confirmatory factor analysis (CFA) was performed. Given the dichotomous nature of the data, the WLSMV (weighted least squares means- and variance-adjusted) estimation method was used. The following indices were used to assess the fit of the proposed model: RMSEA (Root Mean Square Error of Approximation), TLI (Tucker-Lewis Index), CFI (Comparative Fit Index) and WRMR (Weighted Root Mean Square Residual).
To gather validity evidence about possible relations between EsCoMu's ratings and other external variables, multivariate analysis of variance (MANOVA) was applied, with age, sex, having suffered a loss (yes/no) and school setting (urban, semi-urban and rural) as variables for each subcomponent of the scale. For post-hoc contrasts, Bonferroni correction and the partial eta-squared effect size were used. The statistical software SPSS 22 (IBM, 2013) was used. For gathering structural validity evidence, MPLUS 6.11 [30] was used.
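As an open-source equivalent of the SPSS MANOVA step (the analysis itself was run in SPSS 22 and MPLUS), the same test could be expressed with statsmodels; the column names are assumptions about how the scored data might be laid out.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

def manova_by_group(df: pd.DataFrame, group: str = "age_group"):
    """MANOVA of the four dimension scores on one grouping variable."""
    model = MANOVA.from_formula(
        "universality + irreversibility + nonfunctionality + causality"
        f" ~ {group}",
        data=df,
    )
    return model.mv_test()   # Wilks' lambda, Pillai's trace, etc.
```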
Descriptive Analysis of EsCoMu's Response
Table 3 shows the percentage of correct answers for each age group on each EsCoMu scale item, grouped by dimension.
EsCoMu Factor Structure
Two first-order models (One-Factor and Four-Factor) were tested. However, fit indices for the one-factor model (see Table 4) were not adequate. As the correlation values were not appropriate for the causality-universality dimensions in the Four-Factor model (with a coefficient greater than 1), a second-order model was tested (second-order Four-Factor model). CFA results showed adequate fit indices for this model (see Table 4 and Figure 1). The dimensions showed medium-high intercorrelation values (see Table 5).
Evidence of Validity
As observed in Table 6, the MANOVA results indicate a significant medium-to-low effect size of age on all EsCoMu dimensions as well as on the total score. Post-hoc analyses (Bonferroni) indicate lower scores in Universality, Irreversibility and Non-functionality in the 6-7-year age group compared with the 8-9-year age group (p < 0.01) and with the 10-11-year age group (p < 0.01). There were no statistically significant differences between the youngest and oldest age groups. However, both in the Causality dimension and in the total score, the youngest children showed statistically significant differences (both p < 0.01) compared with the other groups, among which no such differences occur.
As observed in Tables 7 and 8, there were no differences in any of the instrument's dimensions with respect to sex, nor between the participants who had suffered a recent loss and those who had not. Finally, when analyzing the children's school setting, MANOVA showed lower results in rural areas (Table 9). The effects appear in the four dimensions as well as in the instrument's total score, with medium-to-low effect sizes. The post-hoc analysis results (Bonferroni) indicate differences only between rural and semi-urban settings (p < 0.01) in terms of Universality, Irreversibility and Non-functionality. Again, in this case, Causality and the EsCoMu total score are the dimensions with the most differences between rural settings and the other two groups (p < 0.05).
Discussion
The aim of this study was to develop and present the psychometric properties (factor structure, reliability and validity) of a scale able to assess the acquisition of the components of the concept of death in primary school children (6-13 years). The results show that the scale has adequate reliability and factor structure, with promising validity evidence.
The scale's global alpha shows acceptable values, in line with the values reported to date for the Death Concept Questionnaire (α = 0.77 in the original study and α = 0.81 in the study by Bonoti et al. [9]). Test-retest measures could not be included, so future research should investigate whether the EsCoMu scale maintains its reliability over time, as well as whether it is sensitive to death-education-based interventions [2,14].
The four scale factors showed positive and moderate correlations with each other. Furthermore, the CFA model showed that the four components of the concept are closely related, suggesting that they are part of an underlying construct, in this case the concept of death. This is supported by the results of the second-order CFA, where the concept of death is explained by the four factors, which are in turn explained by their respective items. Following this rationale, previous research has highlighted the relevance of these four components in explaining the concept of death [6,10].
Age is one of the factors that seems to systematically influence the acquisition of the concept of death. In the present study, age-dependent differences were found in the four evaluated dimensions, in line with previous investigations [10,31]. In the dimensions of universality, irreversibility and non-functionality, differences were identified between younger and older children, whereas in causality, statistically significant differences were found between 6-7-year-old children and the rest of the age groups. This seems to indicate that not all dimensions are acquired following the same pattern. However, we do observe that, from the age of 8-9 years, all four subcomponents are well acquired in children [7,9]. In their sample, Gutiérrez et al. [27] found that not all the components of the death concept differed according to age, finding no significant differences in universality and causality based on this variable, but rather in finality and non-functionality. However, the results for these variables were very close to significance (p < 0.08 in both cases).
Regarding the school setting, lower scores were found in rural schools compared to semi-urban and urban schools. Previous studies have shown different results, pointing to greater acquisition in children living in rural environments. Panagiotaki et al. [31] found significant differences in the irreversibility component when comparing three groups of children (British children living in London, British Muslims living in London, and Muslims living in rural areas of Pakistan), the component being higher in the latter group. However, in that study, since the groups were not equivalent on a series of key cultural variables, no conclusive evidence can be drawn regarding the area of origin. Lastly, other studies did not find significant differences in the death concept between urban and rural settings [32].
In the present study, we also did not find significant differences in any of the components of the death concept based on the child's sex or on having suffered a recent loss. These findings are consistent with one prior study [3] but differ from another study that found differences in this variable [9]. Future studies must investigate the influence of the type of loss and the appearance of specific bereavement symptoms, shedding light on the extent to which it is the experience of loss itself, or the intensity of the child's experience, that affects the acquisition of the concept of death [33].
The development of a scale that serves to evaluate the concept of death has important clinical implications. On the one hand, death is not a common topic in school subjects or academic curricula. On the other hand, adults and families are often hesitant about how to respond to the questions that children raise about the concept of death and the dying process. Moreover, in spite of children's curiosity, some adults are unsure about the appropriateness of discussing death with their children [34]. It is therefore common that children cannot find an adequate space to clarify their doubts about what death means. This may prevent them from developing more adaptive coping responses, which can lead to emotional issues. Therefore, it is essential to have valid and reliable instruments that allow us to evaluate the conceptualization of death at different ages and in different contexts, as well as to work in education, as many teachers are currently demanding [35,36].
The EsCoMu scale, due to its fast and easy application, can be widely used in populations with neurodevelopmental disorders or problems. Previous qualitative studies have shown that, in cases involving diagnosed intellectual disabilities or neurodevelopmental conditions, the acquisition of the death concept seems to follow a different pattern. Markell and Hoover [37] highlight how even learning problems or physical and emotional issues in children can affect the bereavement process and the understanding of death. Children diagnosed with intellectual disability (ID) have shown confusion and difficulty understanding the concepts of non-functionality, irreversibility (associating death with illness) and universality [37]. Finally, recent studies in adults diagnosed with ID show that they do acquire the components of the death concept, but in a different and, in many cases, partial way [38,39], showing greater difficulty understanding concepts such as causality and universality [40]. Future studies using the EsCoMu scale may explore the concept of death in these populations, as well as the effectiveness of interventions related to death education and of supporting methods for bereavement and end-of-life processes in this population [41].
However, this research has a series of limitations. Firstly, the child age groups were not equivalent, the 10-11-year-old group being the largest. Moreover, future research needs to examine the usefulness of EsCoMu among populations under 6 years of age, as well as to include test-retest measures to evaluate the temporal reliability of the scale. We did not perform a pilot study or cognitive interviews prior to the initial assessment, so it may be useful to perform pilot testing when applying this scale to children younger than 6 years; however, the assessed population did not have any problem understanding the items. No measures of anxiety or depression were taken in the children who completed the scale, so it would be necessary in the future to control such variables and check their effect on and connection to the EsCoMu score. Future studies should verify whether the acquisition of the components of the death concept correlates with the most common themes regarding death, such as biological, psychological or metaphysical death [3,9]. In the present study, age was considered the main evidence of validity, but there are other variables associated with the concept of death, such as religious, cognitive or socioeconomic aspects, that should be included in future studies to obtain measures of convergent validity for the EsCoMu scale. Finally, the use of mixed-methods designs to explore the relationship between the acquisition of the components of the concept of death and the subjective experience of the child [27] will provide additional information about the validity of the EsCoMu scale.
Conclusions
In conclusion, the EsCoMu scale is an instrument with an adequate factor structure and adequate reliability and validity indices for assessing the concept of death and its four components (universality, irreversibility, non-functionality and causality) among children.
Figure 1.
Figure 1. Second-order four-factor EsCoMu model. Latent variables are represented by ellipses and measured variables are represented by rectangles. Values are standardized estimated correlations between factors.
Table 2 .
Sociodemographic data sample divided by age group.
Table 3 .
Descriptive analysis of the EsCoMu scale items for each age group.
Table 4 .
Fit indices for EsCoMu factor models.
Table 6 .
MANOVA results for school setting with EsCoMu dimensions and total scores.
Table 7 .
MANOVA results for gender with EsCoMu dimensions and total scores.
Table 8 .
MANOVA results for recent loss with EsCoMu dimensions and total scores.
Table 9 .
MANOVA results for school setting with EsCoMu dimensions scores. | 2021-02-13T06:16:37.217Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "bfa8630f4b43a0b1f33ff40c06755e978d0569ed",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9067/8/2/125/pdf?version=1614236518",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "22a6ea902c6a1dd39e8bb04b47c661e03e9c892b",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
90227697 | pes2o/s2orc | v3-fos-license | Sexuality of Aruncus aethusifolius ( Rosaceae )
Aruncus L. is a small genus which is distributed in the circumpolar regions of the Northern Hemisphere, including Europe, Asia, and North America. In Korea, there are two taxa, A. dioicus var. kamtschaticus and A. aethusifolius. Aruncus aethusifolius is a species endemic to Korea, occurring only on Jeju Island. An important characteristic of Aruncus is dioecy. However, there is some controversy regarding the sexuality of this genus, and little is known about A. aethusifolius. To characterize the sexuality of A. aethusifolius and to provide insight into the evolution of its sexual system, we investigated natural populations and herbarium specimens of A. aethusifolius. The results indicate that the species has carpellate, staminate, and bisexual flowers. Carpellate flowers are always borne on separate individuals, and staminate flowers are borne either on separate individuals or mixed with bisexual flowers on the same individuals. The sexuality of A. aethusifolius is defined as polygamo-dioecious. The polygamo-dioecious type is also found in A. dioicus var. dioicus and A. dioicus var. kamtschaticus and thus may be a general condition in Aruncus.
In Korea, there are two taxa. Aruncus dioicus var. kamtschaticus is distributed at higher elevations of mountains on the Korean Peninsula and on Ulleung Island, but it has not been reported on Jeju Island. This taxon is widely distributed in East Asia, from western China through Korea to Japan, with a wide range of morphological variation (Ikeda, 2001; Gu and Alexander, 2003; Lee, 2003; Lee, 2007). Aruncus aethusifolius is an endemic species of Korea, occurring only on Jeju Island. It is usually found in moist places near, or in crevices of, volcanic rocks at elevations ranging from 1,500 to 1,800 m as part of the subalpine vegetation on Mt. Halla, and it occurs as well on moist banks along creeks of the mountain at elevations as low as 400 m. Aruncus aethusifolius is occasionally treated as a variety of A. dioicus (Hara, 1955; Lee, 2003), but it is considered a distinct species due to several unique morphological characteristics, such as a short plant stature, very short internodes in the stems, highly dissected leaflets, and erect follicles (Nakai, 1912; Lee, 1980; Lee, 2006; Lee, 2007).
One of the key characters of Aruncus is the sexuality of the species: staminate and carpellate flowers develop on different individuals in a population, making it dioecious. Dioecious species are very rare among the approximately 90 genera and 3,000 species in Rosaceae, occurring only in eight genera and approximately nine species (Table 1). Given that the vast majority of species in Rosaceae are hermaphroditic, and considering the phylogeny of Rosaceae (Potter et al., 2007a; Xiang et al., 2016), dioecy in Rosaceae was likely derived independently in each of the species from hermaphroditic ancestors. Dioecy is also rare in angiosperms, found in about 6% of flowering plants (Renner and Ricklefs, 1995), in contrast to the animal system, in which gonochorism with separate males and females is found in 95% of all species (Jarne and Auld, 2006).
The rarity of dioecism in angiosperms has drawn much attention to the evolution and development of separate genders (Darwin, 1877; Westergaard, 1958; Irish and Nelson, 1989; Renner and Ricklefs, 1995; Sakai and Weller, 1999; Charlesworth, 2015; Käfer et al., 2017; Pannell, 2017; Zuluvova et al., 2017). Several characteristics, such as woodiness, wind pollination, and large inflorescences with small white to green flowers, are associated with the evolution of dioecy. With regard to geographic distribution, dioecious angiosperms are more common in the tropics and on oceanic islands, including Hawaii (Renner and Ricklefs, 1995; Sakai and Weller, 1999; Käfer et al., 2017). Aruncus provides an intriguing system with which to study the evolution of dioecy, as it deviates from these general trends in the evolution of the sexual system. However, there is confusion regarding the sexuality of Aruncus among authors. While most authors (Radford et al., 1964; Ohwi, 1965; Gleason and Cronquist, 1991; Ikeda, 2001; Lee, 2007; Mellichamp and Wetherwax, 2012; Mellichamp, 2014) have considered Aruncus to be dioecious, Gu and Alexander (2003) found that Aruncus is monoecious, with staminate and carpellate flowers borne on the same plant. Tutin (1968) recognized that Aruncus is polygamo-dioecious, in which bisexual (or perfect) flowers develop on some of the male or female plants in a population. Bond (1962) and Robertson (1974) showed that bisexual flowers are occasionally found in staminate inflorescences in A. dioicus var. dioicus in eastern North America, consistent with Tutin (1968). The occurrence of bisexual flowers is unknown in the other varieties of A. dioicus and in A. aethusifolius.
The objectives of this study are to characterize the sexuality of A. aethusifolius based on field observations and investigations of herbarium specimens, and to provide insights into the evolution of the sexual system of this species.
Materials and Methods
Field observations of A. aethusifolius were made on Mt. Hallasan on Jeju Island. Fifty-three herbarium specimens of A. aethusifolius and 190 specimens of A. dioicus var. kamtschaticus were examined in KB (herbarium acronyms follow Thiers, 2017), TI, TUT, and the Warm-Temperate and Subtropical Forest Research Center. A list of the specimens of Aruncus aethusifolius examined in this study is provided in Appendix 1.
The sexuality of A. aethusifolius in a flower and a plant was determined under a microscope. A Nikon SMZ18 dissecting stereomicroscope (Nikon, Tokyo, Japan) was used to examine and photograph the flowers. Each specimen was scored as a male or female plant. The proportion of bisexual flowers in a plant was approximated by the length of the inflorescence axis with bisexual flowers divided by the total length of the inflorescence axis. Flowers of A. aethusifolius are very small and borne very tightly along the inflorescence axis. The proportion of bisexual flowers in each gender class was plotted.

Table 1. Genera or species with unisexual flowers in Rosaceae. Most genera with unisexual flowers are monotypic, except for Spiraea, Rubus, Fragaria, and Sanguisorba. In the cases of Spiraea, Fragaria, and Rubus, most species produce bisexual flowers. The tribal and supertribal classifications follow Potter et al. (2007b). Data were obtained from Hutchinson (1964), Tutin (1968), Gu et al. (2003), and Kalkman (2004).
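The scoring just described amounts to a simple rule per specimen; a minimal sketch, with hypothetical field names for the measurements, is:

```python
def classify_specimen(bisexual_axis_len, total_axis_len, has_carpellate):
    """Gender class plus the bisexual-flower proportion approximated by
    inflorescence-axis lengths, as described above (sketch)."""
    if has_carpellate:
        return "female", 0.0
    proportion = bisexual_axis_len / total_axis_len
    gender = "andro-polygamous" if proportion > 0 else "male"
    return gender, proportion
```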
Results and Discussion
Our examination of A. aethusifolius indicated that the species has complex patterns of sexuality (Fig. 1). Three types of flowers were found: carpellate (Fig. 1B), staminate (Fig. 1D), and bisexual (Fig. 1F) flowers. The flowers have four basic whorls: sepals, petals, stamens, and carpels. In the carpellate flowers, the stamens were not fully developed, with very short filaments and minute anthers (Fig. 1B). In the staminate flowers, rudimentary carpels were located at the center of the flower (Fig. 1D). The bisexual flowers had both stamens and carpels that were fully developed (Fig. 1F). Nectaries, which attract pollinators, developed on the inner side of the hypanthium in all three types of flowers (Fig. 1). The flowers of A. aethusifolius are borne in panicles of racemes with 2−9 lateral branches. Carpellate flowers are always borne on separate individuals, making them female (gynoecious) plants (Fig. 1A). There are male (androecious) plants bearing only staminate flowers (Fig. 1C). In some male plants, bisexual flowers also developed alongside staminate flowers in an inflorescence (Fig. 1E); bisexual flowers are located at the lower part of a branch or on the lateral branches of the inflorescence. These andro-polygamous plants bear a mix of staminate and bisexual flowers. The proportion of bisexual flowers in andro-polygamous plants based on herbarium specimens varied, ranging from 22 to 79% (Fig. 2). Thus, populations of A. aethusifolius consist of female, male, and andro-polygamous plants. Of the 53 specimens examined, 14 were female, 23 were male, and 16 were andro-polygamous (Fig. 2). As a result, the sexuality of A. aethusifolius is defined as polygamo-dioecious.
The polygamo-dioecious condition may be general in Aruncus. An examination of 190 specimens of A. dioicus var. kamtschaticus collected in Korea, China, and Japan showed that one specimen from Mt. Hwangbyeongsan in Gangwon Province (Kwon 080727-080, KB278766) was andro-polygamous. The presence of andro-polygamous plants in A. dioicus var. kamtschaticus, albeit rare, is consistent with our observations of A. aethusifolius. Thus, our investigations of the sexuality of A. aethusifolius and A. dioicus var. kamtschaticus support previous studies of A. dioicus var. dioicus (Bond, 1962; Tutin, 1968; Robertson, 1974).
The occurrence of andro-polygamous plants in a dioecious species is occasionally reported in Rosaceae. For example, flowers of Oemleria cerasiformis (Torr. & A. Gray ex Hook. & Arn.) J. W. Landon are usually unisexual and develop on separate plants; in rare cases, however, bisexual flowers are also found (Hess, 2014), though their frequency is largely unknown.
The frequency of andro-polygamous plants varies across species in Aruncus. The occurrence of andro-polygamous plants is very rare in A. dioicus var. kamtschaticus, as described above. It is difficult to quantify the frequency of andro-polygamous plants in the eastern North American A. dioicus var. dioicus; Robertson (1974) stated only that they were 'occasionally' found. However, the frequency of andro-polygamous plants in A. aethusifolius is much higher than that in A. dioicus: of the 53 specimens examined, 16 (30%) were andro-polygamous (Fig. 2). The gender frequency based on herbarium specimens may be biased, as female and andro-polygamous plants could be over-represented: female and andro-polygamous plants bearing flowers or fruits may preferentially be prepared as specimens, while male plants may have been neglected once the flowers fell off. Our field study supports the under-representation of male plants in herbarium specimens. Among nearly 60 individuals in nine clumps of A. aethusifolius along the Youngsil trail of Mt. Hallasan, male plants were dominant. Plants of A. aethusifolius showed a patchy distribution pattern in the population: one clump consisted of two female plants (Fig. 1A), one clump contained three male and four andro-polygamous plants (Fig. 1E), and the remaining clumps included male plants. Our field observation also shows that the frequency of andro-polygamous plants in A. aethusifolius is higher than that in A. dioicus.
Why does A. aethusifolius show a high frequency of andro-polygamous plants? It may be that the fitness of bisexual flowers, the population size, the genetic mechanism of sex determination, or a combination of these factors is associated with the frequent development of bisexual flowers. The occurrence of the same pattern of sexuality in all taxa of Aruncus, including A. dioicus and A. aethusifolius, suggests that the polygamo-dioecious condition is genetically determined. Given that its sister group, Holodiscus, and most other members of Spiraeeae are hermaphroditic (Table 1), the gender polymorphism and underlying genetic network in A. aethusifolius should have evolved in the most recent common ancestor of Aruncus. It is interesting to note that seeds produced
The fitness of the andro-polygamous plants in A. aethusifolius may be greater than that of andro-polygamous plants in A. dioicus. Bisexual flowers are borne on part of the male inflorescence, as described above (Fig. 2). Thus, the proportion of seeds from andro-polygamous plants in a population would decrease as the size of the inflorescence increases. The panicles of female plants of A. dioicus are very large (11−27.8 cm in length [mean = 17.5 cm]), producing numerous follicles. In contrast, the inflorescences of female plants of A. aethusifolius are small, ranging from 7.7 to 12.5 cm in length (mean = 9.4 cm). This suggests that the magnitude of the difference in the number of seeds produced between carpellate and bisexual flowers is smaller in A. aethusifolius than in A. dioicus. This would result in a higher proportion of seeds from bisexual flowers in the seed pool or bank of a population of A. aethusifolius. Further study of the gender specificity of these seeds is therefore necessary.
The genetic mechanisms that determine gender in plants are diverse (Pannell, 2017), because dioecy evolved multiple times from hermaphroditism. A genetic network for sex determination controlled by environmental signals was shown in Ceratopteris richardii Brongn. (Tanurdzic and Banks, 2004). Chromosomal sex determination has been widely studied in several angiosperms, such as Fragaria virginiana Mill. (Spigler et al., 2008), Sagittaria latifolia Willd. (Dorken and Barrett, 2004), and Silene latifolia Poir. (Fujita et al., 2012). In monoecious plants such as melons and maize, the sexual identity of flowers as either male or female is determined by different genetic networks that respond to hormones (Irish, 1999; Boualem et al., 2015). These case studies suggest that there is no general model of gender determination in plants. It would be interesting and important to study the sex determination mechanism in A. aethusifolius.
Fig. 2.
Fig. 2. Plot of the proportion of bisexual flowers. A. Male and andro-polygamous plants. B. Female plants. Each circle represents one individual. For andro-polygamous plants, each plant was ranked according to the proportion of bisexual flowers and plotted. | 2019-04-02T13:11:24.075Z | 2017-09-29T00:00:00.000 | {
"year": 2017,
"sha1": "f4c243feaebfe1961c2ce7ec3dacba7c51472c36",
"oa_license": "CCBYNC",
"oa_url": "http://www.e-kjpt.org/upload/pdf/kjpt-47-3-189.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "f4c243feaebfe1961c2ce7ec3dacba7c51472c36",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
84782 | pes2o/s2orc | v3-fos-license | The Standard Model in 2001
The Standard Model of electroweak and strong interactions is reviewed in a pedagogical set of lectures. After an introduction to the quarks, leptons, and gauge bosons, an elementary discussion of gauge theories is given, with application to quantum chromodynamics. The physics of $W$ bosons, electroweak unification, and $Z$ bosons is then described, ending with a discussion of precision electroweak experiments and the light they can shed on the Higgs boson and other physics.
Introduction
The "Standard Model" of elementary particle physics encompasses the progress that has been made in the past half-century in understanding the weak, electromagnetic, and strong interactions. The name was apparently bestowed by my Ph. D. thesis advisor, Sam B. Treiman, whose dedication to particle physics kindled the light for so many of his students during those times of experimental and theoretical discoveries. These lectures are dedicated to his memory.
As graduate students at Princeton in the 1960s, my colleagues and I had no idea of the tremendous strides that would be made in bringing quantum field theory to bear upon such a wide variety of phenomena. At the time, its only domain of useful application seemed to be in the quantum electrodynamics (QED) of photons, electrons, and muons.
Our arsenal of techniques for understanding the strong interactions included analyticity, unitarity, and crossing symmetry (principles still of great use), and the emerging SU(3) and SU(6) symmetries. The quark model (Gell-Mann 1964, Zweig 1964 was just beginning to emerge, and its successes at times seemed mysterious. The ensuing decade gave us a theory of the strong interactions, quantum chromodynamics (QCD), based on the exchange of self-interacting vector quanta. QCD has permitted quantitative calculations of a wide range of hitherto intractable properties of the hadrons (Lev Okun's name for the strongly interacting particles), and has been validated by the discovery of its force-carrier, the gluon.
In the 1960s the weak interactions were represented by a phenomenological (and unrenormalizable) four-fermion theory which was of no use for higher-order calculations. Attempts to describe weak interactions in terms of heavy boson exchange eventually bore fruit when they were unified with electromagnetism and a suitable mechanism for generation of heavy boson mass was found. This electroweak theory has been spectacularly successful, leading to the prediction and observation of the W and Z bosons and to precision tests which have confirmed the applicability of the theory to higher-order calculations. In this introductory section we shall assemble the ingredients of the standard model -the quarks and leptons and their interactions. We shall discuss both the theory of the strong interactions, quantum chromodynamics (QCD), and the unified theory of weak and electromagnetic interactions based on the gauge group SU(2) ⊗ U(1). Since QCD is an unbroken gauge theory, we shall discuss it first, in the general context of gauge theories in Section 2. We then discuss the theory of charge-changing weak interactions (Section 3) and its unification with electromagnetism (Section 4). The unsolved part of the puzzle, the Higgs boson, is treated in Section 5, while Section 6 concludes.
These lectures are based in part on courses that I have taught at the University of Minnesota and the University of Chicago, as well as at summer schools (e.g., Rosner 1988, 1997). They owe a significant debt to the fine book by Quigg (1983).
Quarks and leptons
The fundamental building blocks of strongly interacting particles, the quarks, and the fundamental fermions lacking strong interactions, the leptons, are summarized in Table 1. Masses are as quoted by the Particle Data Group (2000). These are illustrated, along with their interactions, in Figure 1. The relative strengths of the charged-current weak transitions between the quarks are summarized in Table 2.
The quark masses quoted in Table 1 are those which emerge when quarks are probed at distances short compared with 1 fm, the characteristic size of strongly interacting particles and the scale at which QCD becomes too strong to utilize perturbation theory. When regarded as constituents of strongly interacting particles, however, the u and d quarks act as quasi-particles with masses of about 0.3 GeV. The corresponding "constituent-quark" masses of s, c, and b are about 0.5, 1.5, and 4.9 GeV, respectively.
Color and quantum chromodynamics
The quarks are distinguished from the leptons by possessing a three-fold charge known as "color" which enables them to interact strongly with one another. (A gauged color symmetry was first proposed by Nambu 1966.) We shall also speak of quark and lepton "flavor" when distinguishing the particles in Table 1 from one another. The experimental evidence for color comes from several quarters.

1. Statistics of the Δ⁺⁺. The Δ⁺⁺ resonance, a uuu state with spin 3/2 in a spatially symmetric ground state, is symmetric under space, spin, and flavor interchanges, hence totally symmetric under their product, in apparent conflict with Fermi statistics. Color introduces an additional degree of freedom under which the interchange of two quarks can produce a minus sign, through the representation Δ⁺⁺ ∼ ε_abc u^a u^b u^c. The totally antisymmetric product of three color triplets is a color singlet.
2. Electron-positron annihilation to hadrons. The sum of the squared charges of all quarks which can be produced in pairs below a given center-of-mass energy is measured by the ratio

R ≡ σ(e+e− → hadrons)/σ(e+e− → μ+μ−) = N_c Σ_i Q_i² .

For energies at which only uū, dd̄, and ss̄ can be produced, i.e., below the charmed-pair threshold of about 3.7 GeV, one expects

R = N_c [(2/3)² + (1/3)² + (1/3)²] = (2/3) N_c

for N_c "colors" of quarks. Measurements first performed at the Frascati laboratory in Italy and most recently at the Beijing Electron-Positron Collider (Bai et al. 2001; see Fig. 2) indicate R = 2 in this energy range (with a small positive correction associated with the strong interactions of the quarks), indicating N_c = 3.
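The counting is easily checked numerically. The short Python sketch below (an illustration, with the round-number thresholds quoted above as assumed inputs) tabulates R for the set of quark flavors open at a given energy:

    from fractions import Fraction

    # Quark charges in units of |e|
    charges = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3),
               "c": Fraction(2, 3), "b": Fraction(-1, 3)}

    def R(flavors, n_c=3):
        # R = N_c * sum of squared charges of accessible quarks
        return n_c * sum(charges[f] ** 2 for f in flavors)

    print(R(["u", "d", "s"]))             # 2     (below the charm threshold)
    print(R(["u", "d", "s", "c"]))        # 10/3  (above ~3.7 GeV)
    print(R(["u", "d", "s", "c", "b"]))   # 11/3  (above the b-pair threshold)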
3. Neutral pion decay. The π⁰ decay rate is governed by a quark loop diagram in which two photons are radiated by the quarks in π⁰ = (uū − dd̄)/√2. The predicted rate is

Γ(π⁰ → γγ) = (α²/32π³)(m_π³/f_π²) S² ,

where f_π = 131 MeV and S = N_c(Q_u² − Q_d²) = N_c/3. The experimental rate is 7.8 ± 0.6 eV, while Eq. (3) gives 7.6 S² eV, in accord with experiment if S = 1 and N_c = 3.

4. Triality. Quark composites appear only in multiples of three. Baryons are composed of qqq, while mesons are qq̄ (with total quark number zero). This is compatible with our current understanding of QCD, in which only color-singlet states can appear in the spectrum. Thus, mesons M and baryons B are represented by

M = (1/√3) q^a q̄_a ,  B = (1/√6) ε_{abc} q^a q^b q^c .

Direct evidence for the quanta of QCD, the gluons, was first presented in 1979 on the basis of extra "jets" of particles produced in electron-positron annihilations to hadrons. Normally one sees two clusters of energy associated with the fragmentation of each quark in e+e− → qq̄ into hadrons. However, in some fraction of events an extra jet was seen, corresponding to the radiation of a gluon by one of the quarks.
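The π⁰ rate quoted above can be verified with a few lines of arithmetic. The sketch below uses f_π = 131 MeV as in the text; all inputs are the round numbers quoted, so the output is illustrative only:

    import math

    alpha = 1 / 137.036       # fine-structure constant
    m_pi = 134.98e6           # pi0 mass in eV
    f_pi = 131.0e6            # pion decay constant in eV, as quoted above

    for N_c in (1, 3):
        S = N_c / 3           # S = N_c (Q_u^2 - Q_d^2)
        gamma = alpha**2 * m_pi**3 * S**2 / (32 * math.pi**3 * f_pi**2)
        print(N_c, gamma)     # ~0.85 eV for N_c = 1; ~7.7 eV for N_c = 3,
                              # matching the measured 7.8 +- 0.6 eV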
The transformations which take one color of quark into another are those of the group SU(3). We shall often refer to this group as SU(3) color to distinguish it from the SU(3) flavor associated with the quarks u, d, and s.
Electroweak unification
The electromagnetic interaction is described in terms of photon exchange, for which the Born approximation leads to a matrix element behaving as 1/q². Here q is the four-momentum transfer, and q² is its invariant square. The quantum electrodynamics of photons and charged pointlike particles (such as electrons) initially encountered calculational problems in the form of divergent quantities, but these had been tamed by the late 1940s through the procedure known as renormalization, leading to successful estimates of such quantities as the anomalous magnetic moment of the electron and the Lamb shift in hydrogen.
By contrast, the weak interactions as formulated up to the mid-1960s involved the pointlike interactions of two currents, with an interaction Hamiltonian H_W = G_F J^μ J_μ†/√2, with G_F = 1.16637(1) × 10⁻⁵ GeV⁻² the current value for the Fermi coupling constant. This interaction is very singular and cannot be renormalized. The weak currents J^μ in this theory were purely charge-changing. As a result of work by Lee and Yang, Feynman and Gell-Mann, and Marshak and Sudarshan in 1956-7 they were identified as having (vector)−(axial vector), or "V − A," form.
Hideki Yukawa (1935) and Oskar Klein (1938) proposed a boson-exchange model for the charge-changing weak interactions. Klein's model attempted a unification with electromagnetism and was based on a local isotopic gauge symmetry, thus anticipating the theory of Yang and Mills (1954). Julian Schwinger and others studied such models in the 1950s, but Glashow (1961) was the first to realize that a new neutral heavy boson had to be introduced as well in order to successfully unify the weak and electromagnetic interactions. The breaking of the electroweak symmetry (Weinberg 1967, Salam 1968) via the Higgs (1964) mechanism converted this phenomenological theory into one which could be used for higher-order calculations, as was shown by 't Hooft and Veltman in the early 1970s.
The boson-exchange model for charge-changing interactions replaces the Fermi interaction constant with a coupling constant g at each vertex and the low-q 2 limit of a propagator, 1/(M 2 W − q 2 ) → 1/M 2 W , with factors of 2 chosen so that G F / √ 2 = g 2 /8M 2 W . The q 2 term in the propagator helps the theory to be more convergent, but it is not the only ingredient needed, as we shall see.
The normalization of the charge-changing weak currents J^μ suggested well in advance of electroweak unification that one regard the corresponding integrals of their time components (the so-called weak charges) as members of an SU(2) algebra (Gell-Mann and Lévy 1960, Cabibbo 1963). However, the identification of the neutral member of this multiplet as the electric charge was problematic. In the V − A theory the W's couple only to left-handed fermions ψ_L ≡ (1 − γ_5)ψ/2, while the photon couples to ψ_L + ψ_R, where ψ_R ≡ (1 + γ_5)ψ/2. Furthermore, the high-energy behavior of the νν̄ → W⁺W⁻ scattering amplitude based on charged lepton exchange leads to unacceptable divergences if we incorporate it into the one-loop contribution to νν̄ → νν̄ (Quigg 1983).
A simple solution was to add a neutral boson Z coupling to W + W − and νν in such a way as to cancel the leading high-energy behavior of the charged-lepton-exchange diagram. This relation between couplings occurs naturally in a theory based on the gauge group SU(2) ⊗ U(1). The Z leads to neutral current interactions, in which (for example) an incident neutrino scatters inelastically on a hadronic target without changing its charge. The discovery of neutral-current interactions of neutrinos and many other manifestations of the Z proved to be striking confirmations of the new theory.
If one identifies the W⁺ and W⁻ with raising and lowering operations in an SU(2), so that W± = (W¹ ∓ iW²)/√2, then left-handed fermions may be assigned to doublets of this "weak isospin," with I_{3L}(u, c, t) = I_{3L}(ν_e, ν_μ, ν_τ) = +1/2, I_{3L}(d, s, b) = I_{3L}(e⁻, μ⁻, τ⁻) = −1/2. All the right-handed fermions have I_L = I_{3L} = 0. As mentioned, one cannot simply identify the photon with W³, which also couples only to left-handed fermions. Instead, one must introduce another boson B associated with a U(1) gauge group. It will mix with the W³ to form physical states consisting of the massless photon A and the massive neutral boson Z:

A = B cos θ + W³ sin θ ,  Z = −B sin θ + W³ cos θ .

The mixing angle θ appears in many electroweak processes. It has been measured to sufficiently great precision that one must specify the renormalization scheme in which it is quoted. For present purposes we shall merely note that sin²θ ≃ 0.23. The corresponding SU(2) and U(1) coupling constants g and g′ are related to the electric charge e by e = g sin θ = g′ cos θ, so that

1/e² = 1/g² + 1/g′² .

The electroweak theory successfully predicted the masses of the W± and Z,

M_W ≃ 38.6 GeV/sin θ ≃ 80.5 GeV ,  M_Z = M_W/cos θ ≃ 91.2 GeV ,

where we show the approximate experimental values. The detailed check of these predictions has reached the precision that one can begin to look into the deeper structure of the theory. A key ingredient in this structure is the Higgs boson, the price that had to be paid for the breaking of the electroweak symmetry.
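The coupling relation 1/e² = 1/g² + 1/g′² is trivially checked numerically; the following sketch assumes the illustrative inputs sin²θ = 0.23 and α = 1/137.036:

    import math

    alpha = 1 / 137.036
    e = math.sqrt(4 * math.pi * alpha)
    s2 = 0.23
    s, c = math.sqrt(s2), math.sqrt(1 - s2)
    g, gp = e / s, e / c                     # from e = g sin(theta) = g' cos(theta)
    print(1 / e**2, 1 / g**2 + 1 / gp**2)    # the two numbers agree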
Higgs boson
An unbroken SU(2) ⊗ U(1) theory involving the photon would require all fields to have zero mass, whereas the W± and Z are massive. The symmetry-breaking which generates W and Z masses must not destroy the renormalizability of the theory. However, a massive vector boson propagator is of the form

D_{μν}(q) = (−g_{μν} + q_μ q_ν/M²)/(q² − M²) ,

where M is the boson mass. The terms q_μ q_ν, when appearing in loop diagrams, will destroy the renormalizability of the theory. They are associated with longitudinal vector boson polarizations, which are only present for massive bosons. For massless bosons like the photon, there are only transverse polarization states J_z = ±J.
The Higgs mechanism, to be discussed in detail later in these lectures, provides the degrees of freedom needed to add a longitudinal polarization state for each of W⁺, W⁻, and Z. In the simplest model, this is achieved by introducing a doublet of complex Higgs fields:

φ = (φ⁺, φ⁰) .

Here the charged Higgs fields φ± provide the longitudinal component of W±, and the linear combination (φ⁰ − φ̄⁰)/i√2 provides the longitudinal component of the Z. The additional degree of freedom (φ⁰ + φ̄⁰)/√2 corresponds to a physical particle, the Higgs particle, which is the subject of intense searches.
Discovering the nature of the Higgs boson is a key to further progress in understanding what may lie beyond the Standard Model. There may exist one Higgs boson or more than one. There may exist other particles in the spectrum related to it. The Higgs boson may be elementary or composite. If composite, it points to a new level of substructure of the elementary particles. Much of our discussion will lead up to strategies for the next few years designed to address these questions. First, we introduce the necessary topic of gauge theories, which have been the platform for all the developments of the past thirty years.
Abelian gauge theories
The Lagrangian describing a free fermion of mass m is L_free = ψ̄(i∂̸ − m)ψ. It is invariant under the global phase change ψ → exp(iα)ψ. (We shall always consider the fermion fields to depend on x.) Now consider independent phase changes at each point:

ψ(x) → ψ′(x) = e^{iα(x)} ψ(x) .

Because of the derivative, the Lagrangian then acquires an additional phase change at each point: δL_free = ψ̄ iγ^μ [i∂_μ α(x)] ψ. The free Lagrangian is not invariant under such changes of phase, known as local gauge transformations.
Local gauge invariance can be restored if we make the replacement ∂_μ → D_μ ≡ ∂_μ + ieA_μ in the free-fermion Lagrangian, which now is

L = ψ̄(iD̸ − m)ψ = ψ̄(i∂̸ − m)ψ − e ψ̄ γ^μ A_μ ψ .

The effect of a local phase in ψ can be compensated if we allow the vector potential A_μ to change by a total divergence, which does not change the electromagnetic field strength

F_{μν} = ∂_μ A_ν − ∂_ν A_μ

(defined as in Peskin and Schroeder 1995; Quigg 1983 uses the opposite sign). Indeed, under the transformation ψ → ψ′ = e^{iα(x)}ψ, and with A → A′ with A′ yet to be determined, we have

L → L′ = L − ψ̄ γ^μ [∂_μ α(x)] ψ − e ψ̄ γ^μ (A′_μ − A_μ) ψ .

This will be the same as L if

A′_μ = A_μ − (1/e) ∂_μ α(x) .

The derivative D_μ is known as the covariant derivative. One can check that under a local gauge transformation, D_μψ → e^{iα(x)} D_μψ.
Another way to see the consequences of local gauge invariance suggested by Yang (1974) and discussed by Peskin and Schroeder (1995, pp 482-486) is to define −eA_μ(x) as the local change in phase undergone by a particle of charge e as it passes along an infinitesimal space-time increment between x_μ and x_μ + dx_μ. For a space-time trip from point A to point B, the phase change is then

Φ_AB = −e ∫_A^B A_μ(x) dx^μ .

The phase in general will depend on the path in space-time taken from point A to point B. As a consequence, the phase Φ_AB is not uniquely defined. However, one can compare the result of a space-time trip along one path, leading to a phase Φ^(1)_AB, with that along another, leading to a phase Φ^(2)_AB. The two-slit experiment in quantum mechanics involves such a comparison; so does the Bohm-Aharonov effect in which a particle beam traveling past a solenoid on one side interferes with a beam traveling on the other side. Thus, phase differences

Φ^(1)_AB − Φ^(2)_AB = Φ_C = −e ∮_C A_μ(x) dx^μ ,

associated with closed paths in space-time (represented by the circle around the integral sign), are the ones which correspond to physical experiments. The phase Φ_C for a closed path C is independent of the phase convention for a charged particle at any space-time point x_0, since any change in the contribution to Φ_C from the integral up to x_0 will be compensated by an equal and opposite contribution from the integral departing from x_0.
The closed path integral (16) can be expressed as a surface integral using Stokes' theorem:

Φ_C = −e ∮_C A_μ dx^μ = −(e/2) ∫ F_{μν} dσ^{μν} ,

where the electromagnetic field strength F_{μν} was defined previously and dσ^{μν} is an element of surface area. It is also clear that the closed path integral is invariant under changes (14) of A_μ(x) by a total divergence. Thus F_{μν} suffices to describe all physical experiments as long as one integrates over a suitable domain. In the Bohm-Aharonov effect, in which a charged particle passes on either side of a solenoid, the surface integral will include the solenoid (in which the magnetic field is non-zero).
If one wishes to describe the energy and momentum of free electromagnetic fields, one must include a kinetic term L_K = −(1/4) F_{μν} F^{μν} in the Lagrangian, which now reads

L = ψ̄(iD̸ − m)ψ − (1/4) F_{μν} F^{μν} .

If the electromagnetic current is defined as J^{em}_μ ≡ ψ̄ γ_μ ψ, this Lagrangian leads to Maxwell's equations.
The local phase changes (10) form a U(1) group of transformations. Since such transformations commute with one another, the group is said to be Abelian. Electrodynamics, just constructed here, is an example of an Abelian gauge theory.
Non-Abelian gauge theories
One can imagine that a particle traveling in space-time undergoes not only phase changes, but also changes of identity. Such transformations were first considered by Yang and Mills (1954). For example, a quark can change in color (red to blue) or flavor (u to d). In that case we replace the coefficient eA_μ of the infinitesimal displacement dx^μ by an n × n matrix −gA_μ(x) ≡ −gA^i_μ(x) T_i acting in the n-dimensional space of the particle's degrees of freedom. (The sign change follows the convention of Peskin and Schroeder 1995.) For colors, n = 3. The T_i form a linearly independent basis set of matrices for such transformations, while the A^i_μ are their coefficients. The phase transformation then must take account of the fact that the matrices A_μ(x) in general do not commute with one another for different space-time points, so that a path-ordering is needed:

Φ_AB = P exp [ ig ∫_A^B A_μ(x) dx^μ ] .

When the basis matrices T_i do not commute with one another, the theory is non-Abelian.
We demand that changes in phase or identity conserve probability, i.e., that Φ AB be unitary: Φ † AB Φ AB = 1. When Φ AB is a matrix, the corresponding matrices A µ (x) in (19) must be Hermitian. If we wish to separate out pure phase changes, in which A µ (x) is a multiple of the unit matrix, from the remaining transformations, one may consider only transformations such that det(Φ AB ) = 1, corresponding to traceless A µ (x).
The n × n basis matrices T_i must then be Hermitian and traceless. There will be n² − 1 of them, corresponding to the number of independent SU(N) generators. (One can generalize this approach to other invariance groups.) The matrices will satisfy the commutation relations

[T_i, T_j] = i c_{ijk} T_k ,

where the c_{ijk} are structure constants characterizing the group. For SU(2), c_{ijk} = ε_{ijk} (the Levi-Civita symbol), while for SU(3), c_{ijk} = f_{ijk}, where the f_{ijk} are defined in Gell-Mann and Ne'eman (1964). A 3 × 3 representation in SU(3) is T_i = λ_i/2, where the λ_i are the Gell-Mann matrices normalized such that Tr λ_i λ_j = 2δ_{ij}. For this representation, then, Tr T_i T_j = δ_{ij}/2.
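The normalizations quoted here are easy to verify by explicit matrix arithmetic. The sketch below builds the eight Gell-Mann matrices and checks both Tr T_i T_j = δ_{ij}/2 and the fundamental-representation Casimir Σ_i T_i T_i = (4/3) × identity (a value derived later in these notes):

    import numpy as np

    # The eight Gell-Mann matrices
    lam = np.zeros((8, 3, 3), dtype=complex)
    lam[0][0, 1] = lam[0][1, 0] = 1
    lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
    lam[2][0, 0] = 1;   lam[2][1, 1] = -1
    lam[3][0, 2] = lam[3][2, 0] = 1
    lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
    lam[5][1, 2] = lam[5][2, 1] = 1
    lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
    lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)

    T = lam / 2
    for i in range(8):
        for j in range(8):
            tr = np.trace(T[i] @ T[j]).real
            assert abs(tr - (0.5 if i == j else 0.0)) < 1e-12

    C2 = sum(T[i] @ T[i] for i in range(8))
    print(np.allclose(C2, (4 / 3) * np.eye(3)))   # True: C2(fund) = 4/3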
In order to define the field-strength tensor F_{μν} = F^i_{μν} T_i for a non-Abelian transformation, we may consider an infinitesimal closed-path transformation analogous to Eq. (16) for the case in which the matrices A_μ(x) do not commute with one another. The result (see, e.g., Peskin and Schroeder 1995, pp 486-491) is

F_{μν} = ∂_μ A_ν − ∂_ν A_μ − ig [A_μ, A_ν] .

An alternative way to introduce non-Abelian gauge fields is to demand that, by analogy with Eq. (10), a theory involving fermions ψ be invariant under local transformations

ψ(x) → ψ′(x) = U(x) ψ(x) ,

where for simplicity we consider unitary transformations. Under this replacement,

L = ψ̄(i∂̸ − m)ψ → L′ = L + ψ̄ U^{−1} iγ^μ [∂_μ U] ψ .

As in the Abelian case, an extra term is generated by the local transformation. It can be compensated by replacing ∂_μ by

D_μ ≡ ∂_μ − igA_μ .

In this case L = ψ̄(iD̸ − m)ψ and under the change (22) we find

L → L′ = L + ψ̄ U^{−1} iγ^μ [∂_μ U] ψ + g ψ̄ [U^{−1} A′_μ U − A_μ] γ^μ ψ .

This is equal to L if we take

A′_μ = U A_μ U^{−1} − (i/g) [∂_μ U] U^{−1} .

This reduces to our previous expressions if g = −e and U = e^{iα(x)}.
The covariant derivative acting on ψ transforms in the same way as ψ itself under a gauge transformation: D_μψ → D′_μψ′ = U D_μψ. The field strength may be computed via [D_μ, D_ν] = −igF_{μν}; both sides transform as X → U X U^{−1} under a local gauge transformation.
In order to obtain propagating gauge fields, as in electrodynamics, one must add a kinetic term L_K = −(1/4) F^i_{μν} F^{iμν} to the Lagrangian. Recalling the representation F_{μν} = F^i_{μν} T_i in terms of gauge group generators normalized such that Tr(T_i T_j) = δ_{ij}/2, we can write the full Yang-Mills Lagrangian for gauge fields interacting with matter fields as

L = −(1/2) Tr(F_{μν} F^{μν}) + ψ̄(iD̸ − m)ψ .

We shall use Lagrangians of this type to derive the strong, weak, and electromagnetic interactions of the "Standard Model." The interaction of a gauge field with fermions then corresponds to a term in the interaction Lagrangian ΔL = g ψ̄(x) γ^μ A_μ(x) ψ(x). The [A_μ, A_ν] term in F_{μν} leads to self-interactions of non-Abelian gauge fields, arising solely from the kinetic term. Thus, one has three- and four-field vertices arising from

ΔL^{(3)} ∼ g Tr[(∂_μ A_ν − ∂_ν A_μ)[A^μ, A^ν]] ,  ΔL^{(4)} ∼ g² Tr([A_μ, A_ν][A^μ, A^ν]) .

These self-interactions are an important aspect of non-Abelian gauge theories and are responsible in particular for the remarkable asymptotic freedom of QCD which leads to its becoming weaker at short distances, permitting the application of perturbation theory.
Elementary divergent quantities
In most quantum field theories, including quantum electrodynamics, divergences occurring in higher orders of perturbation theory must be removed using charge, mass, and wave function renormalization. This is conventionally done at intermediate calculational stages by introducing a cutoff momentum scale Λ or analytically continuing the number of space-time dimensions away from four. Thus, a vacuum polarization graph in QED associated with external photon momentum k and a fermion loop will involve an integral of the schematic form

Π_{μν}(k) ∼ ∫ d⁴p Tr[γ_μ (p̸ − m)^{−1} γ_ν (p̸ + k̸ − m)^{−1}] ,

a self-energy of a fermion with external momentum p will involve

Σ(p) ∼ ∫ d⁴k γ^μ (p̸ − k̸ − m)^{−1} γ_μ / k² ,

and a fermion-photon vertex function with external fermion momenta p, p′ will involve

Γ_μ(p, p′) ∼ ∫ d⁴k γ^ν (p̸′ − k̸ − m)^{−1} γ_μ (p̸ − k̸ − m)^{−1} γ_ν / k² .

The integral (29) appears to be quadratically divergent. However, the gauge invariance of the theory translates into the requirement k^μ Π_{μν} = 0, which requires Π_{μν} to have the form Π_{μν}(k) = (k² g_{μν} − k_μ k_ν) Π(k²) .
The corresponding integral for Π(k²) then will be only logarithmically divergent. The integral in (30) is superficially linearly divergent but in fact its divergence is only logarithmic, as is the integral in (31). Unrenormalized functions describing vertices and self-energies involving n_B external boson lines and n_F external fermion lines may be defined in terms of a momentum cutoff Λ and a bare coupling constant g_0 (Coleman 1971, Ellis 1977, Ross 1978):

Γ_U = Γ_U^{(n_B, n_F)}(p_i; g_0, Λ) ,

where p_i denote external momenta. Renormalized functions Γ_R may be defined in terms of a scale parameter μ, a renormalized coupling constant g = g(g_0, Λ/μ), and renormalization constants Z_B(Λ) and Z_F(Λ) for the external boson and fermion wave functions:

Γ_R^{(n_B, n_F)}(p_i; g, μ) = Z_B^{n_B/2} Z_F^{n_F/2} Γ_U^{(n_B, n_F)}(p_i; g_0, Λ) .

The scale μ is typically utilized by demanding that Γ_R be equal to some predetermined function at a Euclidean momentum p² = −μ². Thus, for the one-boson, two-fermion vertex, we take

Γ_R^{(1,2)}(p_i; g, μ)|_{p² = −μ²} = g .

The unrenormalized function Γ_U is independent of μ, while Γ_R and the renormalization constants Z_B(Λ), Z_F(Λ) will depend on μ. For example, in QED, the photon wave function renormalization constant (known as Z_3) behaves as

Z_3 = 1 − (e_0²/12π²) ln(Λ²/μ²) .

The bare charge e_0 and renormalized charge e are related by e = e_0 Z_3^{1/2}. To lowest order in perturbation theory, e < e_0. The vacuum behaves as a normal dielectric; charge is screened. It is the exception rather than the rule that in QED one can define the renormalized charge for q² = 0; in QCD we shall see that this is not possible.
Scale changes and the beta function
We differentiate both sides of (34) with respect to μ and multiply by μ. Since the functions Γ_U are independent of μ, we find

[ μ ∂/∂μ + β(g) ∂/∂g + n_B γ_B(g) + n_F γ_F(g) ] Γ_R^{(n_B, n_F)}(p_i; g, μ) = 0 ,

where

β(g) ≡ μ ∂g/∂μ ,  γ_B(g) ≡ (μ/2) ∂ ln Z_B/∂μ ,  γ_F(g) ≡ (μ/2) ∂ ln Z_F/∂μ .

The behavior of any generalized vertex function Γ_R under a change of scale μ is then governed by the universal functions (39).
It may happen that β(g) < 0 for small positive g in a specific theory. In that case ḡ = 0 is an ultraviolet fixed point, and the theory is said to be asymptotically free. We shall see that this property is particular to non-Abelian gauge theories (Gross and Wilczek 1973, Politzer 1974).
Beta function calculation
In quantum electrodynamics a loop diagram involving a fermion of unit charge contributes the following expression to the relation between the bare charge e_0 and the renormalized charge e:

e² = e_0² [1 − (α_0/3π) ln(Λ²/μ²) + ...] ,

as implied by (35) and (36), where α_0 ≡ e_0²/4π. We find

β(e) = e³/12π² + O(e⁵) ,

where differences between e_0 and e correspond to higher-order terms in e. (Here α ≡ e²/4π.) Thus β(e) > 0 for small e and the coupling constant becomes stronger at larger momentum scales (shorter distances).
We shall show an extremely simple way to calculate (42) and the corresponding result for a charged scalar particle in a loop. From this we shall be able to first calculate the effect of a charged vector particle in a loop (a calculation first performed by Khriplovich 1969) and then generalize the result to Yang-Mills fields. The method follows that of Hughes (1980). When one takes account of vacuum polarization, the electromagnetic interaction in momentum space may be written

(e²/q²) [1 + Π(q²) + Π(q²)² + ...] = e²/[q² (1 − Π(q²))] .

Here the long-distance (q² → 0) behavior has been defined such that e is the charge measured at macroscopic distances, so Π(0) = 0. Following Sakurai (1967), we shall reconstruct Π_i(q²) for a loop involving the fermion species i from its imaginary part, which is measurable through the cross section for e⁺e⁻ → iī:

Im Π_i(s) = (α/3) R_i(s) ,

where s is the square of the center-of-mass energy. For fermions f of charge e_f and mass m_f,

R_f(s) = e_f² (1 + 2m_f²/s)(1 − 4m_f²/s)^{1/2} ,

while for scalar particles of charge e_s and mass m_s,

R_s(s) = (e_s²/4)(1 − 4m_s²/s)^{3/2} .

The corresponding cross section for e⁺e⁻ → μ⁺μ⁻, neglecting the muon mass, is σ(e⁺e⁻ → μ⁺μ⁻) = 4πα²/3s, so one can define

R_i(s) ≡ σ(e⁺e⁻ → iī)/σ(e⁺e⁻ → μ⁺μ⁻) ,

in terms of which Im Π_i(s) = αR_i(s)/3. For s → ∞ one has R_f(s) → e_f² for a fermion and R_s(s) → e_s²/4 for a scalar. The full vacuum polarization function Π_i(s) cannot directly be reconstructed in terms of its imaginary part via the dispersion relation

Π_i(q²) = (1/π) ∫ ds Im Π_i(s)/(s − q²) ,

since the integral is logarithmically divergent. This divergence is exactly that encountered earlier in the discussion of renormalization. For quantum electrodynamics we could deal with it by defining the charge at q² = 0 and hence taking Π_i(0) = 0. The once-subtracted dispersion relation for Π_i(s) − Π_i(0) would then converge:

Π_i(q²) − Π_i(0) = (q²/π) ∫ ds Im Π_i(s)/[s(s − q²)] .

However, in order to be able to consider cases such as Yang-Mills fields in which the theory is not well-behaved at q² = 0, let us instead define Π_i(−μ²) = 0 at some spacelike scale q² = −μ². The dispersion relation is then

Π_i(q²) = [(q² + μ²)/π] ∫ ds Im Π_i(s)/[(s − q²)(s + μ²)] .

For −q², μ² much greater than thresholds, Im Π_i is effectively constant and

Π_i(q²) ≃ (α/3π) R_i ln(|q²|/μ²) ,

and so, from (43), the "charge at scale q" may be written as

e²(q²) ≃ e²(μ²) [1 + (α/3π) R_i ln(|q²|/μ²) + ...] .

The beta-function here is defined by β(e) = μ(∂e/∂μ)|_{fixed e_0}. Thus, expressing β(e) = −β_0 e³/(16π²) + O(e⁵), one finds β_0 = −(4/3)e_f² for spin-1/2 fermions and β_0 = −(1/3)e_s² for scalars.
These results will now be used to find the value of β_0 for a single charged massless vector field. We generalize the results for spin 0 and 1/2 to higher spins by splitting contributions to vacuum polarization into "convective" and "magnetic" ones. Furthermore, we take into account the fact that a closed fermion loop corresponds to an extra minus sign in Π_f(s) (which is already included in our result for spin 1/2). The "magnetic" contribution of a particle with spin projection S_z must be proportional to S_z². For a massless spin-S particle, S_z² = S². We may then write

β_0 = (−1)^{n_F} (a S² − 2b) ,

where n_F = 1 for a fermion, 0 for a boson. The factor of 2b comes from the contribution of each polarization state (S_z = ±S) to the convective term; the scalar, with its single state, contributes −b. Matching the results for spins 0 and 1/2, we find b = 1/3 and a = 8, and hence for S = 1

β_0 = 8 − 2/3 = 22/3 .

The magnetic contribution is by far the dominant one (by a factor of 12), and is of opposite sign to the convective one. A similar separation of contributions, though with different interpretations, was obtained in the original calculation of Khriplovich (1969). The reversal of sign with respect to the scalar and spin-1/2 results is notable.
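Equivalently, each polarization state may be taken to contribute (2S_z)² − 1/3, with the overall fermion-loop sign (−1)^{2S}. The sketch below assumes this per-state rule (it is a paraphrase of the decomposition above, not a derivation) and reproduces the three quoted values of β_0:

    def beta0(S, states):
        # states: list of S_z values of the physical polarization states
        sign = (-1) ** int(2 * S)              # fermion-loop minus sign
        return sign * sum((2 * sz) ** 2 - 1 / 3 for sz in states)

    print(beta0(0, [0]))              # -1/3   (charged scalar)
    print(beta0(0.5, [0.5, -0.5]))    # -4/3   (charged spin-1/2 fermion)
    print(beta0(1, [1, -1]))          # 22/3   (charged massless vector)

The spin-1 entry shows the magnetic term (8) overwhelming the convective one (2/3) by the factor of 12 noted above.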
Group-theoretic techniques
The result (55) for a charged, massless vector field interacting with the photon is also the value of β 0 for the Yang-Mills group SO(3) ∼ SU(2) if we identify the photon with A 3 µ and the charged vector particles with A ± µ ≡ (A 1 µ ∓ iA 2 µ )/ √ 2. We now generalize it to the contribution of gauge fields in an arbitrary group G.
The value of β_0 for gauge fields depends on a sum over all possible self-interacting gauge fields that can contribute to the loop with external gauge field labels i and m:

Σ_{j,k} c^G_{ijk} c^G_{mjk} ,

where c^G_{ijk} is the structure constant for G, introduced in Eq. (20). The sums in (56) are proportional to δ_{im}:

Σ_{j,k} c^G_{ijk} c^G_{mjk} = C_2(A) δ_{im} .

The quantity C_2(A) is the quadratic Casimir operator for the adjoint representation of the group G.
The contributions of arbitrary scalars and spin-1/2 fermions in representations R are proportional to T(R), where

Tr (T_i T_j) = T(R) δ_{ij}

for matrices T_i in the representation R. For a single charged scalar particle (e.g., a pion) or fermion (e.g., an electron), T(R) = 1. Thus β_0|_{spin 0} = −(1/3)T_0(R), while β_0|_{spin 1/2} = −(4/3)T_{1/2}(R), where the subscript on T(R) denotes the spin. Summarizing the contributions of gauge bosons, spin-1/2 fermions, and scalars, we find

β_0 = (11/3) C_2(A) − (4/3) Σ_f T_{1/2}(R_f) − (1/3) Σ_s T_0(R_s) .

One often needs the beta-function to higher orders, notably in QCD where the perturbative expansion coefficient is not particularly small. It is

β(g) = −(β_0/16π²) g³ − [β_1/(16π²)²] g⁵ + ... ,

where the result for gauge bosons and spin-1/2 fermions (Caswell 1974) is

β_1 = (34/3) [C_2(A)]² − Σ_f T_{1/2}(R_f) [ (20/3) C_2(A) + 4 C_2(R_f) ] .

The first term involves loops exclusively of gauge bosons. The second involves single-gauge-boson loops with a fermion loop on one of the gauge boson lines. The third involves fermion loops with a fermion self-energy due to a gauge boson. The quantity C_2(R) is defined by

Σ_i (T_i T_i)_{αβ} = C_2(R) δ_{αβ} ,

where α and β are indices in the fermion representation.
We now illustrate the calculation of C 2 (A), T (R), and C 2 (R) for SU(N). More general techniques are given by Slansky (1981).
Any SU(N) group contains an SU(2) subgroup, which we may take to be generated by T_1, T_2, and T_3. The isospin projection I_3 may be identified with T_3. Then the I_3 value carried by each generator T_i (written for convenience in the fundamental N-dimensional representation) may be identified: the generators divide into one triplet with I_3 = (+1, 0, −1), 2(N − 2) doublets with I_3 = ±1/2, and singlets with I_3 = 0. Since C_2(A) may be calculated for any convenient value of the index i = m in (57), we choose i = m = 3. Then

C_2(A) = Σ_{adjoint} I_3² = 2(1)² + 4(N − 2)(1/2)² = N .

As an example, the octet (adjoint) representation of SU(3) has two members with |I_3| = 1 (e.g., the charged pions) and four with |I_3| = 1/2 (e.g., the kaons).
For members of the fundamental representation of SU(N), there will be one member with I_3 = +1/2, another with I_3 = −1/2, and all the rest with I_3 = 0. Then again choosing i = m = 3 in Eq. (58), we find T(R)|_{fundamental} = 1/2. The SU(N) result for β_0 in the presence of n_f spin-1/2 fermions and n_s scalars in the fundamental representation then may be written

β_0 = (11/3) N − (2/3) n_f − (1/6) n_s .

The quantity C_2(R) in (63) is most easily calculated by averaging over all indices α = β. If all generators T_i are normalized in the same way, one may calculate the result for an individual generator (say, T_3) and then multiply by the number of generators [N² − 1 for SU(N)]. For the fundamental representation one then finds

C_2(R)|_{fundamental} = (N² − 1) (1/N) Tr(T_3 T_3) = (N² − 1) (1/N)(1/4 + 1/4) = (N² − 1)/2N .
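The β_0 formula makes the QCD counting immediate; the sketch below tabulates it for SU(3) and shows where asymptotic freedom would be lost (the flavor counts are illustrative inputs):

    from fractions import Fraction

    def beta0_SUN(N, n_f=0, n_s=0):
        # beta_0 = (11/3) N - (2/3) n_f - (1/6) n_s for fundamental matter
        return Fraction(11, 3) * N - Fraction(2, 3) * n_f - Fraction(1, 6) * n_s

    print(beta0_SUN(3, n_f=6))    # 7: QCD with all six flavors stays asymptotically free
    print(beta0_SUN(3, n_f=17))   # -1/3: with 17 flavors the sign flips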
The running coupling constant
One may integrate Eq. (60) to obtain the coupling constant as a function of momentum scale M and a scale-setting parameter Λ. In terms of ᾱ ≡ ḡ²/4π, one has

1/ᾱ(M²) = (β_0/4π) ln(M²/Λ²) + (two-loop corrections) .

For large t′ ≡ ln(M/Λ) the result can be written as

ᾱ(M²) = [4π/β_0 L] [1 − (β_1/β_0²)(ln L)/L + ...] ,  L ≡ ln(M²/Λ²) .

Suppose a process involves p powers of ᾱ to leading order and a correction of order ᾱ^{p+1}:

Γ = A ᾱ^p (1 + B ᾱ + ...) .

The coefficient B thus depends on the scale parameter used to define ᾱ.
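At leading order the running is a one-liner. The sketch below extracts a one-loop Λ from the illustrative input ᾱ_S(M_Z²) = 0.118 with n_f = 5 flavors (two-loop fits give a larger Λ, so the number is indicative only):

    import math

    beta0 = 11 - 2 * 5 / 3          # SU(3), n_f = 5
    M_Z = 91.19                     # GeV
    Lam = M_Z * math.exp(-2 * math.pi / (beta0 * 0.118))
    print(Lam)                      # ~0.09 GeV at one loop

    def alpha_s(M):
        return 4 * math.pi / (beta0 * math.log(M**2 / Lam**2))

    print(alpha_s(1.78), alpha_s(80.45))   # grows toward low scales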
Many prescriptions have been adopted for defining Λ. In one ('t Hooft 1973), the "minimal subtraction" or MS scheme, ultraviolet logarithmic divergences are parametrized by continuing the space-time dimension d = 4 to d = 4 − ε and subtracting the pole terms ∫ d^{4−ε}p/p⁴ ∼ 1/ε. In another (Bardeen et al. 1978), the "modified minimal subtraction" or MS-bar scheme, the finite pieces −γ_E + ln 4π which always accompany the pole in dimensionally regularized integrals are subtracted along with it.
Applications to quantum chromodynamics
A "golden application" of the running coupling constant to QCD is the effect of gluon radiation on the value of R in e + e − annihilations. Since R is related to the imaginary part of the photon vacuum polarization function Π(s) which we have calculated for fermions and scalar particles, one calculates the effects of gluon radiation by calculating the correction to Π(s) due to internal gluon lines. The leading-order result for color-triplet quarks is R(s) → R(s)[1 +ᾱ(s)/π]. There are many values of s at which one can measure such effects. For example, at the mass of the Z, the partial decay rate of the Z to hadrons involves the same correction, and leads to the estimateᾱ S (M 2 Z ) = 0.118 ± 0.002. The dependence ofᾱ S (M 2 ) satisfying this constraint on M 2 is shown in Figure 3. As we shall see in Section 5.1, the electromagnetic coupling constant also runs, but much more slowly, with α −1 changing from 137.036 at q 2 = 0 to about 129 at q 2 = M 2 Z . A system which illustrates both perturbative and non-perturbative aspects of QCD is the bound state of a heavy quark and a heavy antiquark, known as quarkonium (in analogy with positronium, the bound state of a positron and an electron). We show in Figures 4 and 5 the spectrum of the cc and bb bound states (Rosner 1997). The charmonium (cc) system was an early laboratory of QCD (Appelquist and Politzer 1975).
The S-wave (L = 0) levels have total angular momentum J, parity P, and charge-conjugation eigenvalue C equal to J^{PC} = 0^{−+} and 1^{−−} as one would expect for ¹S₀ and ³S₁ states, respectively, of a quark and antiquark. The P-wave (L = 1) levels have J^{PC} = 1^{+−} for the ¹P₁, 0^{++} for the ³P₀, 1^{++} for the ³P₁, and 2^{++} for the ³P₂. The J^{PC} = 1^{−−} levels are identified as such by their copious production through single virtual photons in e⁺e⁻ annihilations. The 0^{−+} level η_c is produced via single-photon emission from the J/ψ (so its C is positive) and has been directly measured to have J^P compatible with 0⁻. Numerous studies have been made of the electromagnetic (electric dipole) transitions between the S-wave and P-wave levels and they, too, support the assignments shown. The bb̄ and cc̄ levels have a very similar structure, aside from an overall shift. The similarity of the cc̄ and bb̄ spectra is in fact an accident of the fact that for the interquark distances in question (roughly 0.2 to 1 fm), the interquark potential interpolates between short-distance Coulomb-like and long-distance linear behavior. The Coulomb-like behavior is what one would expect from single-gluon exchange, while the linear behavior is a particular feature of non-perturbative QCD which follows from Gauss' law if chromoelectric flux lines are confined to a fixed area between two widely separated sources (Nambu 1974). It has been explicitly demonstrated by putting QCD on a space-time lattice, which permits it to be solved numerically in the non-perturbative regime.
States consisting of a single charmed quark and light (u, d, or s) quarks or antiquarks are shown in Figure 6. Finally, the pattern of states containing a single b quark (Figure 7) is very similar to that for singly-charmed states, though not as well fleshed-out. In many cases the splittings between states containing a single b quark are less than those between the corresponding charmed states by roughly a factor of m_c/m_b ≃ 1/3 as a result of the smaller chromomagnetic moment of the b quark. Pioneering work in understanding the spectra of such states using QCD was done by De Rújula et al. (1975), building on earlier observations on light-quark systems by Zel'dovich and Sakharov (1966), Dalitz (1967), and Lipkin (1973).
W bosons

Fermi theory of weak interactions
The effective four-fermion Hamiltonian for the V − A theory of the weak interactions is, schematically,

H_W = (4G_F/√2) (ψ̄_{1L} γ^μ ψ_{2L}) (ψ̄_{3L} γ_μ ψ_{4L}) + h.c. ,

where G_F and ψ_L were defined in Section 1.3. We wish to write instead a Lagrangian for interaction of particles with charged W bosons which reproduces (71) when taken to second order at low momentum transfer. We shall anticipate a result of Section 4 by introducing the W through an SU(2) symmetry, in the form of a gauge coupling.
In the kinetic term in the Lagrangian for fermions, the ∂̸ term does not mix ψ_L and ψ_R, so in the absence of the ψ̄ψ mass term one would have the freedom to introduce different covariant derivatives D̸ acting on left-handed and right-handed fermions. We shall find that the same mechanism which allows us to give masses to the W and Z while keeping the photon massless will permit the generation of fermion masses even though ψ_L and ψ_R will transform differently under our gauge group. We follow the conventions of Peskin and Schroeder (1995, p 700 ff).
We now let the left-handed spinors be doublets of an SU(2), such as

(ν_e, e⁻)_L , (ν_μ, μ⁻)_L , (ν_τ, τ⁻)_L .

(We will postpone the question of neutrino mixing until the last Section.) The W is introduced via the replacement

∂_μ → ∂_μ − (ig/2) τ · W_μ ,

where τ_i are the Pauli matrices and W^i_μ are a triplet of massive vector mesons. Here we will be concerned only with the W±, defined by

W_μ^± ≡ (W_μ¹ ∓ iW_μ²)/√2 ;

W_μ^+ annihilates a W⁺ and creates a W⁻, while W_μ^− annihilates a W⁻ and creates a W⁺. Then

(1/2) τ · W_μ ⊃ (1/√2)(τ⁺ W_μ⁺ + τ⁻ W_μ⁻) ,  τ^± ≡ (τ¹ ± iτ²)/2 .

The interaction arising from (72) for a lepton l = e, μ, τ is then

L_int = (g/√2) [ ν̄_{lL} γ^μ l_L W_μ⁺ + l̄_L γ^μ ν_{lL} W_μ⁻ ] ,

where we temporarily neglect the W³_μ terms. Taking this interaction to second order and replacing the W propagator (M_W² − q²)^{−1} by its q² = 0 value, we find an effective interaction of the form (71), with

G_F/√2 = g²/8M_W² .
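The relation G_F/√2 = g²/8M_W² can be inverted to extract g once M_W is known; the sketch below uses the measured values quoted later in these notes as illustrative inputs:

    import math

    G_F = 1.16637e-5    # GeV^-2
    M_W = 80.45         # GeV
    g = math.sqrt(8 * M_W**2 * G_F / math.sqrt(2))
    print(g)                        # ~0.65
    print(g**2 / (4 * math.pi))     # "alpha_weak" ~ 1/29: the weak interactions
                                    # are feeble because M_W is large, not
                                    # because the coupling is small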
Charged-current quark interactions
The left-handed quark doublets may be written

(u, d′)_L , (c, s′)_L , (t, b′)_L ,

where d′, s′, and b′ are related to the mass eigenstates d, s, b by a unitary transformation

(d′, s′, b′)ᵀ = V (d, s, b)ᵀ .

The rationale for the unitary matrix V of Kobayashi and Maskawa (1973) will be reviewed in the next Section when we discuss the origin of fermion masses in the electroweak theory. The interaction Lagrangian for W's with quarks then is

L_int = (g/√2) [ ū_{iL} γ^μ V_{ij} d_{jL} W_μ⁺ + h.c. ] .

A convenient parametrization of V (conventionally known as the Cabibbo-Kobayashi-Maskawa matrix, or CKM matrix) suggested by Wolfenstein (1983) is

V ≃ [ 1 − λ²/2          λ          Aλ³(ρ − iη) ]
    [ −λ                1 − λ²/2   Aλ²         ]
    [ Aλ³(1 − ρ − iη)   −Aλ²       1           ]

Experimentally λ ≃ 0.22 and A ≃ 0.85. Present constraints on the parameters ρ and η are shown in Figure 8. The solid circles denote limits on |V_ub/V_cb| = 0.090 ± 0.025 from charmless b decays. The dashed arcs are associated with limits on V_td from B⁰-B̄⁰ mixing.
The present lower limit on B_s-B̄_s mixing leads to a lower bound on |V_ts/V_td| and the dot-dashed arc. The dotted hyperbolae arise from limits on CP-violating K⁰-K̄⁰ mixing. The phases in the CKM matrix associated with η ≠ 0 lead to CP violation in neutral kaon decays (Christenson et al. 1964) and, as recently discovered, in neutral B meson decays (Aubert et al. 2001a, Abe et al. 2001). These last results lead to a constraint on sin(2β), shown by the two rays. The small dashed lines represent 1σ limits derived by Gronau and Rosner (2002) (see also Luo and Rosner 2001) on the basis of CP asymmetry data of Aubert et al. (2001b) for B⁰ → π⁺π⁻. Our range of parameters (confined by 1σ limits) is 0.10 ≤ ρ ≤ 0.32, 0.33 ≤ η ≤ 0.43. Similar plots are presented in several other lectures at this Summer School (see, e.g., Buchalla 2001, Nir 2001, Schubert 2001, Stone 2001), which may be consulted for further details, and an ongoing analysis of CKM parameters by Höcker et al. (2001) is now incorporating several other pieces of data.
Decays of the τ lepton
The τ lepton (Perl et al. 1975) provides a good example of "standard model" charged-current physics. The τ⁻ decays to a ν_τ and a virtual W⁻ which can then materialize into any kinematically allowed final state: e⁻ν̄_e, μ⁻ν̄_μ, or three colors of ūd′, where, in accord with (81), d′ ≃ 0.975d + 0.22s.
Neglecting strong interaction corrections and final fermion masses, the rate for τ decay is expected to be

Γ_τ ≃ 5 Γ(τ⁻ → ν_τ e⁻ν̄_e) = 5 G_F² m_τ⁵/192π³ ,

corresponding to a lifetime of τ_τ ≃ 3 × 10⁻¹³ s as observed. The factor of 5 = 1 + 1 + 3 corresponds to equal rates into e⁻ν̄_e, μ⁻ν̄_μ, and each of the three colors of ūd′. The branching ratios are then predicted to be

B(τ⁻ → ν_τ e⁻ν̄_e) = B(τ⁻ → ν_τ μ⁻ν̄_μ) ≃ 20% ,  B(τ⁻ → ν_τ + hadrons) ≃ 60% .

Measured values for the purely leptonic branching ratios are slightly under 18%, as a result of the enhancement of the hadronic channels by a QCD correction whose leading-order behavior is 1 + α_S/π, the same as for R in e⁺e⁻ annihilation. The τ decay is thus further evidence for the existence of three colors of quarks.
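The lifetime estimate follows by scaling the muon lifetime by (m_μ/m_τ)⁵ and dividing by the five open channels; the sketch below does this, and also applies the leading QCD correction to the leptonic branching ratio (the value α_S(m_τ²) ≃ 0.35 is an illustrative input, not a fitted one):

    import math

    tau_mu, m_mu, m_tau = 2.197e-6, 0.10566, 1.777   # seconds, GeV, GeV
    tau_tau = tau_mu * (m_mu / m_tau) ** 5 / 5
    print(tau_tau)                                   # ~3e-13 s, as quoted

    # Leptonic branching ratio with hadronic channels enhanced by (1 + alpha_s/pi)
    print(1 / (2 + 3 * (1 + 0.35 / math.pi)))        # ~0.19 vs 0.20 uncorrected;
                                                     # experiment is just under 0.18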
W decays
We shall calculate the rate for the process W → ff̄′ and then generalize the result to obtain the total W decay rate. The interaction Lagrangian (76) implies that the covariant matrix element for the process W → ff̄′ is

M = (g/2√2) ε_μ^{(λ)} ū_f γ^μ (1 − γ_5) v_{f′} .

Here λ describes the polarization state of the W. The partial width is

Γ(W → ff̄′) = (1/2M_W) (p*/4πM_W) (1/3) Σ_{pol} |M|² ,

where (2M_W)^{−1} is the initial-state normalization, 1/3 corresponds to an average of W polarizations, the sum is over both W and lepton polarizations, and p* is the final center-of-mass (c.m.) 3-momentum. We use the identity

Σ_λ ε_μ^{(λ)} ε_ν^{(λ)*} = −g_{μν} + k_μ k_ν/M_W²

for sums over W polarization states. The result is that for any process W → ff̄′,

Γ(W → ff̄′) = (g² p*/8πM_W²) (E E′ + p*²/3) ,

where m is the mass of f and m′ is the mass of f′. Recalling the relation between G_F and g², this may be written in the simpler form

Γ(W → ff̄′) = (G_F M_W³/6π√2) Φ_{ff′} ,  Φ_{ff′} ≡ (6p*/M_W³)(E E′ + p*²/3) .

Here E = (p*² + m²)^{1/2} and E′ = (p*² + m′²)^{1/2} are the c.m. energies of f and f′.
The factor Φ f f ′ reduces to 1 as m, m ′ → 0.
The present experimental average for the W mass (Kim 2001) is M_W = 80.451 ± 0.033 GeV. Using this value, we predict Γ(W → e⁻ν̄_e) = 227.8 ± 2.3 MeV. The widths to various channels are expected to be in the ratios

e⁻ν̄_e : μ⁻ν̄_μ : τ⁻ν̄_τ : ūd′ : c̄s′ = 1 : 1 : 1 : 3(1 + α_S/π + ...) : 3(1 + α_S/π + ...) ,

so α_S(M_W²) = 0.120 ± 0.002 leads to the prediction Γ_tot(W) = 2.10 ± 0.02 GeV. This is to be compared with a value (Drees 2001) obtained at LEP II by direct reconstruction of W's: Γ_tot(W) = 2.150 ± 0.091 GeV. Higher-order electroweak corrections, to be discussed in Section 5, are not expected to play a major role here. This agreement means, among other things, that we are not missing a significant channel to which the charged weak current can couple below the mass of the W.
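Both numbers follow directly from the formulas above; a quick check (inputs are the values just quoted):

    import math

    G_F, M_W, alpha_s = 1.16637e-5, 80.451, 0.120
    gamma_e = G_F * M_W**3 / (6 * math.pi * math.sqrt(2))
    print(gamma_e)                                        # ~0.228 GeV

    # Three leptonic channels plus two QCD-enhanced hadronic channels
    gamma_tot = gamma_e * (3 + 6 * (1 + alpha_s / math.pi))
    print(gamma_tot)                                      # ~2.10 GeV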
W pair production
We shall outline a calculation (Quigg 1983) which indicates that the weak interactions cannot possibly be complete if described only by charged-current interactions. We consider the process ν_e(q) + ν̄_e(q′) → W⁺(k) + W⁻(k′) due to exchange of an electron e⁻ with momentum p. The matrix element is, schematically,

M^{(λ,λ′)} = (g²/2) v̄(q′) ε̸^{(λ′)*}(k′) (p̸/p²) ε̸^{(λ)*}(k) [(1 − γ_5)/2] u(q) ,  p = q − k .

For a longitudinally polarized W⁺, this matrix element grows in an unacceptable fashion for high energy. In fact, an inelastic amplitude for any given partial wave has to be bounded, whereas M^{(λ,λ′)} will not be.
The polarization vector for a longitudinal W⁺ traveling along the z axis with momentum k is

ε_μ^{(L)}(k) = (|k|, 0, 0, E_k)/M_W = k_μ/M_W + O(M_W/|k|) ,

with a correction which vanishes as |k| → ∞. Replacing ε_ν^{(λ)}(k) by k_ν/M_W, using k̸ = q̸ − p̸ and q̸ u(q) = 0, we find that the electron propagator is cancelled, leaving an amplitude, summed over lepton polarizations, which grows without limit as the center-of-mass energy increases.
This quantity contributes only to the lowest two partial waves, and grows without bound as the energy increases. Such behavior is not only unacceptable on general grounds because of the boundedness of inelastic amplitudes, but it leads to divergences in higher-order perturbation contributions, e.g., to elastic νν̄ scattering.
Two possible contenders for a solution of the problem in the early 1970s were (1) a neutral gauge boson Z 0 coupling to νν and W + W − (Glashow 1961, Weinberg 1967, Salam 1968), or (2) a left-handed heavy lepton E + (Georgi and Glashow 1972a) coupling to ν e W + . Either can reduce the unacceptable high-energy behavior to a constant. The Z 0 alternative seems to be the one selected in nature. In what follows we will retrace the steps of the standard electroweak theory, which led to the prediction of the W and Z and all the phenomena associated with them.
Electroweak unification

Guidelines for symmetry
We now return to the question of what to do with the "neutral W" (the particle we called W³ in the previous Section), a puzzle since the time of Oskar Klein in the 1930s. The time component of the charged weak current,

J^{(+)}_μ = N̄_L γ_μ L_L + Ū_L γ_μ V D_L ,  J^{(−)}_μ = [J^{(+)}_μ]† ,

where N_L and L_L are neutral and charged lepton column vectors defined in analogy with U_L and D_L, may be used to define operators Q^{(±)} ≡ ∫ d³x J^{(±)}_0 which are charge-raising and -lowering members of an SU(2) triplet. If we define Q_3 ≡ (1/2)[Q^{(+)}, Q^{(−)}], the algebra closes: [Q_3, Q^{(±)}] = ±Q^{(±)}. This serves to normalize the weak currents, as mentioned in the Introduction.
The form (94) (with unitary V) guarantees that the corresponding neutral current will be

J^{(3)}_μ = (1/2) [ N̄_L γ_μ N_L − L̄_L γ_μ L_L + Ū_L γ_μ U_L − D̄_L γ_μ D_L ] ,

which is diagonal in neutral currents: the matrix V drops out by unitarity. This can only succeed, of course, if there are equal numbers of charged and neutral leptons, and equal numbers of charge 2/3 and charge −1/3 quarks.
It would have been possible to define an SU(2) algebra making use only of a doublet

(u, d cos θ_C + s sin θ_C)_L

(Gell-Mann and Lévy 1960), which was the basis of the Cabibbo (1963) theory of the charge-changing weak interactions of strange and nonstrange particles. If one takes V_ud = cos θ_C, V_us = sin θ_C, as is assumed in the Cabibbo theory, the u, d, s contribution to the neutral current J^{(3)}_μ is

(1/2) [ ū_L γ_μ u_L − cos²θ_C d̄_L γ_μ d_L − sin²θ_C s̄_L γ_μ s_L − sin θ_C cos θ_C (d̄_L γ_μ s_L + s̄_L γ_μ d_L) ] .

This expression contains strangeness-changing neutral currents, leading to the expectation of many processes like K⁺ → π⁺νν̄, K⁰_L → μ⁺μ⁻, ..., at levels far above those observed. It was the desire to banish strangeness-changing neutral currents that led Glashow et al. (1970) to introduce the charmed quark c (proposed earlier by several authors on the basis of a quark-lepton analogy) and the doublet

(c, −d sin θ_C + s cos θ_C)_L .

In this four-quark theory, one assumes the corresponding matrix V is unitary. By suitable phase changes of the quarks, all elements can be made real, making V an orthogonal matrix with V_ud = V_cs = cos θ_C, V_us = −V_cd = sin θ_C. Instead of (98) one then has

(1/2) [ ū_L γ_μ u_L + c̄_L γ_μ c_L − d̄_L γ_μ d_L − s̄_L γ_μ s_L ] ,

which contains no flavor-changing neutral currents.
The charmed quark also plays a key role in higher-order charged-current interactions. Let us consider K 0 -K 0 mixing. The CP-conserving limit in which the eigenstates are K 1 (even CP) and K 2 (odd CP) can be illustrated using a degenerate two-state system such as the first excitations of a drum head. There is no way to distinguish between the basis states illustrated in Fig. 9(a), in which the nodal lines are at angles of ±45 • with respect to the horizontal, and those in Fig. 9(b), in which they are horizontal and vertical.
If a fly lands on the drum-head at the point marked "×", the basis (b) corresponds to eigenstates. One of the modes couples to the fly; the other doesn't. The basis in (a) is like that of (K⁰, K̄⁰), while that in (b) is like that of (K₁, K₂). Neutral kaons are produced as in (a), while they decay as in (b), with the fly analogous to the ππ state. The short-lived state (K₁, in this CP-conserving approximation) has a lifetime of 0.089 ns, while the long-lived state (≃ K₂) lives ∼ 600 times as long, for 52 ns. Classical illustration of CP-violating mixing is more subtle but can be achieved as well, for instance in a rotating reference frame (Rosner and Slezak 2001, Kostelecký and Roberts 2001).
The shared ππ intermediate state and other low-energy states like π 0 , η, and η ′ are chiefly responsible for CP-conserving K 0 -K 0 mixing. However, one must ensure that large short-distance contributions do not arise from diagrams such as those illustrated in Figure 10.
If the only charge 2/3 quark contributing to this process were the u quark, one would expect a contribution to Δm_K of order

Δm_K ∼ (G_F²/16π²) sin²θ_C cos²θ_C f_K² m_K M_W² ,

where f_K is the amplitude for d̄s to be found in a K⁰, and the factor of 16π² is characteristic of loop diagrams. This is far too large, since Δm_K ∼ Γ_{K_S} ∼ G_F² f_K² m_K³. However, the introduction of the charmed quark, coupling to −d sin θ_C + s cos θ_C, cancels the leading contribution, leading to an additional factor of [(m_c² − m_u²)/M_W²] ln(M_W²/m_c²) in the above expression. Using such arguments Glashow et al. (1970) and Gaillard and Lee (1974) estimated the mass of the charmed quark to be less than several GeV. (Indeed, early candidates for charmed particles had been seen by Niu, Mikumo, and Maeda 1971.) The discovery of the J/ψ (Aubert et al. 1974, Augustin et al. 1974) confirmed this prediction; charmed hadrons produced in neutrino interactions (Cazzoli et al. 1975) and in e⁺e⁻ annihilations (Goldhaber et al. 1976, Peruzzi et al. 1976) followed soon after.

Figure 10. Higher-order weak contributions to K⁰-K̄⁰ mixing due to loops with internal u, c, t quarks.
An early motivation for charm relied on an analogy between quarks and leptons. Hara (1964), Maki and Ohnuki (1964), and Bjorken and Glashow (1964) inferred the existence of a charmed quark coupling mainly to the strange quark from the existence of the μ⁻-ν_μ doublet:

(ν_μ, μ⁻)_L ↔ (c, s)_L .

Further motivation for the quark-lepton analogy was noted by Bouchiat et al. (1972), Georgi and Glashow (1972b), and Gross and Jackiw (1972). In a gauge theory of the electroweak interactions, triangle anomalies associated with graphs of the type shown in Figure 11 have to be avoided. This cancellation requires the fermions f in the theory to contribute a total of zero to the sum over f of Q_f² I^f_{3L}. Such a cancellation can be achieved by requiring quarks and leptons to occur in complete families, so that the contributions

Quarks: 3 [ (2/3)²(1/2) + (−1/3)²(−1/2) ] = 1/2 ,  Leptons: (0)²(1/2) + (−1)²(−1/2) = −1/2

sum to zero for each family.
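The family-by-family cancellation is a one-line exercise in exact arithmetic:

    from fractions import Fraction

    half = Fraction(1, 2)
    # Sum over one family of Q^2 * I_3L, counting three colors per quark
    contributions = [
        3 * Fraction(2, 3) ** 2 * half,       # u quark (3 colors)
        3 * Fraction(-1, 3) ** 2 * (-half),   # d quark (3 colors)
        Fraction(0) ** 2 * half,              # neutrino
        Fraction(-1) ** 2 * (-half),          # charged lepton
    ]
    print(sum(contributions))                 # 0: the anomaly cancels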
We are then left with a flavor-preserving neutral current J^{(3)}_μ, given by (100), whose interpretation must still be given. It cannot correspond to the photon, since the photon couples to both left-handed and right-handed fermions. At the same time, the photon is somehow involved in the weak interactions associated with W exchange. In particular, the W± themselves are charged, so any theory in which electromagnetic current is conserved must involve a γW⁺W⁻ coupling. Moreover, the charge is sensitive to the third component of the SU(2) algebra we have just introduced. We shall refer to this as SU(2)_L, recognizing that only left-handed fermions ψ_L transform non-trivially under it. Then we can define a weak hypercharge Y in terms of the difference between the electric charge Q and the third component I_{3L} of SU(2)_L (weak isospin):

Q = I_{3L} + Y/2 .

Values of Y for quarks and leptons are summarized in Table 3.

Figure 11. Example of triangle diagram for which leading behavior must cancel in a renormalizable electroweak theory.

Table 3. Values of charge, I_{3L}, and weak hypercharge Y for quarks and leptons.
If you find these weak hypercharge assignments mysterious, you are not alone. They follow naturally in unified theories (grand unified theories) of the electroweak and strong interactions. A "secret formula" for Y , which may have deeper significance (Pati and Salam 1973), is Y = 2I 3R + (B − L), where I 3R is the third component of "right-handed" isospin, B is baryon number (1/3 for quarks), and L is lepton number (1 for leptons such as e − and ν e ). The orthogonal component of I 3R and B − L may correspond to a higher-mass, as-yet-unseen vector boson, an example of what is called a Z ′ . The search for Z ′ bosons with various properties is an ongoing topic of interest; current limits are quoted by the Particle Data Group (2000).
The gauge theory of charged and neutral W 's thus must involve the photon in some way. It will then be necessary, in order to respect the formula (105), to introduce an additional U(1) symmetry associated with weak hypercharge. The combined electroweak gauge group will have the form SU(2) L ⊗ U(1) Y .
Symmetry breaking
Any unified theory of the weak and electromagnetic interactions must be broken, since the photon is massless while the W bosons (at least) are not. An explicit mass term in a gauge theory of the form m 2 A i µ A µi violates gauge invariance. It is not invariant under the replacement (26). Another means must be found to introduce a mass. The symmetry must be broken in such a way as to preserve gauge invariance.
A further manifestation of symmetry breaking is the presence of fermion mass terms. Any product ψ̄ψ may be written as

ψ̄ψ = ψ̄_L ψ_R + ψ̄_R ψ_L ,

using the fact that ψ̄_L = ψ̄(1 + γ_5)/2, ψ̄_R = ψ̄(1 − γ_5)/2. Since ψ_L transforms as an SU(2)_L doublet but ψ_R as an SU(2)_L singlet, a mass term proportional to ψ̄ψ transforms as an overall SU(2)_L doublet. Moreover, the weak hypercharges of left-handed fermions and their right-handed counterparts are different. Hence one cannot even have explicit fermion mass terms in the Lagrangian and hope to preserve local gauge invariance.
One way to generate a fermion mass without explicitly violating gauge invariance is to assume the existence of a complex scalar SU(2)_L doublet φ coupled to fermions via a Yukawa interaction:

L_Y = −Γ ψ̄_L φ ψ_R + h.c.

Thus, for example, with ψ̄_L = (ν̄_e, ē)_L and ψ_R = e_R, we have

L_Y = −Γ_e [ ν̄_{eL} φ⁺ + ē_L φ⁰ ] e_R + h.c.

If φ⁰ acquires a vacuum expectation value, ⟨φ⁰⟩ ≠ 0, this quantity will automatically break SU(2)_L and U(1)_Y, and will give rise to a non-zero electron mass. A neutrino mass is not generated, simply because no right-handed neutrino has been assumed to exist.
(We shall see in the last Section how to generate the tiny neutrino masses that appear to be present in nature.) The gauge symmetry is not broken in the Lagrangian, but only in the solution. This is similar to the way in which rotational invariance is broken in a ferromagnet, where the fundamental interactions are rotationally invariant but the ground-state solution has a preferred direction along which the spins are aligned.
The d quark masses are generated by similar couplings involving (ū, d̄)_L and d_R. To generate u quark masses one must either use the conjugate multiplet

φ̃ ≡ iτ₂ φ* = (φ̄⁰, −φ⁻) ,

which also transforms as an SU(2) doublet, or a separate doublet of scalar fields

φ′ = (φ′⁰, φ′⁻) .

With ψ̄_L = (ū, d̄)_L and ψ_R = u_R, we then find

L_Y = −Γ_u [ ū_L φ̄⁰ − d̄_L φ⁻ ] u_R + h.c.

if we make use of φ̃, or the corresponding expression with primed fields if we use φ′. For present purposes we shall assume the existence of a single complex doublet, though many theories (notably, some grand unified theories or supersymmetry) require more than one.
Scalar fields and the Higgs mechanism
Suppose a complex scalar field of the form (107) is described by a Lagrangian

L_φ = (∂_μ φ)†(∂^μ φ) + μ² φ†φ − λ (φ†φ)² .

Note the "wrong" sign of the mass term. This Lagrangian is invariant under SU(2)_L ⊗ U(1)_Y. The field φ will acquire a constant vacuum expectation value, which we calculate by asking for the stationary value of L_φ:

⟨φ†φ⟩ = μ²/2λ .

We still have not specified which component of φ acquires the vacuum expectation value. At this point only φ†φ = |φ⁺|² + |φ⁰|² is fixed, and (Re φ⁺, Im φ⁺, Re φ⁰, Im φ⁰) can range over the surface of a four-dimensional sphere. The Lagrangian (114) is, in fact, invariant under rotations of this four-dimensional sphere, a group SO(4) isomorphic to SU(2) ⊗ SU(2). A lower-dimensional analogue of this surface would be the bottom of a wine bottle along which a marble rolls freely in an orbit a fixed distance from the center.
Let us define the vacuum expectation value of φ to be a real parameter in the φ⁰ direction:

⟨φ⟩ = (1/√2)(0, v) ,  v ≡ (μ²/λ)^{1/2} .

The factor of 1/√2 is introduced for later convenience. We then find, from the discussion in the previous section, that Yukawa couplings of φ to fermions ψ_i generate mass terms

m_i = Γ_i v/√2 .

We must now see what such vacuum expectation values do to gauge boson masses. (For numerous illustrations of this phenomenon in simple field-theoretical models see Abers and Lee 1973, Quigg 1983, and Peskin and Schroeder 1995.) In order to introduce gauge interactions with the scalar field φ, one must replace ∂_μ by D_μ in the kinetic term of the Lagrangian (114). Here

D_μ = ∂_μ − (ig/2) τ · W_μ − (ig′/2) Y B_μ ,

where the U(1)_Y interaction is characterized by a coupling constant g′ and a gauge field B_μ, and we have written g for the SU(2) coupling discussed earlier. It will be convenient to write φ in terms of four independent real fields (ξ_i, η) in a slightly different form:

φ = exp( iξ · τ/2v ) (0, (v + η)/√2) .

We then perform an SU(2)_L gauge transformation to remove the ξ dependence of φ, and rewrite it as

φ = (0, (v + η)/√2) .

The fermion and gauge fields are transformed accordingly; we rewrite the Lagrangian for them in the new gauge. The resulting kinetic term for the scalar fields, taking account that Y = 1 for the Higgs field (107), is

L_{K,φ} = | [ ∂_μ − (ig/2) τ · W_μ − (ig′/2) B_μ ] φ |² .

This term contains several contributions.
1. There is a kinetic term (1/2) ∂_μη ∂^μη for the physical scalar field η. 2. Terms mixing ∂_μη with the gauge fields are absent in this gauge. 3. There are W Wη, BBη, W Wη², and BBη² interactions. 4. The v² term leads to a mass term for the Yang-Mills fields:

(v²/8) [ g² ((W¹_μ)² + (W²_μ)²) + (g W³_μ − g′ B_μ)² ] .

The spontaneous breaking of the SU(2) ⊗ U(1) symmetry thus has led to the appearance of a mass term for the gauge fields. This is an example of the Higgs mechanism (Higgs 1964). An unavoidable consequence is the appearance of the scalar field η, the Higgs field. We shall discuss it further in Section 5.
The masses of the charged W bosons may be identified by comparing Eqs. (121) and (75):

M_W = gv/2 .

Since the Fermi constant is related to g/M_W, one finds

v = (√2 G_F)^{−1/2} ≃ 246 GeV .

The combination gW³_μ − g′B_μ also acquires a mass. We must normalize this combination suitably so that it contributes properly in the kinetic term for the Yang-Mills fields:

Z_μ ≡ (g W³_μ − g′ B_μ)/(g² + g′²)^{1/2} .

Defining

cos θ ≡ g/(g² + g′²)^{1/2} so that sin θ = g′/(g² + g′²)^{1/2} ,

we may write the normalized combination ∼ gW³_μ − g′B_μ which acquires a mass as

Z_μ = cos θ W³_μ − sin θ B_μ .

The orthogonal combination does not acquire a mass. It may then be identified as the photon:

A_μ = sin θ W³_μ + cos θ B_μ .

The mass of the Z is given by

M_Z = (g² + g′²)^{1/2} v/2 = M_W/cos θ ,

using (126) in the last relation. The W's and Z's have acquired masses, but they are not equal unless g′ were to vanish. We shall see in the next subsection that both g and g′ are nonzero, so one expects the Z to be heavier than the W.
It is interesting to stop for a moment to consider what has taken place. We started with four scalar fields φ⁺, φ⁻, φ⁰, and φ̄⁰. Three of them [φ⁺, φ⁻, and the combination (φ⁰ − φ̄⁰)/i√2] could be absorbed in the gauge transformation in passing from (118) to (119), which made sense only as long as (φ⁰ + φ̄⁰)/√2 had a vacuum expectation value v. The net result was the generation of mass for three gauge bosons W⁺, W⁻, and Z.
If we had not transformed away the three components ξ_i of φ in (118), the term L_{K,φ} in the presence of gauge fields would have contained contributions W^μ ∂_μ φ which mixed gauge fields and derivatives of φ. These can be expressed as

W^μ ∂_μ φ = ∂_μ (W^μ φ) − (∂_μ W^μ) φ

and the total divergence (the first term) discarded. One thus sees that such terms mix longitudinal components of gauge fields (proportional to ∂_μW^μ) with scalar fields. It is necessary to redefine the gauge fields by means of a gauge transformation to get rid of such mixing terms. It is just this transformation that was anticipated in passing from (118) to (119).
The three "unphysical" scalar fields provide the necessary longitudinal degrees of freedom in order to convert the massless W ± and Z to massive fields. Each massless field possesses only two polarization states (J z = ±J), while a massive vector field has three (J z = 0 as well). Such counting rules are extremely useful when more than one Higgs field is present, to keep track of how many scalar fields survive being "eaten" by gauge fields.
Interactions in the SU(2) ⊗ U(1) theory
By introducing gauge boson masses via the Higgs mechanism, and letting the simplest non-trivial representation of scalar fields acquire a vacuum expectation value v, we have related the Fermi coupling constant to v, and the gauge boson masses to gv or (g 2 +g ′ 2 ) 1/2 v. We still have two arbitrary couplings g and g ′ in the theory, however. We shall show how to relate the electromagnetic coupling to them, and how to measure them separately.
The interaction of fermions with gauge fields is described by the kinetic term L_{K,ψ} = ψ̄ iD̸ ψ. Here, as usual,

D_μ = ∂_μ − (ig/2) τ · W_μ − (ig′/2) Y B_μ

when acting on left-handed doublets. The charged-W interactions have already been discussed. They are described by the terms (76) for leptons and (80) for quarks. The interactions of W³ and B may be re-expressed in terms of A and Z via the inverse of (127) and (128):

W³_μ = cos θ Z_μ + sin θ A_μ ,  B_μ = −sin θ Z_μ + cos θ A_μ .

Then the covariant derivative for neutral gauge bosons is

D_μ ⊃ ∂_μ − iA_μ [ g sin θ I_{3L} + g′ cos θ (Q − I_{3L}) ] − iZ_μ [ g cos θ I_{3L} − g′ sin θ (Q − I_{3L}) ] .

Here we have substituted Y/2 = (Q − I_{3L}). We identify the electromagnetic contribution to the right-hand side of (133) with the familiar one −ieQ A̸, so that e = g′ cos θ = g sin θ .
The second equality, stemming from the demand that I_{3L} A̸ terms cancel one another in (133), is automatically satisfied as a result of the definition (126). Combining (126) and (134), we find 1/e² = 1/g² + 1/g′², the result advertised in the Introduction.
The interaction of the Z with fermions may be determined from Eq. (133) with the help of (126), noting that g cos θ + g′ sin θ = (g² + g′²)^{1/2} and g′ sin θ = (g² + g′²)^{1/2} sin²θ. We find

L_Z = −(g² + g′²)^{1/2} Z_μ Σ_f ψ̄_f γ^μ [ I_{3L} (1 − γ_5)/2 − Q sin²θ ] ψ_f .

Knowledge of the weak mixing angle θ will allow us to predict the W and Z masses. Using G_F/√2 = g²/8M_W² and g sin θ = e, we can write

M_W² = πα/(√2 G_F sin²θ) ,  i.e.,  M_W ≃ 37.3 GeV/sin θ ,

if we were to use α⁻¹ = 137.036. However, we shall see in the next Section that it is more appropriate to use a value of α⁻¹ ≃ 129 at momentum transfers characteristic of the W mass. With this and other electroweak radiative corrections, the correct estimate is raised to M_W ≃ 38.6 GeV/sin θ, leading to the successful predictions (7). The Z mass is expressed in terms of the W mass by M_Z = M_W/cos θ.
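The numerical effect of running α is easy to see; the sketch below evaluates the lowest-order relation with both values (the remaining gap to 38.6 GeV comes from the other electroweak corrections mentioned above):

    import math

    G_F, s2 = 1.16637e-5, 0.23
    for inv_alpha in (137.036, 129.0):
        prefactor = math.sqrt(math.pi / (inv_alpha * math.sqrt(2) * G_F))
        print(inv_alpha, prefactor, prefactor / math.sqrt(s2))
    # prefactor 37.3 GeV -> 38.4 GeV; M_W then lands near 80 GeV
    # for sin^2(theta) ~ 0.23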
Neutral current processes
The interactions of Z's with matter, given by Eq. (138), may be taken to second order in perturbation theory, leading to an effective four-fermion theory for momentum transfers much smaller than the Z mass. In analogy with the relation between the W boson interaction terms (76) and (80) and the four-fermion charged-current interaction (71), we may write

H_W^{NC} = (4G_F/√2) J_Z^μ J_{Zμ} ,  J_Z^μ ≡ Σ_f ψ̄_f γ^μ [ I_{3L} (1 − γ_5)/2 − Q sin²θ ] ψ_f ,

where we have used the identity (g² + g′²)/8M_Z² = G_F/√2 following from relations in the previous subsection.
Many processes are sensitive to the neutral-current interaction (139), but no evidence for this interaction had been demonstrated until the discovery in 1973 of neutral-current interactions on hadronic targets of deeply inelastically scattered neutrinos (Hasert et al. 1973; Benvenuti et al. 1974). For many years these processes provided the most sensitive measurement of neutral-current parameters. Other crucial experiments (see, e.g., reviews by Amaldi et al. 1987 and Langacker et al. 1992) included polarized electron or muon scattering on nucleons, asymmetries and total cross sections in e⁺e⁻ → μ⁺μ⁻ or τ⁺τ⁻, parity violation in atomic transitions, neutrino-electron scattering, coherent π⁰ production on nuclei by neutrinos, and detailed measurements of W and Z properties. Let us take as an example the scattering of leptons on quarks to see how they provide a value of sin²θ. In the next subsection we shall turn to the properties of the Z bosons, which are now the source of the most precise information.
One measures the ratios of neutral-current to charged-current deep inelastic cross sections,
$$ R_\nu \equiv \frac{\sigma(\nu N \to \nu X)}{\sigma(\nu N \to \mu^- X)}, \qquad R_{\bar\nu} \equiv \frac{\sigma(\bar\nu N \to \bar\nu X)}{\sigma(\bar\nu N \to \mu^+ X)}, \qquad r \equiv \frac{\sigma(\bar\nu N \to \mu^+ X)}{\sigma(\nu N \to \mu^- X)}. $$
These ratios may be calculated in terms of the weak Hamiltonians (71) and (139). It is helpful to note that for states of the same helicity (L or R, standing for left-handed or right-handed) scattering on one another, the differential cross section is a constant, $d\sigma/dy = \sigma_0$, where $\sigma_0$ is some reference cross section, while for states of opposite helicity $d\sigma/dy = \sigma_0(1-y)^2$, so that the integrated cross sections stand in the ratio $3:1$. We first simplify the calculation by assuming the numbers of protons and neutrons are equal in the target nucleus, and neglecting the effect of antiquarks in the nucleon. (We shall use the shorthand $\nu = \nu_\mu$ and $\bar\nu = \bar\nu_\mu$.) Taking account of the relations (143), one finds the neutral-current to charged-current ratios, where we have used the fact that $\sigma(\nu d \to \mu^- u) = 3\,\sigma(\bar\nu u \to \mu^+ d)$. The results are
$$ R_\nu = \frac12 - \sin^2\theta + \frac{20}{27}\sin^4\theta, \qquad R_{\bar\nu} = \frac12 - \sin^2\theta + \frac{20}{9}\sin^4\theta . $$
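A minimal sketch of how these expressions arise (standard conventions assumed; $s \equiv \sin^2\theta$): the chiral $Z$ couplings of the quarks are
$$ \epsilon_L(u) = \tfrac12 - \tfrac23 s, \quad \epsilon_L(d) = -\tfrac12 + \tfrac13 s, \quad \epsilon_R(u) = -\tfrac23 s, \quad \epsilon_R(d) = \tfrac13 s, $$
so that
$$ g_L^2 \equiv \epsilon_L(u)^2 + \epsilon_L(d)^2 = \tfrac12 - s + \tfrac59 s^2, \qquad g_R^2 \equiv \epsilon_R(u)^2 + \epsilon_R(d)^2 = \tfrac59 s^2, $$
and, with $r = 1/3$ for a valence-quark-only isoscalar target,
$$ R_\nu = g_L^2 + \tfrac13 g_R^2, \qquad R_{\bar\nu} = g_L^2 + 3 g_R^2, $$
which reproduce the results quoted above.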
Some experimental values of $R_\nu$, $R_{\bar\nu}$, and $r$ are shown in Table 4 (Conrad et al. 1998).
The relation between $R_\nu$ and $R_{\bar\nu}$ as a function of $\sin^2\theta$ is plotted in Figure 12. This result has a couple of interesting features.
The observed $R_{\bar\nu}$ is very close to its minimum possible value of just below 0.4. Initially this made the observation of neutral currents quite challenging. Note that $R_\nu$ is even smaller. Its value provides the greatest sensitivity to $\sin^2\theta$. It is also more precisely measured than $R_{\bar\nu}$ (in part, because neutrino beams are easier to achieve than antineutrino beams). The effect of $r$ on the determination of $\sin^2\theta$ is relatively mild.
A recent determination of $\sin^2\theta$ (Zeller et al. 1999), based on a method proposed by Paschos and Wolfenstein (1973), makes use of the ratio of differences of neutrino and antineutrino cross sections. In these differences, effects of virtual quark-antiquark pairs in the nucleon ("sea quarks," as opposed to "valence quarks") cancel one another, and an important systematic error associated with heavy quark production (as in $\nu s \to \mu^- c$) is greatly reduced. The result is
$$ \sin^2\theta^{\rm (on\text{-}shell)} = 0.2253 \pm 0.0019({\rm stat.}) \pm 0.0010({\rm syst.}), $$
which implies a $W$ mass $M_W \equiv M_Z \cos\theta^{\rm (on\text{-}shell)} = 80.21 \pm 0.11$ GeV.
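For reference, the Paschos-Wolfenstein ratio in question (a standard result, reconstructed here) is
$$ R^- \equiv \frac{\sigma_{NC}^{\nu N} - \sigma_{NC}^{\bar\nu N}}{\sigma_{CC}^{\nu N} - \sigma_{CC}^{\bar\nu N}} = g_L^2 - g_R^2 = \frac12 - \sin^2\theta , $$
in which the $\sin^4\theta$ terms, and with them the sea-quark contributions, cancel.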
The "on-shell" designation for sin 2 θ is necessary when discussing higher-order electroweak radiative corrections, which we shall do in the next Section.
[Note added: a more recent analysis by Zeller et al. (2001) finds $\sin^2\theta^{\rm (on\text{-}shell)} = 0.2277 \pm 0.0014({\rm stat.}) \pm 0.0008({\rm syst.})$, equivalent to $M_W = 80.136 \pm 0.084$ GeV. Incorporation of this result into the electroweak fits described in the next Section is likely to somewhat relax constraints on the Higgs boson mass: see Rosner (2001).]
Z and top quark properties
We have already noted the prediction and measurement of the $W$ mass and width. In much of the subsequent discussion we shall make use of the very precise value of $M_Z$ as one of our inputs to the electroweak theory; the two others, which will suffice to specify all parameters at lowest order of perturbation theory, will be the Fermi coupling constant $G_F = 1.16637(1)\times10^{-5}~{\rm GeV}^{-2}$ and the electromagnetic fine-structure constant, evolved to a scale $M_Z^2$: $\alpha^{-1}_{\overline{\rm MS}}(M_Z^2) = 128.933 \pm 0.021$ (Davier and Höcker 1998). This last quantity depends for its determination upon a precise evaluation of hadronic contributions to vacuum polarization, and is very much the subject of current discussion.
The relative branching fractions of the $Z$ to various final states may be calculated on the basis of Eq. (138), written in terms of chiral couplings $a_L$ and $a_R$; the values of $a_L$ and $a_R$ for each fermion are shown in Table 5.
The partial width of the $Z$ into $f\bar f$ is proportional to $n_c(a_L^2 + a_R^2)$, where $n_c$ is the number of colors of fermion $f$: 1 for leptons, 3 for quarks.
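The missing expression is presumably of the standard form (normalization assuming $a_L = I_{3L} - Q\sin^2\theta$, $a_R = -Q\sin^2\theta$):
$$ \Gamma(Z \to f\bar f) = \frac{G_F M_Z^3}{3\sqrt2\,\pi}\, n_c\, (a_L^2 + a_R^2), $$
which for $a_L(\nu) = 1/2$, $a_R(\nu) = 0$ gives $\Gamma(Z\to\nu\bar\nu) \simeq 166$ MeV, and for $\sin^2\theta = 0.23$ gives $\Gamma(Z\to e^+e^-) \simeq 83.4$ MeV, matching the numbers quoted below.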
The predicted partial width for each $Z \to \nu\bar\nu$ channel is independent of $\sin^2\theta$, using the observed value of $M_Z$. The partial decay rates to other channels are expected to be in the ratios
$$ \nu\bar\nu : e^+e^- : u\bar u : d\bar d = 1 : 1 - 4\sin^2\theta + 8\sin^4\theta : 3 - 8\sin^2\theta + \frac{32}{3}\sin^4\theta : 3 - 4\sin^2\theta + \frac{8}{3}\sin^4\theta . $$
For the $b\bar b$ channel there is an additional kinematic suppression factor $\Phi_{b\bar b}$ associated with the $b$ quark mass, where $f_V$ and $f_A = 1 - f_V$ are the relative fractions of the partial decay width proceeding via the vector ($\sim a_L + a_R$) and axial-vector ($\sim a_L - a_R$) couplings. For $\sin^2\theta = 0.23$, $f_V \simeq 1/3$, $f_A \simeq 2/3$, and $\Phi_{b\bar b} \simeq 0.988$. A further correction to $\Gamma(Z \to b\bar b)$, important for the precise determinations in the next Section, is associated with loop graphs involving top quark exchange (see the review by Chivukula 1995), and is of the same size, about 0.988. Taking a correction factor $(1 + \alpha_S/\pi)$ with $\alpha_S(M_Z^2) = 0.12$ for the hadronic partial widths of the $Z$, we then predict the contributions to $\Gamma_Z$ listed in Table 5. (The $t\bar t$ channel is, of course, kinematically forbidden.) The measured $Z$ width (157) is in qualitative agreement with the prediction, but above it by about 0.7%. This effect is a signal of higher-order electroweak radiative corrections such as loop diagrams involving the top quark and the Higgs boson. Similarly, the observed value of $\Gamma(Z \to e^+e^-)$, assuming lepton universality, is $83.984 \pm 0.086$ MeV, again higher by 0.7% than the predicted value of 83.4 MeV. We shall return to these effects in the next Section.
The width of the $Z$ is sensitive to additional $\nu\bar\nu$ pairs. Clearly there is no room for an additional light pair coupling with full strength. Taking account of all precision data and electroweak corrections, the latest determination of the "invisible" width of the $Z$ (see the compilations by the LEP EWWG 2001 and by Langacker 2001) fixes the number of "light" neutrino species as $N_\nu = 2.984 \pm 0.008$.
The $Z$ is produced copiously in $e^+e^-$ annihilations when the center-of-mass energy $\sqrt{s}$ is tuned to $M_Z$. The Stanford Linear Collider (SLC) and the Large Electron-Positron Collider at CERN (LEP) exploited this feature. The cross section for production of a final state $f$ near the resonance, ignoring the effect of the virtual photon in the direct channel, has the familiar resonance form. At resonance, the peak total cross section should be $\sigma_{\rm peak} = 12\pi B_{e^+e^-}/M_Z^2 \simeq 59.4$ nb, corresponding to a spectacular value of $R$, which is only a few units in the range of lower-energy $e^+e^-$ colliders. Here $B_{e^+e^-} \equiv \Gamma(Z^0 \to e^+e^-)/\Gamma_Z \simeq 3.37\%$. Of course, not all of the cross section at the $Z$ peak is visible: nearly 12 nb goes into neutrinos! Another 6 nb goes into charged lepton pairs, leaving $\sigma_{\rm peak,\,hadrons} = 41.541 \pm 0.037$ nb.
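In the narrow-resonance (Breit-Wigner) approximation the cross section and its peak value are (a standard reconstruction)
$$ \sigma_f(s) = \frac{12\pi}{M_Z^2}\,\frac{s\,\Gamma_{e^+e^-}\Gamma_f}{(s-M_Z^2)^2 + M_Z^2\Gamma_Z^2}, \qquad \sigma_f^{\rm peak} = \frac{12\pi}{M_Z^2}\,B_{e^+e^-}B_f . $$
Numerically, $12\pi/M_Z^2 \simeq 1.76\times10^3$ nb, so summing over all channels ($\sum_f B_f = 1$) gives $\sigma_{\rm peak} \simeq 1.76\times10^3 \times 0.0337 \simeq 59$ nb, as quoted.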
We close this subsection with a brief discussion of spin-dependent asymmetries at the $Z$. These are some of the most powerful sources of information on $\sin^2\theta$. They have been measured both at LEP (through forward-backward asymmetries) and at SLC (through the use of polarized electron beams).
The discussion makes use of an elementary feature of vector and axial-vector couplings. Processes involving such couplings to a real or virtual particle (such as the $Z$) always conserve chirality. In the direct-channel reactions $e^-e^+ \to Z \to f\bar f$ this means that a left-(right-)handed electron only interacts with a right-(left-)handed positron, and if the final fermion $f$ is left-(right-)handed then the final antifermion $\bar f$ will be right-(left-)handed. Moreover, such reactions have characteristic angular distributions, proportional to $(1 \pm \cos\theta^*)^2$ depending on the helicity combination, up to a common factor $\sigma_0$; the $a_{L,R}$ are given in Table 5. Several asymmetries can be formed using these results.
The polarized electron left-right asymmetry compares the cross sections for producing fermions using right-handed and left-handed polarized electrons, as can be produced and monitored at the SLC. The measured value (LEP EWWG 2001) $A_{LR}^e({\rm hadrons}) = 0.1514 \pm 0.0022$ corresponds to $\sin^2\theta = 0.23105 \pm 0.00028$ using this formula. (We shall discuss small corrections in the next Section.) The forward-backward asymmetry in $e^+e^- \to f\bar f$ uses the angular dependence just described. These quantities can be measured not only for charged leptons, but also for quarks such as the $b$, whose decays allow for a distinction to be made (at least on a statistical basis) between $b$ and $\bar b$.
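The underlying formulas (reconstructed here in a common convention) are
$$ A_{LR} = \frac{\sigma_L - \sigma_R}{\sigma_L + \sigma_R} = A_e, \qquad A_f \equiv \frac{a_{Lf}^2 - a_{Rf}^2}{a_{Lf}^2 + a_{Rf}^2}, \qquad A_{FB}^f = \frac34\, A_e A_f . $$
For $\sin^2\theta = 0.231$ one finds $A_e \simeq 0.151$, consistent with the measured value quoted above; the smallness of $A_e$ reflects the near-equality of $a_{Le}^2$ and $a_{Re}^2$ for charged leptons.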
The discovery of the top quark by the CDF (1994) and D0 (1995) Collaborations culminated nearly two decades of detector and machine work at the Fermilab Tevatron. A ring of superconducting magnets was added to the 400 GeV Fermilab accelerator, more than doubling its energy. Low-energy rings were added to accumulate and store antiprotons, which were then injected into the superconducting ring and made to collide with oppositely-directed protons at a center-of-mass energy of 1.8 TeV. The top quarks were produced in the reaction $\bar pp \to t\bar t + \ldots$
The top quark's mass is currently measured to be $m_t = 174.3 \pm 5.1$ GeV. It couples mainly to $b$, as expected in the pattern of couplings discussed in Section 3. One determination (see Gilman, Kleinknecht, and Renk 2000 for details) makes use of the measured fraction of the decays $t \to be^+\nu_e$ in top semileptonic decays.
The top quark is the only quark heavy enough to decay directly to another quark (mainly $b$) and a real $W$. Its decay width is larger than the typical spacing between quarkonium levels (see Figures 4 and 5), and so there is not expected to be a rich spectroscopy of $t\bar t$ levels, but only a mild enhancement near threshold of the reaction $e^+e^- \to t\bar t$, associated with the production of the 1S level (Kwong 1991, Strassler and Peskin 1991). A good review of present and anticipated top quark physics is given by Willenbrock (2000).
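The decay width referred to is presumably the standard tree-level result
$$ \Gamma(t \to bW^+) = \frac{G_F m_t^3}{8\sqrt2\,\pi}\,|V_{tb}|^2 \left(1 - \frac{M_W^2}{m_t^2}\right)^2 \left(1 + \frac{2M_W^2}{m_t^2}\right) \simeq 1.5~{\rm GeV} $$
for $m_t = 174.3$ GeV, indeed much larger than typical quarkonium level spacings of a few hundred MeV.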
Searches for a standard Higgs boson
Let us assume that all quark and lepton masses and all $W$ and $Z$ masses arise from the vacuum expectation value of a single Higgs boson: $\langle\phi^0\rangle = v/\sqrt2$, where the strength of the Fermi coupling requires $v = 246$ GeV. The Yukawa coupling $g_{Yf}$ (107) for a fermion $f$ is related to the fermion's mass: $g_{Yf} = \sqrt2\,m_f/v$. (It is a curious feature of the top quark's mass that, within present errors, $g_{Yt} = 1$. Since fermion masses "run" with scale $\mu$, it is not clear how fundamental this relation is.) Those quarks with the greatest mass then are expected to have the greatest coupling to the physical Higgs boson $H = \sqrt2\,\phi^0 - v$. (Here we use $H$ to denote the field represented by $\eta$ in the previous Section.) The Higgs boson has a well-defined coupling to $W$'s and $Z$'s as a result of the discussion in the previous Section; the term $(D_\mu\phi)^\dagger(D^\mu\phi)$ in the Lagrangian leads to the couplings (174). To lowest order, one finds $\mathcal{L}_{HZ\gamma} = \mathcal{L}_{H\gamma\gamma} = 0$.
At very high energies, the Higgs boson can be produced by means of $W^+W^-$ and $ZZ$ fusion; the (virtual) $W$'s and $Z$'s can be produced in either hadron-hadron or lepton-lepton collisions. A further proposal for producing Higgs bosons is by means of muon-muon collisions.
For Higgs bosons far above $WW$ and $ZZ$ threshold, the width into vector boson pairs grows as the cube of the Higgs mass (Eichten et al. 1984), as one can show with the help of (174). The longitudinal degrees of freedom of the $W$ and $Z$ provide the dominant contribution to the decay width in this limit. For $M_H = 1$ TeV, this relation implies that the Higgs boson's width will be nearly 500 GeV. Such a broad object will be difficult to separate from background. However, mixed signals for a much lighter Higgs boson have already been received at LEP.
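The expected widths (standard results, consistent with the quoted numbers) are
$$ \Gamma(H \to W^+W^-) \simeq \frac{G_F M_H^3}{8\sqrt2\,\pi}, \qquad \Gamma(H \to ZZ) \simeq \frac{G_F M_H^3}{16\sqrt2\,\pi}, $$
so that $\Gamma(H \to WW + ZZ) \simeq 3G_F M_H^3/(16\sqrt2\,\pi) \simeq 0.49$ TeV for $M_H = 1$ TeV.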
At the very highest LEP energies attained, $\sqrt{s} \le 209$ GeV, the four LEP collaborations ALEPH, DELPHI, L3, and OPAL have presented combined results (LEP Higgs Working Group 2001) which may be interpreted either as a lower limit on the Higgs boson mass of 114.1 GeV, or as a weak signal for a Higgs boson of mass $M_H \simeq 115.6$ GeV produced by the above process. This latter interpretation is driven in large part by the ALEPH data sample (Barate et al. 2001). The main decay mode of a Higgs boson in this mass range is expected to be $b\bar b$, with $\tau^+\tau^-$ taking second place.
LEP has now ceased operation in order to make way for the Large Hadron Collider (LHC), which will collide 7 TeV protons with 7 TeV protons and should have no problem producing such a boson. The LHC is scheduled to begin operation in 2006. In the meantime, the Fermilab Tevatron has resumed $\bar pp$ collider operation after a hiatus of 5 years. Its scheduled "Run II" is initially envisioned to provide an integrated luminosity of 2 fb$^{-1}$, which is thought to be sufficient to rival the sensitivity of the LEP search (Carena et al. 2000), making use of the subprocess $q\bar q \to W^{\rm virtual} \to W + H$. With 10 fb$^{-1}$ per detector, a benchmark goal for several years of running with luminosity improvements, it should be possible to exclude a Higgs boson with standard couplings nearly up to the $ZZ$ threshold of 182 GeV, and to see a $3\sigma$ signal if $M_H \le 125$ GeV. Other scenarios, including the potential for discovering the Higgs boson(s) of the Minimal Supersymmetric Standard Model (MSSM), are given by Carena et al. (2000). Meanwhile, we shall turn to the wealth of precise measurements of electroweak properties of the $Z$, $W$, top quark, and lighter fermions as indirect sources of information about the Higgs boson and other new physics.
Precision electroweak tests
We have calculated processes to lowest electroweak order in the previous Section, with the exception that we took account of vacuum polarization in the photon propagator, which leads to a value of $\alpha^{-1}$ closer to 129 than to 137.036 at the mass scale of the $Z$. The lowest-order description was found to be adequate at the percent level, but many electroweak measurements are now an order of magnitude more precise. As one example, we found that the predicted total and leptonic $Z$ widths both fell short of the corresponding experimental values by about 0.7%. Higher-order electroweak corrections are needed to match the precision of the new data. These corrections can shed fascinating light on new physics, as well as validating the original motivation for the electroweak theory (which was to be able to perform higher-order calculations).
We shall describe a language introduced by Peskin and Takeuchi (1990) for precise electroweak tests which allows the constraints associated with nearly every observable to be displayed on a two-dimensional plot. The Standard Model implies a particular locus on this plot for every value of $m_t$ and $M_H$, so one can see how observables vary with $m_t$ (not much, now that $m_t$ is so well measured) and $M_H$. Moreover, one can spot at a glance if a particular measurement is at variance with others; this can signify either physics outside the purview of the two-dimensional plot, or systematic experimental error.
The corrections which fall naturally into the two-dimensional description are those known as oblique corrections. The name stems from the fact that they do not directly affect the fermions participating in the processes of interest, but appear as vacuum polarization corrections in gauge boson propagators. In that sense processes which are sensitive to oblique corrections have a broad reach for discovering new physics, since they do not rely on a new particle's having to couple directly to the external fermion in question.
The oblique correction first identified by Veltman (1977), still the most important, is that due to top quarks in $W$ and $Z$ boson propagators. The large splitting between the top and bottom quarks' masses violates a custodial SU(2) symmetry (Sikivie et al. 1980) responsible for preserving the tree-level relation $M_W = M_Z\cos\theta$ mentioned in the previous Section. As a result, an effect is generated which is equivalent to having a Higgs triplet vacuum expectation value.
For the photon, gauge invariance prohibits contributions quadratic in fermion masses, but for the $W$ and $Z$, no such prohibition applies. The vacuum polarization diagrams with $W^+ \to t\bar b \to W^+$ and $Z \to (t\bar t, b\bar b) \to Z$ lead to a modification, by a factor $\rho$, of the relation between $G_F$, coupling constants, and $M_Z$ for neutral-current exchanges; the $Z$ mass is now related to the weak mixing angle through $\rho$, where we have omitted some small terms logarithmic in $m_t$. A precise measurement of $M_Z$ now specifies $\theta$ only if $m_t$ is known, so $\theta = \theta(m_t)$. The factor of $\rho$ in (179) will multiply every neutral-current four-fermion interaction in the electroweak theory. Thus, for example, cross sections for charge-preserving interactions of neutrinos with matter will be proportional to $\rho^2$, while parity-violating neutral-current amplitudes (to be discussed below) will be proportional to $\rho$. Partial decay widths of the $Z$, since they involve the combination $g^2 + g'^2$, will be proportional to $\rho$. A large part of the 0.7% correction mentioned previously is due to $\rho > 1$. The observed values of $M_W/M_Z = \rho^{1/2}\cos\theta$ and $\sin^2\theta$ also are much more compatible with each other for a value of $\rho$ exceeding 1 by about a percent.
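The leading effect (a standard result, reconstructed here) is
$$ \rho = \frac{1}{1-\Delta\rho} \simeq 1 + \Delta\rho, \qquad \Delta\rho \simeq \frac{3G_F m_t^2}{8\sqrt2\,\pi^2} \simeq 0.0095 $$
for $m_t = 174.3$ GeV, accounting for the roughly one-percent excess of $\rho$ over unity mentioned above.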
The W and Z propagators are also affected by virtual Higgs-boson states due to the couplings (174). Small corrections, logarithmic in M H , affect all the observables, but notably ρ.
In order to display the dependence of electroweak observables on such quantities as the top quark and Higgs boson masses $m_t$ and $M_H$, we choose to expand the observables about "nominal" values calculated by Marciano (2000) for specific $m_t$ and $M_H$. We thereby bypass a discussion of "direct" radiative corrections which are independent of $m_t$, $M_H$, and new particles. We isolate the dependence on $m_t$, $M_H$, and new physics arising from "oblique" corrections associated with loops in the $W$ and $Z$ propagators.
For $m_t = 174.3$ GeV, $M_H = 100$ GeV, the measured value of $M_Z$ leads to a nominal expected value of $\sin^2\theta_{\rm eff} = 0.2314$. In what follows we shall interpret the effective value of $\sin^2\theta$ as that measured via leptonic vector and axial-vector couplings. Defining the parameter $T$ by $\Delta\rho \equiv \alpha T$, we find that the weak mixing angle $\theta$, the $W$ mass, and other electroweak observables now depend on $m_t$ and $M_H$.
The weak charge-changing and neutral-current interactions are probed under a number of different conditions, corresponding to different values of momentum transfer. For example, muon decay occurs at momentum transfers small with respect to $M_W$, while the decay of a $Z$ into fermion-antifermion pairs imparts a momentum of nearly $M_Z/2$ to each member of the pair. Small "oblique" corrections, logarithmic in $m_t$ and $M_H$, arise from contributions of new particles to the photon, $W$, and $Z$ propagators. Other (smaller) "direct" radiative corrections are important in calculating actual values of observables.
We may then replace the lowest-order relations between $G_F$, couplings, and masses by corrected ones in which $S_W$ and $S_Z$ are coefficients representing variation with momentum transfer. Together with $T$, they express a wide variety of electroweak observables in terms of quantities sensitive to new physics. (The presence of such corrections was noted quite early by Veltman 1977.) The Peskin and Takeuchi (1990) parameters express the "new physics" effects in terms of deviations from nominal values of the top quark and Higgs boson masses: we have the expression (181) for $T$, while contributions of Higgs bosons and of possible new fermions $U$ and $D$ with electromagnetic charges $Q_U$ and $Q_D$ to $S_W$ and $S_Z$, in a leading-logarithm approximation, are given by (183) and (184) (Kennedy and Langacker 1990). The expressions for $S_W$ and $S_Z$ are written for doublets of fermions with $N_C$ colors and $m_U \ge m_D \gg m_Z$, while $Q \equiv (Q_U + Q_D)/2$. The sums are taken over all doublets of new fermions. In the limit $m_U = m_D$, one has equal contributions to $S_W$ and $S_Z$. For a single Higgs boson and a single heavy top quark, Eqs. (183) and (184) simplify, although the leading-logarithm expressions are of limited validity for $M_H$ and $m_t$ far from their nominal values. (We shall plot contours of $S$ and $T$ for fixed $m_t$ and $M_H$ values without making these approximations.) A degenerate heavy fermion doublet with $N_c$ colors thus contributes $\Delta S_Z = \Delta S_W = N_c/6\pi$. For example, in a minimal dynamical symmetry-breaking ("technicolor") scheme, with a single doublet of $N_c = 4$ fermions, one will have $\Delta S = 2/3\pi \simeq 0.2$. This will turn out to be marginally acceptable, while many non-minimal schemes, with large numbers of doublets, will be seen to be ruled out (see, e.g., Swartz 2001). We shall present a "cartoon" version after discussing possible extensions of the Higgs system. Meanwhile we note briefly a topic which will not enter that discussion.
Multiple Higgs doublets and Higgs triplets
There are several reasons for introducing a more complicated Higgs boson spectrum. Motivations for introducing separate Higgs doublets for u-type and d-type quarks include higher symmetries following from attempts to unify the strong and electroweak interactions, and supersymmetry. We examine the simplest model with more than one Higgs doublet, in which a single doublet couples to d-type quarks and charged leptons, and a different doublet couples to u-type quarks. This model turns out to naturally avoid flavor-changing neutral currents associated with Higgs exchange (Glashow and Weinberg 1977).
Let us denote by $\phi_u$ the Higgs boson coupling to u-type quarks and by $\phi_d$ the boson coupling to d-type quarks and charged leptons, with vacuum expectation values $v_u/\sqrt2$ and $v_d/\sqrt2$. The contribution of $\phi_u$ and $\phi_d$ to the $W$ and $Z$ masses follows as before: we find the same $W^3_\mu$-$B_\mu$ mixing pattern, and in fact this pattern would remain the same no matter how many Higgs doublets were introduced. The parameters $v_u$ and $v_d$ may be related to the quantity $v = 246$ GeV introduced earlier by $v_u^2 + v_d^2 = v^2$, whereupon all previous expressions for $M_W$ and $M_Z$ remain valid. One would have $v^2 = \sum_i v_i^2$ for any number of doublets.
The quark and lepton couplings to Higgs doublets are enhanced if there are multiple doublets. Since $m_q = g_Y v_q/\sqrt2$ ($q = u$ or $d$) and $v_q < v$, one has larger Yukawa couplings than in the standard single-Higgs model. A more radical consequence of multiple doublets in the SU(2)$_L$ gauge theory, however, is that there are not enough gauge bosons to "eat" all the scalar fields. In a two-doublet model, five "uneaten" scalars remain: two charged and three neutral. The phenomenology of these is well described by Gunion et al. (1990).
The prediction $M_Z = M_W/\cos\theta$ is specific to the assumption that only Higgs doublets of SU(2)$_L$ exist. [SU(2)$_L$ singlets which are neutral also have $Y = 0$, and do not affect $W$ and $Z$ masses.] If triplets or higher representations of SU(2) exist, the situation is changed. We shall examine two cases of triplets: a complex triplet with charges (++, +, 0) and a real triplet with charges (+, 0, $-$).
Consider first a complex triplet. Since $Q = I_{3L} + Y/2$, one must have $Y = 2$ for this triplet. In calculating $|D_\mu\Phi|^2$ we will need the triplet representation for weak isospin. If the neutral member acquires a vacuum expectation value $\langle\Phi^0\rangle = V_{1,-1}/\sqrt2$, the same combination of $W^3$ and $B$ gets a mass as in the case of one or more Higgs doublets, simply because we assumed that it was a neutral Higgs field which acquired a vacuum expectation value. Electromagnetic gauge invariance remains valid; the photon does not acquire a mass. However, the ratio of $W$ and $Z$ masses is altered: in the presence of doublets and this type of triplet, the ratio $\rho \equiv (M_W/M_Z\cos\theta)^2$ is no longer 1. This type of Higgs boson thus leads to $\rho < 1$.
A real triplet with charges (+, 0, $-$) is instead characterized by $Y = 0$. If we let $\langle\Phi^0\rangle = V_{1,0}/\sqrt2$, we find, by an entirely similar calculation, that this type of Higgs boson leads to $\rho > 1$.
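Both triplet results follow from the general tree-level formula (standard, reconstructed here) for neutral vacuum expectation values $v_i$ in representations of weak isospin $T_i$ with third component $T_{3i}$:
$$ \rho = \frac{\sum_i \left[T_i(T_i+1) - T_{3i}^2\right] v_i^2}{\sum_i 2\,T_{3i}^2\, v_i^2} . $$
A doublet ($T = 1/2$, $T_3 = \pm1/2$) gives 1; the complex $Y = 2$ triplet (neutral member at $T_3 = -1$) adds twice as much to the denominator as to the numerator, giving $\rho < 1$; the real $Y = 0$ triplet (neutral member at $T_3 = 0$) adds only to the numerator, giving $\rho > 1$.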
We now examine a simple set of electroweak data (Rosner 2001), updating an earlier analysis (Rosner 1999) which may be consulted for further references. (See also Peskin and Wells 2001.) We omit some data which provide similar information but are less constraining. Thus, we take only the observed values of $M_W$ as measured at the Fermilab Tevatron and LEP-II, the leptonic width of the $Z$, and the value of $\sin^2\theta_{\rm eff}$ as measured in various asymmetry experiments at the $Z$ pole in $e^+e^-$ collisions. We also include parity violation in atoms, stemming from the interference of $Z$ and photon exchanges between the electrons and the nucleus. The most precise constraint at present arises from the measurement of the weak charge (the coherent vector coupling of the $Z$ to the nucleus), $Q_W = \rho(Z - N - 4Z\sin^2\theta)$, in atomic cesium. The prediction $Q_W({\rm Cs}) = -73.19 \pm 0.13$ is insensitive to standard-model parameters once $M_Z$ is specified; discrepancies are good indications of new physics.
The inputs, their nominal values for $m_t = 174.3$ GeV and $M_H = 100$ GeV, and their dependences on $S$ and $T$ are shown in Table 6. We do not constrain the top quark mass; we display its effect on $S$ and $T$ explicitly. Each observable specifies a pair of parallel lines in the $S$-$T$ plane. The leptonic width mainly constrains $T$; $\sin^2\theta_{\rm eff}$ provides a good constraint on $S$ with some $T$-dependence; and $M_W$ lies in between. Atomic parity violation experiments constrain $S$ with almost no $T$ dependence. Although the errors on $S$ they entail are too large to have much impact, we include them for illustrative purposes. Since the slopes associated with the constraints are very different, the resulting allowed region is an ellipse, shown in Figure 13. [Note added: Milstein and Sushkov (2001) have noted that a correction due to the strong nuclear field changes the central value of $Q_W({\rm Cs})$ in Table 6 to $\simeq -72.2$; see also Dzuba et al. (2001).] In the standard model, the combined constraints of electroweak observables such as those in Table 6 and the top quark mass favor a very light Higgs boson, with most analyses favoring a value of $M_H$ so low that the Higgs boson should already have been discovered. The efficacy of a small amount of triplet symmetry breaking has recently been stressed in a nice paper by Forshaw et al. (2001). It is also implied in the discussions of Dobrescu and Hill (1998), Collins et al. (2000), He et al. (2001), and Peskin (2001).
The standard model prediction for $S$ and $T$ curves down quite sharply in $T$ as $M_H$ is increased, quickly departing from the region allowed by the fit to electroweak data.
[Table 6: Electroweak observables described in the fit. References for the atomic physics experiment and theory are given by Rosner (2001).]
[Figure 13: Fit in the $S$-$T$ plane to the observables of Table 6. Details are given in the text.]
(Useful analytic expressions for the contribution of a Higgs boson to $S$ and $T$ are given by Forshaw et al. 2001.) However, if a small amount of triplet symmetry breaking is permitted, the agreement with the electroweak fit can be restored. As an example, a value of $V_{1,0}/v = 0.03$ permits satisfactory agreement even for $M_H = 1$ TeV, as shown by the vertical line in the Figure.
Supersymmetry, technicolor, and alternatives
What could lie beyond the standard model? The odds-on favorite among most theorists is supersymmetry, an extremely beautiful idea which may or may not be realized at the electroweak scale, but which almost certainly plays a role at the Planck scale at which space and time first acquire their meaning.
The simplest illustration of supersymmetry (in one time and no space dimensions!) goes back to Darboux in 1882, who factored second-order differential operators into the product of two first-order operators. Dirac's famous treatment of the harmonic oscillator, writing its Hamiltonian as $H = \hbar\omega(a^\dagger a + \tfrac12)$, is an example of this procedure, which was generalized by Schrödinger in 1941 and by Infeld and Hull in 1951. Some of this literature is reviewed by Kwong and Rosner (1986). The Hamiltonian is the generator of time translations, so this form of supersymmetry essentially amounts to saying that a time translation can be expressed as a composite of more fundamental operations.
Modern supersymmetry envisions both spatial and time translations as belonging to a super-algebra. The Lorentz group is isomorphic to SU(2) ⊗ SU(2) (with factors of i thrown in to account for the Minkowski metric); under this group space and time translations transform as (1/2,1/2). The supercharges transform as (1/2,0) and (0,1/2), clearly more fundamental objects.
Electroweak-scale supersymmetry is motivated by several main points. You will hear further details in this lecture series from Abel (2001).
1. In any gauge theory beyond the standard SU(3)$_{\rm color}$ $\otimes$ SU(2)$_L$ $\otimes$ U(1), if the scale $\Lambda$ of new physics is very high, this scale tends to make its way into the Higgs sector through loop diagrams, leading to quadratic contributions $\sim g^2\Lambda^2$ to the Higgs boson mass. Unless something cancels these contributions, one has to fine-tune counterterms in the Lagrangian to exquisite accuracy, at each order of perturbation theory. This is known as the "hierarchy problem."
2. The very nature of a $\lambda(\phi^\dagger\phi)^2$ term in the Lagrangian is problematic when considered from the standpoint of scale changes. This is known as the "triviality problem."
3. In the simplest theory by Georgi and Glashow (1974) unifying the strong and electroweak interactions, based on the gauge group SU(5), the coupling constants approach one another at high scale, but there is some "astigmatism": in a non-supersymmetric model, they do not all come together at the same scale. This is known as the "unification problem." It is cured in the simplest supersymmetric model, as a result of the different particle content in loop diagrams contributing to the running of the coupling constants. The model has a problem, however, in predicting too large a rate for $p \to K^+\bar\nu$ (Murayama and Pierce 2001, Peskin 2001).
An alternative scheme for solving these problems, which has had a much poorer time constructing any sort of self-consistent theory, is technicolor: the notion that the Higgs boson is a bound state of more fundamental constituents, in the same way that the pion is really a bound state of quarks. This mechanism works beautifully when applied to the generation of gauge boson masses, but fails spectacularly (and requires epicyclic patches!) when one attempts to describe fermion masses. The basic idea of technicolor is that there is no hierarchy problem because there is no hierarchy; a wealth of TeV-scale new physics awaits discovery in the simplest version (applied to gauge bosons) of the theory.
A further, even more radical notion, is that both Higgs bosons and fermions are composite. This scheme so far has run aground on the difficulty of constructing quarks and leptons, keeping their masses light by nearly preserving a chiral symmetry ('t Hooft 1980). One can make guesses as to quantum numbers of constituents (Rosner and Soper 1992), but a sensible dynamics remains completely elusive.
Fermion masses
We finessed the question of the origin of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. It comes about in the following way.
Here $g$ is the weak SU(2)$_L$ coupling constant, and $\psi_L \equiv (1 - \gamma_5)\psi/2$ is the left-handed projection of the fermion field $\psi = U$ or $D$.
Quark mixings arise because mass terms in the Lagrangian are permitted to connect weak eigenstates with one another. Thus, the mass matrices $M_{U,D}$ may contain off-diagonal terms. One may diagonalize these matrices by separate unitary transformations on left-handed and right-handed quark fields. Using the relation between weak eigenstates and mass eigenstates, where $U \equiv (u, c, t)$ and $D \equiv (d, s, b)$ are the mass eigenstates, one finds $V \equiv L_U^\dagger L_D$. The matrix $V$ is just the Cabibbo-Kobayashi-Maskawa matrix. By construction, it is unitary: $V^\dagger V = VV^\dagger = 1$. It carries no information about $R_U$ or $R_D$. More information would be forthcoming from interactions sensitive to right-handed quarks or from a genuine theory of quark masses.
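In equations (a reconstruction in a common convention): diagonalizing $M_U = L_U\, m_U\, R_U^\dagger$ and $M_D = L_D\, m_D\, R_D^\dagger$ and rotating to mass eigenstates, the charged-current interaction becomes
$$ \bar U'_L\,\gamma^\mu\, D'_L = \bar U_L\,\gamma^\mu\,(L_U^\dagger L_D)\, D_L \equiv \bar U_L\,\gamma^\mu\, V\, D_L , $$
so all the mixing information accessible to the charged weak current resides in $V = L_U^\dagger L_D$.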
Quark mass matrices can yield the observed hierarchy in CKM matrix elements. As an example (Rosenfeld and Rosner 2001), the regularities of quark masses evolved to a common high mass scale can be reproduced by a power-law ansatz in a small parameter $\epsilon$, where $m_3$ denotes the mass eigenvalue of the third-family quark ($t$ or $b$), and $\epsilon \simeq 0.07$ for u-type quarks, $\simeq 0.21$ for d-type quarks. Hierarchical descriptions of this type were first introduced by Froggatt and Nielsen (1979). The present ansatz is closely related to one described by Fritzsch and Xing (1995). This type of mass matrix leads to acceptable values and phases of CKM elements.
The question of neutrino masses and mixings has entered a whole new phase with spectacular results from neutrino observatories such as Super-Kamiokande ("Super-K") in Japan and the Sudbury Neutrino Observatory (SNO) in Canada. These indicate that: 1. Atmospheric muon neutrinos oscillate in vacuum, probably to $\tau$ neutrinos, with near-maximal mixing and a difference in squared mass $\Delta m^2 \simeq 3\times10^{-3}~{\rm eV}^2$.
2. Solar electron neutrinos oscillate, most likely in matter, to some combination of muon and $\tau$ neutrinos. All possible $\Delta m^2$ values are at most about $10^{-4}~{\rm eV}^2$; several ranges of parameters are permitted, with large mixing favored by some analyses.
In addition one experiment, the Liquid Scintillator Neutrino Detector (LSND) at Los Alamos National Laboratory, suggests $\bar\nu_\mu \to \bar\nu_e$ oscillations with $\Delta m^2 \simeq 0.1$ to $1~{\rm eV}^2$, with small mixing. This possibility is difficult to reconcile with the previous two, and a forthcoming experiment at Fermilab (MiniBooNE) is scheduled to check the result. For late news on neutrinos see the Web page maintained by Goodman (2001).
A possible explanation of small neutrino masses (Gell-Mann, Ramond, and Slansky 1979; Yanagida 1979) is that they are Majorana masses of order $m_M = m_D^2/M_M$, where $m_D$ is a typical Dirac mass and $M_M$ is a large Majorana mass acquired by right-handed neutrinos. Such a mass term is invariant under SU(2)$_L$, and hence is completely acceptable in the electroweak theory. The pattern of neutrino Majorana and Dirac masses, and the mixing pattern, is likely to provide us with fascinating clues over the coming years as to the fundamental origin and nature of mass.
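The seesaw structure behind this estimate (standard, reconstructed here) is the mass matrix in the $(\nu_L, \nu_R^c)$ basis,
$$ \mathcal{M}_\nu = \begin{pmatrix} 0 & m_D \\ m_D & M_M \end{pmatrix}, \qquad m_{\rm light} \simeq \frac{m_D^2}{M_M}, \quad m_{\rm heavy} \simeq M_M \quad (m_D \ll M_M); $$
for example, $m_D \sim 100$ GeV and $M_M \sim 10^{14}$ GeV give $m_{\rm light} \sim 0.1$ eV, of the order suggested by the atmospheric $\Delta m^2$.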
Summary
The Standard Model of electroweak and strong interactions has been in place for nearly thirty years, but precise tests have entered a phase that permits glimpses of physics beyond this impressive structure, most likely associated with the yet-to-be-discovered Higgs boson. Studies of mixing between neutral kaons or neutral $B$ mesons, covered by Stone (2001) in these lectures, are attaining impressive accuracy as well, and could yield cracks in the Standard Model at any time. It is time to ask what lies behind the pattern of fermion masses and mixings. This is an input to the Standard Model, characterized by many free parameters, all of which await explanation.
This work was supported in part by the United States Department of Energy through Grant No. DE-FG02-90ER40560.
"year": 2001,
"sha1": "65b5ebd39640e94bff440b84d79ac733cc683910",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "65b5ebd39640e94bff440b84d79ac733cc683910",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Background This study investigated the associations between transfusion of different types of red blood cell (RBC) preparations and kidney allograft outcomes after kidney transplantation (KT) over a 16-year period in Korea using a nationwide population-based cohort. Methods We investigated the reported use of RBCs during hospitalization for KT surgery, rejection, and graft failure status using nationwide data from the National Health Information Database (2002–2017). The associations between the type of perioperative RBC product and transplant outcomes were evaluated among four predefined groups: no RBC transfusion, filtered RBCs, washed RBCs, and packed RBCs (pRBCs). Results A total of 17,754 KT patients was included, among which 8,530 (48.0%) received some type of RBC transfusion. Of the patients who received RBC transfusion, 74.9%, 19.7%, and 5.4% received filtered RBCs, pRBCs, or washed RBCs, respectively. Regardless of the type of RBC products, the proportions of acute rejection and graft failure was significantly greater in patients receiving transfusion (P < 0.001). Cox proportional hazards regression analyses showed that the filtered RBC and pRBC groups were significantly associated with both rejection and graft failure. The washed RBC group also had hazard ratios greater than 1.0 for rejection and graft failure, but the association was not significant. Rejection-free survival of the pRBC group was significantly lower than that of the other groups (P < 0.001, log-rank test), and graft survival for the no RBC transfusion group was significantly greater than in the other groups (P < 0.001, log-rank test). Conclusion Perioperative RBC transfusion was associated with poor graft outcomes. Notably, transfusion of pRBCs significantly increased transplant rejection. Therefore, careful consideration of indications for RBC transfusion and selection of the appropriate type of RBCs is necessary, especially for patients at high risk of rejection or graft failure.
INTRODUCTION
The prevalence of end-stage renal disease (ESRD) in the United States was 2,242 cases per million people in 2018, the highest prevalence in the world. Kidney transplantation (KT) has been the treatment of choice for a minority of patients with ESRD since the 1960s, and at the end of 2018, there were 229,887 patients with a functioning KT in the United States. 1 The prevalence of anemia in people with chronic kidney disease and that in patients with ESRD was 15.4% and 53.4%, respectively, in the United States. 2 Therefore, red blood cell (RBC) transfusion is considered unavoidable during the peri-transplant period due to the high prevalence of anemia and blood loss during surgery. 3,4 Sensitization to human leukocyte antigens (HLAs) expressed on the surface of leukocytes and platelets in blood units can be caused by factors including previous transplant, pregnancy, or blood transfusion. 5,6 Several studies have shown that RBC transfusion is an important cause of alloimmunization and is associated with increased rejection and graft loss. 7-12 Based on these results, it has been suggested that active efforts to minimize blood transfusion are required to prevent HLA sensitization and to improve kidney allograft outcomes. Appropriate management of perioperative RBC transfusion is a critical factor in KT patient outcomes, as more than half of all kidney recipients receive transfusions. The consequences of perioperative transfusion on graft outcomes have been continuously evaluated; 13-15 however, there remain unexplored factors that are potentially relevant to patient prognosis. With respect to the type of RBC unit preparation, the risk of allosensitization in patients with chronic kidney disease is considered to be lower with leuko-reduced RBCs. 9 Methods for leukocyte depletion from RBC products include centrifugal precipitation, filtration, and red cell washing. In the past, centrifugal precipitation and red cell washing were generally used. Nowadays, filtration is used as a convenient and effective way to reduce leukocytes in blood components, achieving residual leukocyte counts of < 5 × 10 6. 16,17 However, in practice, packed RBCs (pRBCs), filtered RBCs, and washed RBCs have been heterogeneously transfused into KT recipients. 14 Moreover, the impact of transfusion of different types of RBC preparations on transplant outcomes is unclear, and to our knowledge, the clinical significance of the type of RBC unit preparation for graft outcome has not been evaluated in existing studies. Therefore, to better answer the practical question of which type of RBC product is suitable for these patients, we investigated the associations between the transfusion of different types of RBC preparations and kidney allograft outcome using a nationwide database linked to the Korean National Health Insurance Service (KNHIS). 18 Thus, the current study aimed to investigate the outcomes of KT in South Korea over a 16-year period using a KNHIS nationwide database and to evaluate the association with poor prognosis after transplantation according to the type of RBC product transfused.
Data source and study subjects
The KNHIS was implemented in 1988 and controls all medical costs among individuals, health care providers, and the government in South Korea. Medical data, including personal information, diagnosis, medical treatment, and demographics of patients, are centralized in the National Health Information Database (NHID) of the KNHIS. 18 The NHID provides de-identified data for research purposes, and we collected detailed patient characteristics of all transplant recipients within this registry. The KNHIS-NHID includes the diagnosis of patients according to the Korean Classification of Diseases codes, which is the Korean version of the International Classification of Diseases (ICD). All insurance claims are classified based on Electronic Data Interchange (EDI) codes.
We extracted data on patients who underwent KT between 2002 and 2017 from the KNHIS-NHID using the specific EDI code (R3280) for KT. Among 18,331 KT patients, we excluded patients who received transfusions of two or more types of RBCs. Finally, we analyzed 17,754 KT cases.
Variable definitions
We investigated the sex, age, type of donor (living or deceased), income level, type of hospital, year of surgery, length of hospital stay, regimen of induction treatment, types of initial immunosuppressant and steroid regimens, and occurrence of acute kidney allograft rejection or graft failure. Since the characteristics of the KNHIS claim data make it difficult to specify the exact time of kidney allograft rejection, acute rejection was defined as any case in which a diagnosis of kidney allograft rejection, as identified by ICD-10 codes T86 and/or T86.1, was recorded during the KT-related hospitalization period. Graft failure was defined as a KT recipient undergoing repeated dialysis for three months or longer during the post-KT follow-up period. 19 The Organ Transplantation Act of South Korea requires a recipient to pay for the cost of donor nephrectomy in the case of living-donor KT, while the government covers the primary cost for organ donation in the case of a deceased donor. Therefore, the donor type could be classified as living when the EDI code for donor nephrectomy, R3272, was charged to a recipient. In addition, among living-donor KT recipients, ABO-incompatible KT was defined as a transplant procedure where ABO antibody tests (EDI code B2080) were performed two or more times during the KT-related hospitalization and plasma exchange (EDI code X2505) was performed concurrently. The proportions of deceased- or living-donor KT recipients in this study were consistent with statistics from the Korean Network for Organ Sharing (KONOS). The codes or criteria used to define variables in building the database have been described in previous studies. 19,20 Because death certificates are automatically reported to the KNHIS, mortality was detected when healthcare coverage by the KNHIS was terminated.
The number and types of RBC products used in the subjects during hospitalization were analyzed. However, the NHID did not provide pre-and post-transplant information separately. Therefore, we investigated the associations between the type of perioperative RBC product received and short-and long-term transplant outcomes. We divided the subjects into four groups based on whether they had undergone RBC transfusion and the type of RBC product transfused. The four groups consisted of patients without RBC transfusion and patients transfused respectively with filtered RBCs, washed RBCs, or pRBCs. One unit of pRBCs contains approximately 200 or 250 mL of blood products separated from 320 or 400 mL, respectively, of donor whole blood. The types of RBCs were identified based on the KNHIS EDI codes for medical procedures regarding the types of RBC products; X2021, X2022, X2031, X2032, X2111, and X2112.
Statistical analyses
The Mann-Whitney U and Kruskal-Wallis tests were used to compare continuous variables among subject groups, and the χ 2 test was used to compare categorical variables. Graft survival and rejection-free survival were calculated using Kaplan-Meier survival analyses. Data were censored at the time of death or at the last available follow-up. Cox's proportional
hazard regression was conducted to construct multivariate models for identifying factors associated with occurrence of acute kidney allograft rejection or chronic kidney allograft failure, and hazard ratios (HRs) for risk factors with 95% confidence intervals were calculated. All statistical analyses were performed using SAS 7.15 (SAS Institute Inc., Cary, NC, USA) and RStudio v1.1.463 (RStudio Inc., Boston, MA, USA), and P values less than 0.05 were considered to be statistically significant.
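For readers who wish to reproduce analyses of this kind, the following is a minimal illustrative sketch (not the authors' code) of the survival methods described above, assuming a pandas DataFrame df with hypothetical columns 'months' (follow-up time), 'event' (0/1 graft-failure flag), and 'rbc_group' (no transfusion, filtered, washed, or packed):

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

# Kaplan-Meier graft survival curve for each transfusion group
kmf = KaplanMeierFitter()
for name, grp in df.groupby('rbc_group'):
    kmf.fit(grp['months'], event_observed=grp['event'], label=str(name))
    print(name, kmf.median_survival_time_)

# Log-rank test across all four groups
result = multivariate_logrank_test(df['months'], df['rbc_group'], df['event'])
print('log-rank P =', result.p_value)

# Cox proportional hazards model; the categorical group is one-hot encoded,
# with the dropped level serving as the reference category
X = pd.get_dummies(df[['months', 'event', 'rbc_group']],
                   columns=['rbc_group'], drop_first=True)
cph = CoxPHFitter()
cph.fit(X, duration_col='months', event_col='event')
cph.print_summary()  # hazard ratios with 95% confidence intervals
```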
Ethics statement
This was a retrospective cohort study, and the protocol was implemented after approval from the Institutional Review Board (IRB) of National Health Insurance Service Ilsan Hospital (Approval No. NHIMC 2021-09-020). Informed consent was waived by the IRB. The administration number of the National Health Insurance Sharing Service was NHIS-2022-1-101 (REQ202104307-004).
Patients and baseline characteristics
We reviewed a total of 17,754 KT recipients included in the KNHIS-NHID between 2002 and 2017. The median post-KT follow-up period was 66 months (mean 73.1 months; range, 0 to 194 months; 1st to 3rd quartiles, 28 to 109 months). The proportion of males was greater in the group without RBC transfusion. The most common age group among KT recipients was 40-59 years old, representing 56.9% to 64.5% of patients depending on the type of RBC product received (Table 1). Significant differences among the four groups according to the types of RBC product administered to KT recipients were found for sex, age group, hospital type, year of surgery, donor type, history of acute rejection, length of hospital stay, and medical costs for hospitalization (P < 0.001). Compared to general hospitals, tertiary hospitals more frequently did not transfuse any RBC products and, when needed, tended to use filtered RBCs rather than other types. Washed RBCs were more frequently transfused in cases of ABO-incompatible living-donor KT. Acute rejection was more frequently diagnosed in KT recipients transfused with pRBCs, whereas it occurred less frequently in patients without RBC transfusion. In addition, patients who did not receive RBC transfusion had a shorter length of hospital stay (Table 1).
RBC transfusion depended on the type of preparation
All patients were divided into four groups depending on the type of RBC preparation received. A total of 9,224 patients did not receive RBC transfusion during the perioperative period, and 8,530 (48.0%) of 17,754 KT recipients were transfused with some type of RBC product during the perioperative period. Among these 8,530 patients, 74.9% (n = 6,392) received filtered RBCs, 19.7% (n = 1,679) received pRBCs, and 5.4% (n = 459) received washed RBCs. Transfused patients received a median of 2 RBC units during the perioperative period, regardless of the type of RBC preparation (Table 1).
A total of 30,889 RBC units was transfused into patients perioperatively in 77 hospitals between 2002 and 2017. Filtered RBCs were most frequently used, followed by pRBCs. Washed RBCs were used in 27 of the 77 institutions from 2002 to 2017. Regardless of the type of RBC preparation transfused, the average number of RBC units used per patient and per hospital was 3.6 and 406.4 units, respectively. The median transfusion incidence among hospitals was 50.6% (1st to 3rd quartiles: 37.7% to 72.0%; Table 2).
Acute rejection and rejection-free survival after KT
The incidence rate of acute rejection in KT patients during hospitalization over the study period was 8.5% in this study. The proportion of patients with acute rejection was significantly greater in women; in the 20-39 and 40-59 age groups; in patients with an earlier year of transplantation, receiving an allograft from a deceased donor, or treated with anti-thymocyte globulin (ATG); and in those receiving cyclosporine rather than tacrolimus or dexa/betamethasone or fludro/hydrocortisone as initial immunosuppressants (Supplementary Table 1). Types of RBC preparations were transfused inconsistently to KT recipients depending on the institution. We investigated the association between transfusion of different types of RBC preparations and acute rejection. Regardless of RBC product (filtered RBCs, washed RBCs, or pRBCs), the proportion of patients with acute rejection was significantly greater among patients receiving transfusion (P < 0.001) (Supplementary Table 1).
In the multivariate Cox models (Table 3), the HRs for rejection in the filtered RBC and pRBC groups compared with KT recipients without RBC transfusion were 1.192 and 1.359, respectively. Although it was not statistically significant, the washed RBC group also had an increased HR of 1.208 compared with the no RBC transfusion group (Table 3).
In Kaplan-Meier analyses of the four groups (no transfusion, filtered RBCs, washed RBCs, and pRBCs), the log-rank test indicated poorer rejection-free survival for KT patients transfused with pRBCs than for the other groups (no transfusion, filtered RBCs, and washed RBCs) during the post-KT follow-up period (P < 0.001) (Fig. 1).
In the Cox regression model, an inverse association with graft failure was found for females, with an HR of 0.816, and for patients who received a KT from a living ABO-compatible donor (HR, 0.820). The most important risk factor for graft failure was a recent year of transplantation, with an HR of 9.393. The HRs for acute rejection history; CMV infection within 1 year after KT; and perioperative treatments such as ATG, basiliximab, and rituximab ranged from 1.357 to 1.957 (Table 4).
The proportion of patients with graft failure was significantly greater in those receiving transfusion (P < 0.001) (Supplementary Table 2). The HRs for graft failure in filtered RBC and pRBC groups compared with those without RBC transfusion were 1.240 and 1.363, respectively. The washed RBC group showed an HR of 1.167 for graft failure, but this finding was not statistically significant (P = 0.452; Table 4). Patients who did not receive perioperative RBC transfusion had significantly greater overall graft survival than the patients transfused with filtered RBCs, washed RBCs, or pRBCs (P < 0.001 for each comparison) according to Kaplan-Meier analysis. Graft survival was not significantly different among the groups transfused with different RBC products (Fig. 2).
DISCUSSION
The results of this study using nationwide data from the KNHIS-NHID (2002-2017) indicate differences in transplant outcomes among Korean KT recipients according to the type of RBC product transfused during the perioperative period. Given the high prevalence of anemia and intraoperative bleeding during the peri-transplant period, RBC transfusion is often unavoidable. 3 Generally, in order to reduce the risk of allosensitization, leuko-reduced RBCs are preferable for KT recipients 9 ; however, in practice, several additional types of RBC preparations including pRBCs, filtered RBCs, and washed RBCs have been transfused into Korean patients. 19 Of the 8,530 patients who received RBC transfusion, 74.9% received filtered RBCs, 19.7% received pRBCs, and 5.4% received washed RBCs. Among the 77 institutions included in this study, 64 and 27, respectively, used pRBCs and washed RBCs (Table 2). In this study, the percentage of patients with both acute rejection and graft failure was significantly greater among those receiving RBC transfusion (P < 0.001). Also, regardless of the perioperatively transfused RBC product, similar patterns of association were found in patients with rejection or graft failure. The filtered RBC and pRBC groups were significantly associated with both rejection during follow-up after KT and graft failure in the long term. The washed RBC group showed HRs of 1.208 and 1.167 for rejection and graft failure, respectively, although these were not statistically significant (Tables 3 and 4); this finding may have been due to insufficient statistical power, since the washed RBC group accounted for only a small fraction (2.6%) of the total cases in this study. Cox's multivariate models indicated that transfusion of pRBCs was associated with the worst transplant outcomes among the four groups in this study.
[Fig. 2. Kidney allograft survival according to the type of RBCs transfused perioperatively at kidney transplantation. The patients who did not receive any RBC product perioperatively showed longer overall graft survival than the groups transfused with filtered RBCs, washed RBCs, or packed RBCs (P < 0.001 for each comparison). Graft survival was not significantly different among the groups transfused with different types of RBC preparations. RBC = red blood cell.]
Prior reports have described poor outcomes associated with transfusion regardless of RBC product type. 15,21-24 We assessed the association between the type of transfused RBC products and acute rejection using a nationwide population-based database. In the survival analyses, rejection-free survival was significantly lower in the pRBC group than in the no RBC transfusion, filtered RBC, or washed RBC groups (Fig. 1). The use of pRBCs is declining, as they have been increasingly replaced by filtered RBCs, but pRBCs were still used in 6.7% of KT recipients between 2014 and 2017 in Korea. 14 Therefore, when perioperative RBC transfusion is necessary in KT recipients, transfusion of leuko-reduced RBCs should be recommended to lower the risk of kidney allograft rejection.
We also found graft survival to be significantly better in patients without RBC transfusion than in the cases transfused with any type of RBCs. Few prior studies have reported the effects of perioperative blood transfusions on transplantation outcomes. 15,21,24 In a study using data from the national database of the French transfusion service, Gaiffe et al. 15 reported that both pre-and early post-transplant transfusions were associated with increased transplant failure. Our data also showed that KT recipients without RBC transfusion in the perioperative period had better graft survival. Although perioperative RBC transfusion was significantly associated with poor outcomes, it cannot be concluded that transfusion of RBCs is the direct cause of graft failure, since patients who were in poorer clinical condition at the time of KT are more likely to receive RBC transfusion.
On the other hand, there were no clear associations between a specific type of RBC product and graft survival. Exposure to non-self HLA through RBC transfusions may lead to the development of donor-specific HLA antibodies (DSAs) against the kidney allograft donor. Avoiding transfusions or using HLA-matched blood could reduce graft failure. 22,23 Therefore, further evaluation of additional data, such as utilization of HLA-matched blood and DSA results, is required to more thoroughly assess the associations between transfusion with specific types of RBCs and long-term KT outcomes, including graft failure.
Washed RBCs have been considered for reducing exogenous anti-A or anti-B antibodies and HLA sensitization in ABO-incompatible transplantation. 25, 26 Washed RBCs were transfused into KT recipients at 27 of the 77 institutions in our nationwide cohort. However, there was insufficient evidence to support the effectiveness of the use of washed RBCs in KT recipients. In addition, Aston et al. reported that washed RBCs did not further reduce patient HLA sensitization over the use of filtered RBCs, 27 and our findings showed that transfusion with washed RBCs did not lead to better graft outcomes than that with filtered RBCs, although this finding was not statistically significant (Fig. 2). Therefore, in consideration of the clinical efficacy, risk of infection due to contamination, and labor and time required to manufacture washed RBCs, our findings do not support the use of washed RBCs for KT recipients.
This study analyzed KNHIS-NHID data from a 16-year period to determine the associations between the type of RBC preparation transfused and KT outcomes among 17,754 patients who underwent KT for the first time. Due to the limitations of the data related to health insurance claims, we were not able to distinguish between pre- and post-operative RBC transfusions, and we were also unable to obtain detailed clinical information about the KT patients, such as the exact timing of acute rejection occurrence and results of laboratory tests which are not included in the national database. Many factors would influence KT outcomes in the pre- and post-transplantation period, such as renal function, hemoglobin level, allosensitization, pre-existing diseases, and case volume of centers. 13,28-32 Unfortunately, these data could not be analyzed in this study because they have not been collected and organized in the KNHIS-NHID. The impact of post-transplantation anemia and the clinical significance of de novo DSA on KT outcomes have also been studied. 12,29,33,34 However, the relationship between these factors and types of transfused RBCs has not been sufficiently studied. Further research with comprehensive clinical data from each medical institution would be necessary.
In addition, we used an operational definition to determine whether the KT recipient and donor were ABO-compatible and whether the donor type was living or deceased. Despite these shortcomings, our data have identified an association between RBC transfusion and short- and long-term graft outcomes among Korean patients who underwent KT during a recent 16-year period.
Perioperative RBC transfusion was associated with an increased risk of kidney allograft rejection and long-term chronic graft failure. Notably, the transfusion of pRBC preparations increased the likelihood of rejection. Therefore, careful consideration of the indications for RBC transfusion and selection of the appropriate type of RBC is necessary, especially for patients at high risk of rejection or graft failure. In addition, our data may support future revision of guidelines for clinicians regarding RBC transfusion in KT recipients and the development of computerized order entry system alerts when ordering pRBCs for KT recipients. Supplementary Table 1. Characteristics of patients with or without acute rejection during the hospitalization period for kidney transplantation. | 2023-07-12T16:06:49.557Z | 2023-06-08T00:00:00.000 | {
"year": 2023,
"sha1": "da0299e0bfda5f6bd4f399228299d20979e9891c",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3346/jkms.2023.38.e212",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7f53ea4cc8b5d327f8b7f14a50408fec892bd872",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14427756 | pes2o/s2orc | v3-fos-license | Ploidy Distribution of the Harmful Bloom Forming Macroalgae Ulva spp. in Narragansett Bay, Rhode Island, USA, Using Flow Cytometry Methods
Macroalgal blooms occur worldwide and have the potential to cause severe ecological and economic damage. Narragansett Bay, RI is a eutrophic system that experiences summer macroalgal blooms composed mostly of Ulva compressa and Ulva rigida, which have biphasic life cycles with separate haploid and diploid phases. In this study, we used flow cytometry to assess ploidy levels of U. compressa and U. rigida populations from five sites in Narragansett Bay, RI, USA, to assess the relative contribution of both phases to bloom formation. Both haploid gametophytes and diploid sporophytes were present for both species. Sites ranged from a relative overabundance of gametophytes to a relative overabundance of sporophytes, compared to the null model prediction of √2 gametophytes: 1 sporophyte. We found significant differences in cell area between ploidy levels for each species, with sporophyte cells significantly larger than gametophyte cells in U. compressa and U. rigida. We found no differences in relative growth rate between ploidy levels for each species. Our results indicate the presence of both phases of each of the two dominant bloom forming species throughout the bloom season, and represent one of the first studies of in situ Ulva life cycle dynamics.
Introduction
Macroalgal blooms typically consist of large accumulations of ephemeral macroalgal biomass. These blooms occur worldwide, often in shallow areas with relatively low water mixing that are affected by coastal eutrophication, and they have the potential to cause severe ecological and economic damage [1][2][3]. The largest documented bloom on record occurred four weeks before the 2008 Beijing Olympics, an estimated 20 million tons of Ulva prolifera in the Yellow Sea, China. Macroalgal blooms also recur in the eutrophic system of Narragansett Bay, Rhode Island [41,42]. Macroalgal densities (composed mostly of Ulva) peak in the summertime and vary significantly across sites, seasons, and years [43][44][45].
Our research focuses on four central questions regarding the life cycles and biology of U. compressa and U. rigida. Firstly, what is the relative abundance of sporophytes and gametophytes of both species? Secondly, how do these relative abundances correlate with physical and biological factors? Thirdly, do the phases have different growth rates, and lastly, do the phases have cells of different sizes? We interpret our data in the context of macroalgal bloom dynamics and the impacts of environmental variables in structuring bloom formation.
Collection of Ulva
We collected Ulva spp. monthly from June to October 2013 at several publicly accessible bloom-forming sites in Narragansett Bay, RI, including Chepiwanoxet, Sandy Point, Oakland Beach, Oakland Beach Cove, and Warwick City Park.
Fig 1. Isomorphic biphasic life cycle of Ulva. Ulva cycles between two morphologically similar multicellular adult phases, a haploid gametophyte and a diploid sporophyte. Diploid sporophytes produce haploid zoospores that develop into gametophytes. Haploid gametophytes produce haploid gametes. When a "+" and "-" gamete fuse, a zygote is formed, which develops into a diploid sporophyte.
We chose these sites to represent a range of typical Ulva spp. bloom intensity, with Oakland Beach Cove and Warwick City Park as high bloom sites, and Chepiwanoxet, Sandy Point, and Oakland Beach as low bloom sites (Thornber, unpublished data). At each site, on each sampling date, we haphazardly collected individuals by hand from the shallow subtidal zone, placed them in a plastic bag, and brought them back to the lab. We selected a minimum of 16 and a maximum of 40 individuals on each sampling date. Later, we identified U. compressa and U. rigida to the species level by microscopic examination and only used individuals with clear cellular characteristics based on the current molecular analyses of Ulva in Narragansett Bay [45]. A recent study by Mao et al. 2014 discovered the presence of U. laetevirens in Long Island Sound [46]. Since there are morphological similarities between U. rigida and U. laetevirens, we recognize the potential for species misidentification, however slight, in our study. Overall, we collected and analyzed 282 total Ulva individuals: 150 U. compressa and 132 U. rigida (S1 Table). Both species were collected at all sites, with a minimum of 10 individuals of each species at each site. Due to the nature of sampling and the length of time necessary for preparing flow cytometry samples (which limited our ability to collect larger sample sizes), we present and analyze our data here in terms of the overall relative abundance of each Ulva species during the peak bloom-forming season at each site. However, we use collection date and month as covariates in building our logistic regression models for predicting the relative abundance of each phase (see Statistical Analysis section). We used sea surface temperature and sea surface salinity data for Greenwich Bay (Site F5) collected daily by the Rhode Island Department of Environmental Management Bay Assessment and Response Team (http://www.narrbay.org/d_projects/buoy/buoydata.htm; S2 Table).
We also determined Ulva biomass from monthly subtidal surveys of the same sites, following the protocol in Guidone [44]. Briefly, at each site, we collected all algae in each of the 0.16 m² subtidal quadrats placed 1 m apart along a transect line. All plots were < 2 m deep at mean lower low water (S1 Table).
Prior to thallus destruction for flow cytometry, we took a microscopic photograph at 400X of each individual that was analyzed for ploidy content. Using ImageJ (www.nih.gov), we created an overlying grid on each microscopic photograph, and measured the area of the exposed surface of the first ten cells that were at grid intersection points to assess cell size differences between phases (S3 Table). We examined the upper cell layer, as U. compressa and U. rigida are each two cells thick.
Flow Cytometry and Ploidy Analysis
We used flow cytometry to determine the relative abundance of gametophytes and sporophytes in U. rigida and U. compressa. Based on the C-values (haploid genome sizes) of U. compressa (0.13 pg) and U. rigida (0.16 pg) [27], we used the freshwater unicellular alga Chlamydomonas reinhardtii as an external flow cytometry control, with a C-value of 0.12 pg [47]. We specifically selected the cell wall-deficient mutant CC-400 cw15 mt+ as our control (University of Minnesota Chlamydomonas Center, chlamycollection.org). This mutant was selected so that the cells could be easily ruptured, allowing the PI/RNase Staining Buffer to reach the nucleus.
We used an enzyme solution developed specifically for efficient production of Ulva protoplasts [48], along with a modified version of the LB01 nuclear isolation buffer. Instead of the standard 0.1% v/v concentration of Triton X-100, we modified the buffer to contain a 1% v/v concentration to ensure the nuclei were cleanly isolated (15 mM Tris, 2 mM EDTA, 0.5 mM spermine tetrahydrochloride, 80 mM KCl, 20 mM NaCl, 1% v/v Triton X-100, 15 mM β-mercaptoethanol) [26].
We were concerned with successful protoplast isolation and not with the exact number of protoplasts obtained, so we chose a qualitative method for isolating protoplasts [48]. We weighed all Ulva samples to 0.50 g wet weight, rinsed them with raw seawater to remove debris and epiphytes, and then thoroughly scrubbed them manually in 20 μm filtered seawater to remove smaller particles. Ulva samples were chopped with a razor blade in a large (85 mm x 25 mm) plastic Petri dish for one minute, and then the tissue was transferred into a small (55 mm x 15 mm) Petri dish that contained 5 mL of enzyme solution [48].
Protoplasts were released by placing samples on a shaker at 50 rpm in the dark for two hours at room temperature (~21°C), then filtered through a 30 μm nylon mesh into a 5 mL polypropylene tube and spun for five minutes at 120 × g at 4°C. A total of 2 mL of supernatant was then removed and replaced with 2 mL of sterile filtered seawater. Centrifugation with subsequent replacement of fluid was repeated twice, and after the last round of centrifugation, all supernatant was removed and replaced with 1 mL of sterile filtered seawater. We confirmed successful protoplast isolation via microscopic examination at 400X. In preparation for flow cytometry, samples were spun for five minutes at 120 × g at 4°C, the supernatant was removed, and samples were kept refrigerated or on ice.
To liberate the nuclei, we added 1 mL of modified LB01 nuclear buffer kept on ice to each sample, vortexed and tapped the tube occasionally for eight minutes, and then added 0.5 mL of PI/RNase Staining Buffer (BD Science). After five minutes, the samples were run on a BD Influx flow cytometer at the RI EPSCoR Marine Life Sciences Facility on the University of Rhode Island's Narragansett Bay Campus. This machine was optimized for marine applications and is equipped with three lasers (355 nm, 488 nm, and 561 nm). We used a green (532 nm) or a blue (488 nm) laser and quantified fluorescence at 610 nm (20 nm bandwidth) on a linear scale. Since sporophytes have twice the amount of genetic material of gametophytes, sporophytes exhibit twice the fluorescence of gametophytes (Fig 2). To measure the spread of the distribution of the data, we used the coefficient of variation (CV), which is the standard deviation expressed as a percentage of the population mean. The CV was calculated from replicate counts of the same preparation from one thallus; our CV values ranged from 3-8%. This range is due to the small genome size and the predilection of PI to bind to cell wall polysaccharides remaining from the extraction of Ulva protoplasts, which makes obtaining CV values of less than 3% challenging [49,50].
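As a minimal illustration of the two calculations described above — the coefficient of variation from replicate counts and the assignment of ploidy from relative PI fluorescence — the following Python sketch uses illustrative numbers; the 1.5× classification cutoff and all values are assumptions of ours, not parameters from the original analysis.

```python
import numpy as np

def coefficient_of_variation(values):
    """CV: standard deviation expressed as a percentage of the mean."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

def classify_ploidy(peak_fluorescence, haploid_reference):
    """Assign ploidy from a sample's modal PI fluorescence.

    Sporophytes (2C) should fluoresce at roughly twice the gametophyte (1C)
    reference peak; a 1.5x cutoff (our assumption) splits the distributions.
    """
    return "sporophyte" if peak_fluorescence / haploid_reference >= 1.5 else "gametophyte"

# Replicate counts of the same preparation from one thallus (illustrative numbers).
replicates = [10230, 10980, 10410, 11150]
print(f"CV = {coefficient_of_variation(replicates):.1f}%")   # target range: 3-8%
print(classify_ploidy(peak_fluorescence=2.1e4, haploid_reference=1.05e4))
```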
Growth Experiments
We assessed growth rates of gametophytes and sporophytes of U. rigida and U. compressa in outdoor flow-through ambient temperature seawater tanks on the University of Rhode Island's Narragansett Bay campus. We collected healthy Ulva individuals from the shallow subtidal zone in Greenwich Bay in the summer of 2013. In total, we used 90 U. compressa individuals (62 sporophyte and 28 gametophyte) and 61 U. rigida individuals (38 sporophyte and 23 gametophyte) for this analysis. We conducted growth experiments in June, July, and August to assess differences in growth over the peak bloom-forming months (S4 Table).
In the lab, we determined the species identity of each specimen via microscopic examination. We then spun individuals 20 times in a salad spinner prior to separating 1.0 g from the thallus. We placed one 1.0 g Ulva individual in each 2.5 L bucket with mesh sides; after 14 days, all growth experiments concluded and the Ulva was re-weighed. For each month, we had a sample size of at least five (up to a maximum of 36) individuals of each phase of each species, except for U. rigida sporophytes in August, when we had only three individuals. All Ulva were spun 20 times in a salad spinner prior to each weighing on a digital scale to ensure consistent mass, and all individuals were analyzed using flow cytometry for ploidy content (see above).
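The paper reports relative growth rates without stating the formula used; a common choice, assumed here, is the exponential form RGR = (ln Wf − ln Wi)/t. A sketch with illustrative masses:

```python
import math

def relative_growth_rate(initial_mass_g, final_mass_g, days):
    """Relative growth rate in % per day: 100 * (ln(Wf) - ln(Wi)) / t.

    Standard exponential-growth form; the paper does not state its exact
    formula, so this expression is an assumption.
    """
    return 100.0 * (math.log(final_mass_g) - math.log(initial_mass_g)) / days

# A 1.0 g thallus portion re-weighed after the 14-day outdoor incubation.
print(f"RGR = {relative_growth_rate(1.0, 3.2, 14):.2f} % per day")
```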
Statistical Analyses
To assess ploidy ratios in field populations of U. compressa and U. rigida, we used a χ² analysis to determine if the relative abundances of each species, at each site, were significantly different from the null model hypothesis. We then assessed the relationship of several variables (site, species, salinity, temperature, month of collection, date of collection, total Ulva biomass, and total algal biomass) to the ploidy ratio, using a logistic regression model with a binomial response variable (gametophyte vs. sporophyte). We selected the model with the lowest AIC, as it best explained the distribution of gametophytes and sporophytes in Greenwich Bay (S1 Text).
The AIC measures the relative quality of a statistical model, taking into consideration the number of parameters and the information lost with the model. Model coefficient estimate values predict the odds ratio of gametophytes and sporophytes in the population. The model has a binomial response variable with sporophytes chosen as success and gametophytes as failure. Therefore, negative estimate values are associated with higher proportions of gametophytes while positive estimate values are associated with higher proportions of sporophytes.
Based on the results of the logistic regression model described above, we then selected the three significant continuous variables (salinity, salinity two weeks prior to specimen collection, and total Ulva biomass) and analyzed each individually in separate models for representation in graphical models. Data analyses were conducted in R [51,52] and JMP (JMP®, Version 10; SAS Institute Inc., Cary, NC, 1989-2013).
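The analyses above were run in R and JMP; as a hedged sketch of the described workflow — a binomial logistic model with sporophyte coded as success, compared across candidate formulas by AIC — the following Python/statsmodels code uses a hypothetical data file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per thallus; 'sporophyte' is 1/0
# (sporophyte = "success", gametophyte = "failure"), mirroring the paper.
df = pd.read_csv("ulva_ploidy.csv")  # file and columns assumed, not from the paper

candidate_formulas = [
    "sporophyte ~ species + site + salinity + ulva_biomass",
    "sporophyte ~ species + site + salinity",
    "sporophyte ~ species + site + temperature + ulva_biomass",
]

fits = {f: smf.logit(f, data=df).fit(disp=0) for f in candidate_formulas}
best_formula = min(fits, key=lambda f: fits[f].aic)  # lower AIC = better model

print(f"Best model by AIC: {best_formula}")
# Negative coefficients favor gametophytes; positive favor sporophytes.
print(fits[best_formula].params)
```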
Relative growth data were analyzed with a two-way fixed-factor ANOVA to measure differences across ploidy levels and months. Cell sizes were compared between gametophytes and sporophytes of each species using t-tests with unequal variances in JMP. All data were checked for statistical test assumptions and transformed where appropriate prior to analysis.
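The unequal-variance (Welch) t-test on cell areas was performed in JMP; an equivalent call in Python, with illustrative area values rather than the S3 Table data, looks like this:

```python
from scipy import stats

# Cell areas (um^2) for gametophytes vs. sporophytes of one species
# (illustrative values; the real measurements are in S3 Table).
gametophyte_areas = [210.4, 198.7, 225.1, 201.9, 215.3]
sporophyte_areas = [298.6, 310.2, 285.4, 301.8, 322.0]

# equal_var=False requests Welch's t-test (unequal variances), as in the paper.
t_stat, p_value = stats.ttest_ind(gametophyte_areas, sporophyte_areas,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```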
Ethics Statement
All research was conducted on public beaches in Rhode Island. No specific permits were obtained for this research, as the Rhode Island state constitution guarantees its citizens the right to collect seaweed from public beaches [53]. The study did not involve any endangered or protected species or any protected locations.
Ploidy
We found both gametophytes and sporophytes of each species present at each of the sampling sites (S1 Table). There were significant differences among the relative ploidy levels at each site (Fig 3), compared to the null model prediction of √2 gametophytes to 1 sporophyte (χ² likelihood test, Table 1). U. compressa in Oakland Beach Cove (OBC) and Sandy Point (SP) differed from this null prediction with a relative overabundance of sporophytes. U. rigida in Warwick City Park (WCP) and Sandy Point (SP) differed from the null prediction, with a relative overabundance of sporophytes in WCP and a dominance of gametophytes in SP. Based on AIC values, the strongest predictive model for relative ploidy abundance included the variables species, site, salinity at time of sampling, and total Ulva biomass (Table 2; S2 Table; S1 Text), and not temperature, month of sampling, date of sampling, or total algal biomass. While salinity measurements with a time lag of two weeks prior were significant, they were not included in the model with the lowest AIC.
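A minimal sketch of the χ² test against the √2 : 1 null ratio follows, with illustrative counts (the per-site counts are in S1 Table):

```python
import math
from scipy import stats

def test_ploidy_ratio(n_gametophytes, n_sporophytes):
    """Chi-square test of observed phase counts against a sqrt(2):1 null."""
    total = n_gametophytes + n_sporophytes
    p_gam = math.sqrt(2) / (math.sqrt(2) + 1)        # ~0.586 gametophytes
    expected = [total * p_gam, total * (1 - p_gam)]
    return stats.chisquare([n_gametophytes, n_sporophytes], f_exp=expected)

# Illustrative counts for one species at one site.
chi2, p = test_ploidy_ratio(n_gametophytes=12, n_sporophytes=24)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```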
When we analyzed the significant continuous variables individually for their correlation with ploidy ratios, we found that the relative abundance of sporophytes was positively correlated with higher Ulva biomass at the time of collection (Fig 4A; χ²(3) = 16.10, p < 0.01). We found increasing proportions of Ulva sporophytes at higher salinities at the date of sampling for both species (Fig 4B).
Ploidy Distribution
Our data indicate that both phases are present for both U. compressa and U. rigida throughout the peak bloom-forming season, and that relative phase abundance is correlated with both abiotic and biotic factors. We found high variability in ploidy ratio among sites, with some sites matching the null model prediction of relative abundance, while others exhibited a significant overabundance of gametophytes or sporophytes. These deviations could be due to ecological differences among phases, environmental differences among sites, and/or temporal differences in life cycle dynamics among sites. Sandy Point, which differed from the null hypothesis for both species, is a more exposed site and experiences more water mixing than the other sites [54]. However, as U. compressa had an overabundance of sporophytes and U. rigida had an overabundance of gametophytes at this site, the relative impacts of environmental factors are challenging to assess and may represent specific environmental factors unique to each species. Warwick City Park and Oakland Beach Cove, which differed from the null hypothesis in U. rigida and U. compressa, respectively, are more sheltered sites and experience less water mixing [54]. We found a significant correlation of physical and biological factors with the relative abundance of gametophytes and sporophytes in our study system (Table 2, Fig 4). In this study system, low salinities are typically a result of increased freshwater flow from rivers caused by storms. In Narragansett Bay, increased flow in rivers yields higher concentrations of dissolved inorganic nitrogen and phosphorus [55]. Therefore, although nutrient data are not available for our sampling period, low salinities can be used as a proxy for increased nutrients. Lower salinities at the date of sample collection were correlated with higher relative levels of gametophytes, while lower salinities two weeks prior to specimen collection were correlated with more sporophytes (Fig 4). This shift in ploidy ratios may be due to several factors, such as salinity tolerance, a positive response to nutrient availability in one phase over the other, or a shift to asexual reproduction [56]. While it is unlikely that a reproductive event would result in the presence of new adults after only two weeks [57], lower salinities may trigger more rapid growth of one phase from a microscopic to a macroscopic size [35]. Due to the biphasic life cycle, increased nutrients may impact mortality and/or fecundity rates of either phase [40,58], with differential effects on the relative balance of phases. In addition, vegetative fragmentation of mature blades, germination of unfused gametes, and/or asexual production of diploid spores by sporophytes may impact the ploidy ratio [18].
We also found a positive correlation between the relative abundance of sporophytes and total Ulva biomass for both species. This may be a byproduct of the positive correlation of temperature with bloom abundance [59,60] and growth rates [44], although we found no impact of temperature on the relative abundance of gametophytes and sporophytes in this study.
Previous studies have found a seasonal dominance of one ploidy phase [30], a long-term (11-20 month) non-seasonal cyclic dominance [19], or no seasonal trend [61]. As our sampling was limited to the bloom-forming season, a cyclic trend in ploidy for U. compressa and/or U. rigida could exist. However, due to the scarcity of Ulva specimens during non-bloom-forming periods [43], this would be challenging to assess.
Growth and Cell Area
We did not find any significant differences in growth rates of adult gametophytes and sporophytes of either species, but this does not preclude the possibility of differences at the germling stage [35]. In addition, growth rates can vary based on nutrient levels [62]; as nutrient levels shift in Narragansett Bay over seasonal cycles [63,64], differences in Ulva growth rates between phases may emerge.
Based on our cell area data, future studies of U. compressa and U. rigida life cycle dynamics may proceed much more rapidly. Individuals can be predicted as gametophytes or sporophytes based on their cell area, with a subset confirmed using ploidy analysis. This would enable larger sample sizes and more rapid assessment.
Differences in U. compressa and U. rigida cell areas between phases may impact the surface area to volume ratio, allowing for faster uptake of nutrients in smaller cells [40]. This is especially relevant in single-celled spores, gametes, and small juveniles, and may impact Ulva individuals in their early growth stages. U. rigida zoospores are 9-15 μm x 5-10 μm, while gametes are 7-11 μm x 4-6 μm [65]. Since gametes are smaller than zoospores, they may experience differential survival due to differences in nutrient uptake ability and storage capacity. There may also be other ecological differences between the phases across their lifespans, such as susceptibility to herbivores, light tolerance, salinity tolerance, and temperature optima [23,29,34,66], that may explain differences in ploidy ratios.
Flow Cytometry Method
We designed our flow cytometry ploidy analysis methods based on similar analyses in higher plants [26,49,67,68], which have also been successful in other macroalgal studies [27]. We first attempted chopping Ulva tissues with a razor blade in the presence of a nuclear isolation buffer to obtain isolated nuclei (essentially removing our protoplast isolation step). This method, which is successful for flow cytometric analysis in higher plants [69], was unsuccessful for Ulva. The number of nuclei obtained was small and contaminated with other materials, likely organelle genomes and bacteria [70]. In addition, Ulva has high concentrations of anionic polysaccharides in its cell walls [71], which can interfere with obtaining a sufficient number of nuclei by binding to the positively charged nucleus and inhibiting the propidium iodide from attaching. Given these constraints, protoplast isolation was necessary to obtain sufficient numbers of nuclei for flow cytometry analyses [50]; this approach is successful yet time consuming [48], thus limiting our ability to obtain larger sample sizes.
Supporting Information S1 | 2016-05-04T20:20:58.661Z | 2016-02-26T00:00:00.000 | {
"year": 2016,
"sha1": "0b3d5b1ce0618bed773d5caf8dd4f2f80fa3deaf",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0149182&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "65ec01e617170da78922d320f09da59dfc1bb4e5",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
270344414 | pes2o/s2orc | v3-fos-license | Gestational Diabetes Mellitus Risk Factors in Pregnant Women Attending Public Health Institutions in Ethiopia’s Sidama Region: An Unmatched Case-Control Study
Background Gestational diabetes mellitus (GDM), a chronic condition leading to glucose intolerance during pregnancy, is common in low- and middle-income countries, posing health risks to both the mother and fetus. Limited studies have been done in Ethiopia, especially using WHO's 2013 universal screening criteria. Therefore, this study aimed to evaluate the risk factors linked to GDM in women attending antenatal care (ANC) clinics in Hawassa town public health institutions, located in the Sidama regional state of Ethiopia. Methods An unmatched case-control study was carried out in Ethiopia's Sidama Region from April 1st to June 10th, 2023, involving 510 pregnant women. The Oral Glucose Tolerance Test (OGTT) was utilized for universal screening and diagnosis of GDM based on the updated 2013 WHO diagnostic criteria. Data analysis included descriptive and analytical statistics, with variables having p-values below 0.1 deemed suitable for multivariable analysis. Statistical significance was assessed using the adjusted odds ratio (AOR) with a 95% confidence interval and a p-value < 0.05. Results The study involved 633 participants (255 cases and 378 controls), resulting in a 100% response rate, with women having an average age of 29.03 years. Variables such as age at first conception (AOR=0.97, P=0.01, 95% CI (0.95, 0.99)), urban residency (AOR=1.66, P<0.01, 95% CI (1.14, 2.40)), widowed marital status (AOR=0.30, P=0.02, 95% CI (0.30, 0.90)), parity (AOR=1.10, P<0.01, 95% CI (1.03, 1.17)), history of stillbirth (AOR=1.15, P=0.03, 95% CI (1.04, 2.30)), and previous cesarean section (AOR=1.86, P=0.01, 95% CI (1.13, 2.66)) were identified as independent factors associated with GDM. Conclusion The study concluded that factors like age at first conception, place of residence, marital status, parity, history of Caesarean section, and stillbirth were independently associated with GDM. Surprisingly, upper arm circumference (MUAC), a proxy for pre-gestational BMI, was not identified as a risk factor for GDM. It is recommended that healthcare providers conduct comprehensive GDM risk assessments in pregnant women to identify and address risk factors, and propose specific screening and intervention strategies.
Introduction
Insufficient synthesis of pancreatic insulin, or the body's inability to use it appropriately, results in diabetes mellitus (DM).1,2 DM encompasses Type I and Type II pre-gestational diabetes as well as gestational diabetes (GDM).1-5 GDM is diagnosed through specific criteria that cover over 90% of diabetes cases during pregnancy.1,6 The diagnostic criteria typically involve screening pregnant women for glucose intolerance using tests such as the Oral Glucose Tolerance Test (OGTT) or the Glucose Challenge Test (GCT). These tests assess blood glucose levels at different intervals after an oral glucose load.
Methods and Materials
Study Design
A facility-based unmatched case-control study was carried out.
Setting
The study was conducted at the public hospitals in Hawassa Town, Sidama Region, from April 1st, 2023, to June 10th, 2023. The city is divided into eight sub-cities and 32 kebeles, with a population of 455,658 in 2017. Hawassa is home to 83 health facilities, both public and private, including four public hospitals and eleven government-operated health centers. With a population of 394,057, including 190,216 individuals of reproductive age (15-49 years old) and 13,630 pregnant women, these institutions are crucial for providing ANC services. The four public hospitals in Hawassa are dedicated to offering high-quality ANC services to pregnant women with a manageable client volume.
Participants
The study focused on pregnant women between the ages of 18 and 49 in Hawassa town, selected from public health facilities using a systematic random sampling technique. Two health centers and four hospitals were chosen through an Excel-generated simple random sampling method. The anticipated number of pregnant clients for each facility was determined based on the average monthly follow-up rate for antenatal care (ANC). Subsequently, the expected number of pregnant women with and without GDM was estimated monthly for each facility. The sample size for cases and controls was allocated to each facility using proportional allocation to sample size (PPS), followed by a month-long screening for GDM.
Out of the total estimated sample (633 in total, with 255 cases and 378 controls), pregnant women were screened for GDM and enrolled systematically based on predetermined eligibility criteria. Only singleton pregnant women aged 18 years or older and at least 12 weeks into gestation were eligible for inclusion in the survey. Pregnant women with preexisting diabetes mellitus, chronic illnesses, or medications affecting glucose metabolism were excluded from the study. Finally, after enumerating and preparing the sampling frame for each health facility, selection and inclusion of cases and corresponding controls occurred during subsequent ANC visits until the entire screened sample was achieved.
Participants were required to provide written consent during their initial visit. Evaluation for pre-existing diabetes was performed following WHO guidelines, and fasting was recommended for accurate GDM screening during subsequent appointments. Women identified with GDM risk factors were advised to undergo further testing as needed. For more clarity, please see Figure 1 (attached separately in the manuscript).
Methods of Case Ascertainment
Cases for the current investigation were pregnant women attending a public health clinic in Hawassa town who voluntarily participated in the initial screening survey and were diagnosed with GDM. To assess outcomes, expectant mothers were advised to fast for 8-12 hours before undergoing a 75 g oral glucose tolerance test (OGTT) lasting 1-2 hours. Using the HemoCue Glucose B-201+ System and five microliters of capillary whole blood, participants' blood glucose levels were measured. Women were instructed to relax before providing a finger-prick blood sample for the first time. Following the collection of 2-3 drops of blood, one drop was used to fill a cuvette placed in the cuvette holder. The glucose level was displayed within 40-240 seconds. Subsequently, under supervision, women were given 75 grams of glucose dissolved in 250 mL of water to drink within 5 minutes. One to two hours after glucose intake, capillary blood samples were obtained. Capillary blood values were multiplied by a constant factor of 1.11 to calculate plasma venous values, which were then used to determine glucose levels.27 Finally, GDM was diagnosed according to the 2013 WHO criteria using a 75 g OGTT if specific glucose thresholds were met: fasting plasma glucose of 5.1-6.9 mmol/L, a one-hour post-glucose load level of ≥10.0 mmol/L, or a two-hour post-glucose load level of 8.5-11.0 mmol/L.7,8,28
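The case definition above reduces to a small decision rule; the sketch below encodes the capillary-to-venous conversion (×1.11) and the quoted 2013 WHO thresholds, with function names and example readings of our own devising:

```python
def capillary_to_venous(capillary_mmol_l):
    """Convert capillary whole-blood glucose to its plasma venous equivalent."""
    return capillary_mmol_l * 1.11  # constant factor used in the study

def has_gdm(fasting, one_hour, two_hour):
    """2013 WHO criteria on a 75 g OGTT (values in mmol/L, plasma venous).

    GDM if any one threshold is met:
      fasting 5.1-6.9, 1-h >= 10.0, 2-h 8.5-11.0.
    (Values above the upper bounds suggest overt diabetes, handled separately.)
    """
    return (5.1 <= fasting <= 6.9) or (one_hour >= 10.0) or (8.5 <= two_hour <= 11.0)

# Capillary readings from the HemoCue device, converted before classification.
readings = {k: capillary_to_venous(v)
            for k, v in {"fasting": 4.8, "one_hour": 9.3, "two_hour": 7.9}.items()}
print("GDM" if has_gdm(**readings) else "no GDM")
```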
Control Selection
The study's control group consisted of pregnant women from Hawassa town's public health facilities who had previously participated in the survey, were not diagnosed with GDM, and were now part of the current research. Control group selection involved choosing women from the earlier survey who did not meet the GDM diagnostic criteria at that time. They were intentionally picked from the sampling frame set up at each health facility post-survey and systematically selected during their subsequent ANC follow-up appointment.
The choice of a case-control study design is driven by the necessity for a strong methodology to pinpoint risk factors and guide preventive measures effectively. This design allows for a thorough comparison between cases and controls, facilitating the assessment of associations and the identification of region-specific risk factors. Moreover, it plays a crucial role in investigating causality and offering key insights for tailored interventions to enhance maternal and neonatal health outcomes, particularly in the context of GDM in Sub-Saharan Africa (SSA) and specifically in the Sidama Region of Ethiopia.
Variables
Outcome Variable
In this research, pregnant women were screened for GDM through an oral glucose tolerance test after fasting overnight and completing a 75-gram, two-hour OGTT. Blood glucose levels were monitored using the HemoCue Glucose B-201+ glucometer. Women were advised to relax prior to blood sampling. After collecting the initial blood drops, a drop was allowed to fill a cuvette for glucose measurement, which took 40-240 seconds to display. Following this, the women consumed 75 g of glucose dissolved in 250 mL of water within five minutes. Blood samples were taken at one and two hours, and plasma venous values were calculated by multiplying capillary blood values by a constant factor of 1.11.27 The 2013 WHO updated diagnostic criteria were applied to identify GDM.7 Furthermore, the study categorized individuals as cases (1) or controls (0), in line with similar research.8,29
Exposure variables
These encompass factors such as age, gender, ethnicity, marital status, maternal occupation, education levels of women and spouses, occupation of women and spouses, religion, income, location of residence, and alcohol consumption. Additionally, obstetric and clinical variables including prior child's birth weight, history of GDM, family history of type II diabetes, previous Caesarean sections (CS), mid-upper arm circumference (MUAC), fasting blood glucose levels, and gestational age were considered.8,29
Data Sources/Measurement
The study included six groups of five individuals each, each comprising a midwifery nurse and a supervisor. The principal investigator provided a two-day on-site training for 18 data collectors and 12 supervisors. Face-to-face interviews were conducted to gather clinical and sociodemographic data on gestational diabetes mellitus (GDM) using a standardized questionnaire. Data points covered family history of diabetes, birth weight of a prior child, residence, age, marital status, religion, ethnicity, education, employment, and income. Gestational age was determined using reliable methods such as the last normal menstrual period and dating ultrasounds, with obstetric ultrasounds used when needed. Antenatal care cards offered socio-demographic, obstetric, and clinical details. Pre-pregnancy physical activity levels were evaluated, and alcohol consumption frequency was noted. Mid-upper arm circumference (MUAC) was measured on the left arm as a proxy for BMI before conception. Pregnant women with a MUAC of 28 cm or more were categorized as overweight or obese.8
Bias
The study employed rigorous measures to minimize biases and ensure the credibility of results by applying the WHO 2013 criteria for diagnosing GDM, employing clear participant selection and recruitment methods, utilizing standardized assessment tools, and employing structured data collection techniques. Accurate glucometers were utilized, along with continuous quality control measures. Data collectors and supervisors underwent extensive training, with regular meetings held to uphold data quality. Statistical analyses were carried out to determine GDM predictors, and model adequacy was evaluated using the Hosmer-Lemeshow test. Blinding procedures were instituted for outcome assessors to boost the study's validity and reliability.
Study Size
The sample size for the current study was calculated using OpenEpi version 3, considering a two-sided confidence level of 95%, a power of 80%, a control-to-case ratio of 1:1, and a two-population proportion exposure difference, as shown in Supplementary Material 1: Table S1. The hypothetical proportion of controls exposed was 10.4%, based on the major significant predictors of GDM from Ethiopian studies (Supplementary Material 1: Table S1. Sample size determination).8,29,30 By taking urban residence as the independent predictor exposure variable, the sample size for this study was estimated. The proportion of exposure among cases was 20.19%, while among controls it was 10.4%, with an odds ratio of 2.1.8 With a control-to-case ratio of 1:1, 80% power at a 95% confidence interval was attained. With 510 pregnant women overall (255 cases and 255 controls), the sample size allowed for a 10% non-response rate in each group. After confirming the power, it was decided to include the 633-person sample from the prior survey in the case-control analysis to increase the power of the effect evaluation, since it exceeded the present study's requirements.
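For readers without OpenEpi, the stated inputs (20.19% exposure in cases, 10.4% in controls, two-sided α = 0.05, 80% power, 1:1 ratio) can be checked with the standard two-proportion formula plus the Fleiss continuity correction — one of the formulas OpenEpi reports. This sketch is an approximation and reproduces roughly 231 per group, or about 255 after the 10% non-response inflation:

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p1, p0, alpha=0.05, power=0.80):
    """Unmatched case-control sample size (two proportions, 1:1 ratio)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p0) / 2
    n = (za * sqrt(2 * pbar * (1 - pbar))
         + zb * sqrt(p1 * (1 - p1) + p0 * (1 - p0))) ** 2 / (p1 - p0) ** 2
    # Fleiss continuity correction, applied to the normal-approximation n
    return n / 4 * (1 + sqrt(1 + 4 / (n * abs(p1 - p0)))) ** 2

n = ceil(n_per_group(p1=0.2019, p0=0.104))   # exposure: urban residence
print(n, ceil(n * 1.10))                     # ~231 per group; ~255 with non-response
```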
Quantitative Variables
The study examined all quantitative variables in their original forms but subjected them to different treatments, such as grouping according to prior research. For example, the outcome variable GDM was assessed using techniques from earlier studies and coded as either 1 (yes) or 0 (no). Age at first conception was divided into <20, 20-34, and ≥35 years. Other quantitative variables such as gravidity, parity, family size, gestational age, and women's MUAC were categorized for descriptive analysis. The study sought to understand how these variables influence women's health.8,29
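The categorization described here is a routine binning step; for instance, grouping age at first conception into <20, 20-34, and ≥35 years could be done as follows (column names are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"age_first_conception": [18, 24, 31, 36, 29, 41]})

# Categorize as in the paper: <20, 20-34, >=35 years.
df["age_group"] = pd.cut(df["age_first_conception"],
                         bins=[0, 19, 34, float("inf")],
                         labels=["<20", "20-34", ">=35"])
print(df["age_group"].value_counts())
```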
Statistical Methods
The study utilized EPI Data version 3.1 for tasks such as data cleaning, coding, error investigation, and analysis. The principal investigator oversaw the data entry process. Descriptive statistics, including means, standard deviations, tables, and figures, were used for data summarization and presentation. Proportions and 95% confidence intervals assessed the magnitude of the results. Independent variables were checked for multicollinearity via tolerance testing and variance inflation factor analysis. The model's adequacy was gauged using the Hosmer-Lemeshow goodness-of-fit test. Bivariate and multivariable logistic regressions were conducted to handle confounding variables and identify predictors, with significance levels set at p-values < 0.1 and < 0.05 in the initial and subsequent analyses, respectively. Findings were reported using crude odds ratios, adjusted odds ratios, and confidence intervals, with statistical significance determined by a P value < 0.05. To address missing data, a comprehensive approach was taken. Initially, missing data patterns were examined to detect potential biases. Techniques like multiple imputation or sensitivity analysis were then applied to manage missing data effectively, ensuring robust analysis of study results while considering and addressing any impact of missing data.
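The Hosmer-Lemeshow goodness-of-fit test used here is not built into most general statistics libraries; a minimal implementation over a fitted model's predicted probabilities might look like the following (decile grouping; variable names are ours):

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, groups=10):
    """Hosmer-Lemeshow goodness-of-fit test for a fitted logistic model.

    Observations are binned into `groups` quantiles of predicted risk; the
    statistic compares observed vs. expected events per bin (df = groups - 2).
    """
    order = np.argsort(y_prob)
    y_true, y_prob = np.asarray(y_true)[order], np.asarray(y_prob)[order]
    bins = np.array_split(np.arange(len(y_prob)), groups)
    stat = 0.0
    for idx in bins:
        obs, exp, n = y_true[idx].sum(), y_prob[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1 - exp / n))
    p_value = chi2.sf(stat, df=groups - 2)
    return stat, p_value  # p > 0.05 indicates adequate fit (reported here: 0.778)
```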
Results
Participants
The study included 633 pregnant women (255 cases and 378 controls) who participated in the screening survey, with a 100% response rate overall. The decision to include all pregnant women who underwent screening rather than the predetermined sample size (i.e., 510) was made to ensure appropriate case and control samples and to boost the study's ability to detect effect sizes (Supplementary Materials: Table S1. Sample size determination, attached as a Word document).
Descriptive Data
Socio-Demographic Characteristics
The study revealed that the average age of cases was 29.03 years, with controls averaging 30.47 years. Both groups were primarily in the 20-34 age range, and most cases and controls resided in rural areas. The majority of individuals in both groups had completed secondary education, with a notable proportion holding a university/college degree. Furthermore, 31.0% of cases and 29.1% of controls were married, while 55.3% of cases and 49.5% of controls were employed in governmental or non-governmental organizations. The majority of individuals in both groups belonged to the middle-income category (refer to Table 1 for details).
Obstetric and Clinical Features of Respondents
The study indicated that the average gestational age was 25.21 weeks, with 49.9% of cases falling between 25-40 weeks and 49.6% of controls between 13-24 weeks. For cases, mean gravidity and parity were 6.88 and 4.45, respectively, with 71.4% and 43.9% having five or more pregnancies and deliveries. Mid-upper arm circumference was measured using an inelastic tape measure, with 72.5% of cases and 79.1% of controls having measurements below 28 cm. The majority of cases and controls had prior child birth weights ranging from 2.51-3.9 kg and non-anaemic hemoglobin levels, as detailed in Table 2 and the Supplementary Materials (Table S2. The means and standard deviations, attached as a Word document).
Factors Predicting GDM
A binary logistic regression model was utilized to investigate factors associated with gestational diabetes mellitus (GDM). Variables with p-values ≤ 0.1 were included in the multivariable logistic regression analysis. Key predictors identified encompassed age at first conception, place of residence, education, marital status, wealth index, history of stillbirth, GDM, cesarean section (C/S), preterm delivery, child birth weight, HIV/AIDS status, parity, and mid-upper arm circumference (MUAC) of the women in the study. No multicollinearity concerns were observed, and all variables were retained in the model. The model's adequacy was affirmed by a Hosmer-Lemeshow P-value of 0.778, with a predictability percentage of 85.5%, indicating its potential for future interpretation. Several factors emerged as independent predictors of GDM risk. The odds of developing GDM changed by a factor of 0.97 (a 3% decrease) for each one-year increase in a woman's age at first conception. Urban residence was linked to higher odds of GDM compared to rural areas. Widowed women exhibited a lower likelihood of GDM compared to single women. Women with a history of caesarean section had 1.86 times higher odds of developing GDM. Moreover, a history of stillbirth was linked to an increased risk of GDM. Each additional delivery (parity) was associated with 1.10 times greater odds of developing GDM. For further details, refer to Table 3 in the Supplementary Data.
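The adjusted odds ratios reported above are the exponentiated coefficients of the multivariable logistic model; a hedged sketch of how they (and their 95% CIs) would be extracted in Python, with a hypothetical data file and column names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gdm_case_control.csv")  # hypothetical file and columns

model = smf.logit(
    "gdm ~ age_first_conception + urban + widowed + parity"
    " + prior_stillbirth + prior_cs", data=df).fit(disp=0)

# AOR = exp(beta); 95% CI = exp of the coefficient confidence bounds.
aor = np.exp(model.params)
ci = np.exp(model.conf_int())  # columns: lower, upper
print(pd.concat([aor.rename("AOR"), ci], axis=1))
# An AOR of 0.97 per year of age means ~3% lower odds per additional year;
# 1.10 per delivery means ~10% higher odds per additional birth.
```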
Discussion
The study on risk factors for gestational diabetes mellitus (GDM) in Ethiopia's Sidama Region, using a case-control design, identified several key factors associated with GDM. These factors include age at first conception, place of residence, marital status, parity, prior history of Caesarean section (CS), and stillbirth. The study also highlighted that upper arm circumference was not found to be a risk factor for GDM in this population. This study revealed that a woman's age at first conception independently predicts GDM, with an adjusted odds ratio of 0.97 per year. This aligns with findings from a previous cross-sectional study in Ethiopia.12 Findings from multiple studies in Ethiopia,25 Tanzania,13 Uganda,20 and Cameroon17 also suggested similar conclusions, possibly influenced by publication bias favoring studies with congruent results. Conversely, a study in Ethiopia by Muche et al (2019)27 found no association between age and GDM development, highlighting potential differences in study participants, design, criteria, recruitment methods, and sample sizes. Changes in healthcare practices, diagnostic criteria, and population characteristics over time could contribute to discrepancies in study outcomes. The study recommends that public health initiatives in Ethiopia and sub-Saharan Africa focus on raising awareness about GDM, promoting healthy lifestyles, and improving access to prenatal care services. Future research should explore genetic, environmental, and sociocultural factors related to GDM.
Furthermore, the study revealed that widowed women had 0.52 times the odds of developing GDM compared to single women, aligning with previous findings in Gondar, Northwest Ethiopia.27 The findings show the interplay of factors like psychosocial dynamics, behavioral changes, and social support in shaping GDM risk among widowed women. Losing a spouse may lead to heightened stress, anxiety, and depression, disrupting hormonal balance and metabolic processes and increasing vulnerability to conditions like GDM. Coping with partner loss can bring about changes in behavior, eating habits, and lifestyle choices, worsened by unhealthy coping mechanisms, raising GDM risk. This finding underscores the role of psychosocial dynamics and social support in reducing stress and fostering well-being, particularly in widowhood, in shaping GDM risk among different marital status groups.
Additionally, urban residents were found to have 1.66 times higher odds of acquiring GDM compared to rural residents, consistent with studies in southern Ethiopia,13,31 Rwanda,32 and Tanzania,33 which attributed this trend to factors like sedentary behavior, unhealthy diets, stress levels, and limited access to nutritious foods in urban settings. In contrast, a cross-sectional study in Northwest Ethiopia indicated a higher GDM occurrence among rural women.34 On top of that, changes in healthcare services, public health interventions, urbanization patterns, and data collection timing over time may also contribute to varying results among studies. Despite these discrepancies, the study recommends that urban areas prioritize screening and management of GDM, focusing on lifestyle changes and early prenatal care. Longitudinal studies are essential for monitoring trends in GDM prevalence and understanding how urbanization influences GDM, leading to a comprehensive understanding of GDM disparities.
Moreover, the study showed that with each unit increase in parity, the odds of GDM occurrence increased by 1.10 times, which is supported by findings from other studies in South Western Uganda20 and Pakistan,35 possibly influenced by lifestyle factors like reduced physical activity during pregnancy.1,4,13 Nevertheless, the results of the current study conflict with a study conducted in Ethiopia's northern Amhara Region.34 These discrepancies could stem from variations in healthcare practices, access to antenatal care, or the prevalence of other GDM risk factors among the populations of Gondar town in the Amhara Region and Hawassa town. The study suggests that parity should be recognized as a GDM risk factor, and healthcare providers should deliver appropriate care to pregnant women with this risk factor. Subsequent research should investigate the link between parity and GDM in different populations and environments to validate these findings.
Additionally, women with a history of Caesarean section were shown to have a 1.86 times higher likelihood of developing GDM. Similarly, a cross-sectional study in Ethiopia reported comparable findings.31 The consistency observed in the feeding patterns of the study populations in the Wolaita Zone, southern Ethiopia, and the Sidama regional state, both known for their unique dietary habits, may explain the similarities in results.1,4,33 Additionally, a systematic review and meta-analysis indicated a higher likelihood of caesarean section among women with gestational diabetes mellitus (GDM), suggesting a correlation between C/S history and GDM occurrence.11 Publication bias could also contribute to the perceived consistency of results by selectively withholding conflicting findings in unpublished studies. This association underscores the importance of enhanced monitoring and screening for GDM in these populations and the need for further research to establish causality and explore the mechanisms linking C/S and GDM, along with interventions to reduce the risk.
Furthermore, a history of stillbirth emerged as an independent predictor of GDM, with women having a 1.15 times higher likelihood of GDM if they had a history of stillbirth. A systematic review and meta-analysis conducted in Ethiopia by Belay et al. (2020) also supports these findings, highlighting an association between adverse pregnancy outcomes, particularly prior stillbirth, and GDM occurrence.25 Similarly, a study from the Wolaita Zone, Southern Ethiopia, identified a link between previous stillbirth history and GDM incidence.29 The similarities observed in these studies may be attributed to similarities in sample characteristics, such as demographic profiles and healthcare access. Furthermore, a consistent trend was noted between this study and research from Cameroon, indicating that women with a history of stillbirth had significantly higher odds of developing GDM.17 This similarity may stem from robust statistical analyses in both studies, which adjusted for confounding factors affecting the link between past stillbirth and GDM. This finding underscores that healthcare providers should prioritize early identification and management of GDM, especially for clients with a history of stillbirth. They should offer support, counseling, and lifestyle changes to decrease GDM risk. Public health campaigns should increase awareness and ensure sufficient resources for diagnosis, treatment, and screening. Future research should investigate the mechanisms connecting GDM risk to stillbirth, and validation studies are necessary to strengthen the current data.
Despite the absence of pre-gestational weight data in this study, the results surprisingly showed no significant association between MUAC and GDM. This contrasts with the common belief that obesity is a major risk factor for GDM across different ethnic groups. Recent studies in Ethiopia, including a systematic review and meta-analysis1 and an institution-based cross-sectional study at St. Paul's Hospital Millennium Medical College in Addis Ababa,12 have demonstrated a similar pattern. Furthermore, studies in southern Tanzania,36 Brazil,37 Turkey,38 Asia,39 Saudi Arabia,40 and Libreville41 have reported similar outcomes. Discrepancies in study design and methods of measuring obesity between previous and current studies may account for these differing results. While a study in Gondar town in Northwest Ethiopia employed similar methods to evaluate overweight/obesity and GDM,27 it produced conflicting findings. This unexpected result could be linked to the unique characteristics of the study population in the Sidama Region of Ethiopia, particularly their dietary patterns. The consumption of fiber-rich foods like "Kocho" in this region may contribute to this variance. The study emphasizes the importance of personalized prevention and management strategies, along with targeted screening methods and interventions. It suggests more research on the relationship between upper arm circumference, obesity, and gestational diabetes risk in the Sidama Region, incorporating comprehensive obesity measures such as waist circumference, body fat percentage, or visceral fat assessment for a deeper understanding of obesity's impact on GDM risk.
The study exhibited strengths by utilizing a contemporary and broadly applicable screening tool to detect gestational diabetes mellitus (GDM) in pregnant women beyond 12 weeks of gestation. Employing a two-hour 75 g oral glucose tolerance test (OGTT) with updated standard reference cutoff values, along with retesting pregnant women with GDM risk factors despite negative initial OGTT results, enhanced the depth of the research. However, there were notable limitations to consider. The World Health Organization's (WHO) caution against utilizing point-of-care diagnostics in resource-constrained settings such as Ethiopia, due to challenges related to laboratory access and blood sample handling, may impede the practical application of the findings. The study's inclusion of solely pregnant women from public health institutions could restrict the generalizability of the results to the broader population.
Conclusion
The study concluded that factors such as age at first conception, place of residence, marital status, parity, history of Caesarean section, and stillbirth were independently linked to GDM. Interestingly, upper arm circumference (MUAC), a proxy for pre-gestational BMI, was not identified as a GDM risk factor. It is advised that healthcare providers perform thorough GDM risk assessments in pregnant women to detect and address risk factors, and adopt specific screening and intervention measures. Future research should investigate the connection between risk factors and GDM to create effective interventions and prevention strategies. Further exploration of the relationship between MUAC, obesity, and GDM risk in the Sidama Region is necessary. Longitudinal studies could offer valuable insights into post-pregnancy GDM risk factors. Collaboration among Ethiopia, other Sub-Saharan African countries, and global partners is encouraged to incorporate recognized risk factors into maternal health guidelines for GDM prevention and to promote international research collaborations.
Abbreviations
AOR, adjusted odds ratio; ANC, antenatal care; BMI, body mass index; BP, blood pressure; CI, confidence interval; CM, centimeter; DBP, diastolic blood pressure; DM, diabetes mellitus; EDPS, Edinburgh Postnatal Depression Scale; FANTA, Food and Nutrition Technical Assistance; FPG, fasting plasma glucose; GDM, gestational diabetes mellitus; HPPO, hyperglycemia with poor pregnancy outcome; IDF, International Diabetes Federation; IGT, impaired glucose tolerance; LMIC, low- and middle-income countries; LNMP, last normal menstrual period; MCH, maternal and child health; MDDS, Minimum Dietary Diversity Score; MET, Metabolic Equivalent of Task; MUAC, mid-upper arm circumference; NCDs, noncommunicable diseases; OR, odds ratio; OGTT, oral glucose tolerance test; SBP, systolic blood pressure; SD, standard deviation; SPSS, Statistical Package for the Social Sciences; WHO, World Health Organization.
Figure 1. A schematic representation of the sampling procedure in the GDM case-control study of pregnant women, Sidama Region, Ethiopia (n=633), 2023. Millennium Health Center (MIL-HCs), Alito Health Center (AL-HCs), Hawella Tula General Hospital (HTG-HPs), Adare General Hospital (AG-HPs), MotteFurra Primary Hospital (MF-HPs), and Hawassa University Comprehensive Specialized Hospital (HUCS-HPs). Simple Random Sampling (SRS), Systematic Sampling Technique (SST), Proportional allocation to size (PPS), Cases (Ca), and Controls (Co). NTEw, total expected pregnant women visiting the health facilities; NTECa, total expected cases estimated out of the total women visiting the ANC clinics of the health facilities; NTECo, total expected controls estimated out of the total women visiting the ANC clinics of the health facilities; HFCs, health facilities; nc, sample size calculated for the study; nf, final sample size considered for the study.
Table 3
Final Model Formed from the Multivariable Analysis Results, GDM Case-Control Study, Pregnant Women in Sidama Region, Ethiopia (n=633), 2023. Exposure variable(s) entered: age at first conception (in years), place of residence, education of women, education of spouses, marital status, wealth index in quartiles, previous history of stillbirth, previous GDM history, previous C/S history, preterm delivery history of women, previous child birth weight, HIV/AIDS status, parity, and measured MUAC of women in cm. | 2024-06-09T15:02:08.787Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "75fdd254877c0a7da675b8997f4dab8e65789f5b",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4d294d7c7f4e979d42036d98aee19f16a4075e7b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
215411073 | pes2o/s2orc | v3-fos-license | Polyunsaturated fatty acids, lung function, and health-related quality of life in patients with chronic obstructive pulmonary disease
Background Dietary polyunsaturated fatty acids (PUFA) are thought to modify systemic inflammation. The present study aimed to evaluate the relationship between PUFA intake, lung function, and health-related quality of life (HRQoL) in patients with chronic obstructive pulmonary disease (COPD). Methods In this study, we used the dataset of the 6th Korea National Health and Nutrition Examination Survey, in which a total of 22,948 individuals, including 573 participants with a high probability of COPD, were enrolled. Participants with missing data for the investigated variables were excluded. Linear regression analyses were used to evaluate the association of PUFA intake (omega-3 [N3], omega-6 [N6], and total) with lung function and HRQoL. HRQoL was determined according to the European Quality of Life-5 Dimensions (EQ-5D). Subgroup analysis of older patients was performed. Age, sex, body mass index, smoking, alcohol, education, residence, total calorie intake, and predicted FEV1% were adjusted for in all analyses. Results Although lung function was not associated with PUFA intake, the EQ-5D index was remarkably associated with N3, N6, and total PUFA intake in a dose-dependent manner. This association was more pronounced in elderly COPD patients. Mean levels of N3, N6, and total PUFA intake were significantly higher in patients having better HRQoL with respect to mobility, self-care, and usual activities. Conclusion Our results suggest that N3, N6, and total PUFA intake are associated with HRQoL in COPD patients. This association may be attributed to mobility, self-care, and usual activities. Further longitudinal study is required to clarify this relationship.
Introduction
Chronic obstructive pulmonary disease (COPD) is a preventable and treatable inflammatory disease characterized by persistent respiratory symptoms and airflow limitation measured by spirometry [1]. The disease burden of COPD is steadily increasing, not only in western countries but also in Asian countries [2]. Since COPD requires constant management, it imposes substantial social, economic, and medical burdens. For example, a recent multicenter observational study in Korea showed that 1,245.6 million US dollars were required to provide COPD-related direct and indirect medical services [3].
The pathogenesis of COPD involves airway and systemic inflammatory responses [4]. Individuals exposed to noxious particles may develop airway inflammation with loss of terminal and transitional bronchioles, emphysematous destruction, and declining lung function [5]. Systemic inflammation is associated with poor clinical outcomes. For instance, Agusti et al. [6] evaluated systemic inflammatory biomarkers in peripheral blood and showed that an increased inflammatory reaction in COPD patients is associated with increased all-cause mortality and exacerbation frequency. Patients with severe disease have an elevated inflammatory burden and usually experience more rapid decline in lung function, increasing severity of symptoms, and frequent exacerbations [7]. Therefore, it is important to alleviate the inflammatory response not only in the airway, but also in the circulatory system.
Polyunsaturated fatty acids (PUFA) play a role in modifying inflammation [8]. For example, a study involving 80 patients with COPD, who received 9 g of PUFA or placebo for 8 weeks, demonstrated improvement in exercise capacity [9]. Although recent data from a study on US adults with COPD showed that omega-3 (N3) PUFA was associated with respiratory symptoms [10], this association was not investigated among Korean COPD patients.
In this context, the present study aimed to evaluate the association of dietary PUFA intake (N3, omega-6 [N6], and total) with disease severity and HRQoL in patients with COPD, using data from a nationwide representative sample survey.
Ethics statement
The survey protocol of the Korea National Health and Nutrition Examination Survey (KNHANES) was approved by the Institutional Review Board of the KCDC (IRB No: 2013-07CON-03-4C in 2013, 2013-12EXP-03-5C in 2014). Since the KNHANES of 2015 was conducted for public welfare, approval of the IRB was not required. Written informed consent was obtained from all participants before the survey, which was conducted according to the Declaration of Helsinki. All procedures were in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement for observational studies.
Study participants
The present study used the dataset of the 6th (2013-2015) KNHANES. The KNHANES is an annually conducted nationwide population-based cross-sectional survey by the Korea Centers for Disease Control and Prevention (KCDC) and the Korean Ministry of Health and Welfare. The KNHANES was designed as a complex sample survey using a multistage sampling method to represent the general non-institutionalized Korean population. The dataset of the KNHANES is freely accessible online, and detailed survey profiles are described in a previous report [11].
The 2013-2015 KNHANES assessed the health and nutritional status of 29,321 South Koreans, and 22,948 responded to the survey (response rate 78.3%). The KNHANES collected health-related and nutritional information through laboratory samples, physical examinations, face-to-face interviews, and nutritional assessment. Pulmonary function testing (PFT) was performed in participants aged over 40 years; thus, 10,109 participants without PFT results were excluded. Participants with missing values in other variables (i.e., residential area, education level, smoking status, alcohol consumption, body mass index [BMI], European Quality of Life-5 Dimensions [EQ-5D], and nutrition intake) were excluded (n = 6,408). Participants diagnosed with asthma by a physician or taking asthma medication were excluded (n = 112). Participants with forced expiratory volume in 1 second (FEV1) divided by forced vital capacity (FVC) of 70% or above were excluded, due to the low probability of having COPD (n = 5,746). Finally, 573 participants with a high probability of having COPD and without missing values in the possible confounding variables were included in the analysis. The study flow chart is presented in Fig. 1.
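The exclusion cascade above can be checked arithmetically. The following is a minimal sketch (Python) that reproduces the counts stated in the text; the step labels are illustrative descriptions, not actual KNHANES variable names.

```python
# Reproduce the participant-exclusion arithmetic described in the text.
steps = [
    ("Surveyed (2013-2015 KNHANES respondents)", 22948),
    ("Excluded: no pulmonary function test (age <= 40)", -10109),
    ("Excluded: missing covariates or EQ-5D/nutrition data", -6408),
    ("Excluded: physician-diagnosed asthma or asthma medication", -112),
    ("Excluded: FEV1/FVC >= 70% (low probability of COPD)", -5746),
]

n = 0
for label, delta in steps:
    n += delta
    print(f"{label:<60s} n = {n}")

assert n == 573  # final analytic sample reported in the paper
```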
Data collection and measurements
PFT was performed using dry rolling seal spirometers (Model 2130; SensorMedics, Yorba Linda, CA, USA) by highly-trained medical technicians. The quality control of the PFT was conducted according to the standardization guidelines of the American Thoracic Society and the European Respiratory Society [12]. Participants with FEV1/FVC < 70% were considered to have COPD based on the classification of the Global Initiative for Chronic Obstructive Lung Disease (GOLD) 2020 report. Indices of lung function in participants with COPD included FEV1 (L), predicted FEV1%, FVC (L), predicted FVC%, FEV1/FVC%, and peak expiratory flow (PEF, L/sec).
Smoking status was categorized into three groups (current, former, and never) based on the Centers for Disease Control and Prevention classification [13]. Current smokers were defined as participants who had smoked more than 100 cigarettes in their lifetime and currently smoked. Former smokers were defined as participants who had smoked more than 100 cigarettes in their lifetime but had stopped smoking for more than 1 year. Never smokers were defined as participants who had never smoked or had smoked fewer than 100 cigarettes in their lifetime.
The EQ-5D was used to evaluate the HRQoL in participants with COPD. The validity and usefulness of the EQ-5D for measuring the quality of life in COPD patients have been demonstrated previously [14]. The EQ-5D consists of five dimensions to measure the quality of life: mobility, self-care, usual activity, pain/ discomfort, and anxiety/depression. The participants were asked to select one of the three following responses for each of the five dimensions: G1, no problem; G2, some problems; G3, severe problems. Furthermore, the EQ-5D index was used to evaluate the dose-dependent relationship between PUFA intake and HRQoL. The formula for the EQ-5D index has been described in a previous report by Nam et al. [15], with higher scores indicating higher HRQoL.
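As a rough illustration of how an EQ-5D-style index is computed — start from full health (1.0) and subtract a decrement for each dimension answered at level G2 or G3 — the sketch below uses hypothetical decrement values; the actual Korean valuation coefficients are those of Nam et al. [15] and are not reproduced here.

```python
# Illustrative EQ-5D index computation. The decrements below are
# HYPOTHETICAL placeholders, not the Korean value set of Nam et al. [15].
HYPOTHETICAL_DECREMENTS = {
    # dimension: (decrement at G2, decrement at G3)
    "mobility":           (0.05, 0.20),
    "self_care":          (0.05, 0.20),
    "usual_activities":   (0.04, 0.15),
    "pain_discomfort":    (0.04, 0.15),
    "anxiety_depression": (0.03, 0.12),
}

def eq5d_index(responses):
    """responses maps each dimension to 1 (no problems), 2 (some), or 3 (severe)."""
    index = 1.0
    for dim, level in responses.items():
        if level > 1:
            index -= HYPOTHETICAL_DECREMENTS[dim][level - 2]
    return round(index, 4)

print(eq5d_index({"mobility": 2, "self_care": 1, "usual_activities": 1,
                  "pain_discomfort": 2, "anxiety_depression": 1}))  # 0.91
```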
Dietary PUFA intake was measured using a 24-hour recall method. The 24-hour recall method is a structured interview for estimating the intake of food or drink that an individual participant consumed during the day. Although the method is subject to day-to-day variability, its reliability, validity, and reproducibility have been previously demonstrated [16]. The amounts of dietary N3 PUFA (g/day), N6 PUFA (g/day), and total PUFA (g/day) were measured.
Data regarding socio-economic status and anthropometric indices were collected by trained survey assistants. Residential areas were categorized into two groups: urban and rural. Educational level was categorized into three groups: middle school or less, high school, and college or more. Heavy alcohol consumption was defined as ≥ 7 drinks in men and ≥ 5 drinks in women on one occasion. Participants were categorized into two groups according to whether or not they had such heavy drinking episodes at least once a week. BMI was categorized based on the Korean Society for the Study of Obesity guidelines [17]: underweight (< 18.5 kg/m2), normal (18.5-22.9 kg/m2), pre-obese (23-24.9 kg/m2), and obese (≥ 25 kg/m2).
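The categorizations above amount to simple threshold rules. The following sketch encodes them using only the cutoffs stated in the text (the GOLD spirometric criterion from the previous section, the CDC smoking groups, the heavy-drinking definition, and the Korean BMI bands); function and argument names are illustrative, not KNHANES variable names.

```python
# Threshold rules for participant categorization, as stated in the text.

def has_copd(fev1_l, fvc_l):
    """GOLD spirometric criterion: FEV1/FVC < 70%."""
    return fev1_l / fvc_l * 100 < 70

def smoking_group(lifetime_cigarettes, smokes_now, years_since_quitting):
    """CDC-based grouping; treating quitters of under 1 year as current
    smokers is an assumption the text does not spell out."""
    if lifetime_cigarettes < 100:
        return "never"
    if smokes_now or years_since_quitting < 1:
        return "current"
    return "former"

def heavy_weekly_drinker(drinks_per_occasion, is_male, occasions_per_week):
    """Heavy episode: >=7 drinks (men) or >=5 (women); grouped by weekly frequency."""
    heavy = drinks_per_occasion >= (7 if is_male else 5)
    return heavy and occasions_per_week >= 1

def bmi_group(bmi):
    """Korean Society for the Study of Obesity bands."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 23.0:
        return "normal"
    if bmi < 25.0:
        return "pre-obese"
    return "obese"

print(has_copd(fev1_l=1.8, fvc_l=3.0))  # True (60% < 70%)
print(bmi_group(24.1))                  # "pre-obese"
```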
Statistical analysis
Since the KNHANES was conducted using a complex, multistage, stratified probability sample design, all statistical analyses were performed under complex sample analyses, and sampling weights were applied. The sample therefore represents the non-institutionalized South Korean population.
Continuous variables (age and spirometric values) were compared using ANOVA and presented as mean values with standard errors. Categorical variables (residence, education, smoking status, alcohol consumption, and BMI) were compared using a chi-square test and presented as percentages with standard errors. Post-hoc analysis was performed with Bonferroni correction, and p < 0.017 was considered statistically significant between groups.
HRQoL was categorized into five dimensions according to the EQ-5D as mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. The mean estimated level and standard error of PUFA intake in the G1, G2, and G3 groups were calculated using linear regression analysis. The trend for significance between PUFA intake and HRQoL was evaluated using linear regression analysis considering the G1, G2, and G3 of HRQoL as a continuous variable.
Multivariable linear regression analyses were used to evaluate the dose-dependent relationship between N3, N6, and total PUFA intake and spirometric values (predicted FEV1%, predicted FVC%, and PEF). The association between PUFA intake and HRQoL (EQ-5D index) was measured using linear regression analysis. Model 1 was adjusted for age and sex. Model 2 was adjusted for age, sex, BMI, smoking status, and alcohol consumption. Model 3 was adjusted for age, sex, BMI, smoking status, alcohol consumption, residence, education, total calorie intake, and predicted FEV1%. Prior to placing these variables into the model, a multi-collinearity test was performed in order to identify any inter-correlations among the investigated variables. As the distribution of N3, N6, and total PUFA intake violated the assumption of normality and was skewed, the values were log-transformed. Subgroup analysis was performed in elderly COPD patients.
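A hedged sketch of how Model 3 could be specified with the statsmodels formula API is shown below. Column names are placeholders for the corresponding KNHANES variables, and plain OLS is used for brevity; the published analysis applied complex-survey sampling weights, which this sketch omits.

```python
# Sketch of the Model 3 specification; column names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_model3(df: pd.DataFrame):
    # PUFA intake is log-transformed because its distribution was skewed
    formula = (
        "eq5d_index ~ np.log(n3_pufa) + age + C(sex) + bmi + C(smoking)"
        " + C(heavy_alcohol) + C(residence) + C(education)"
        " + total_kcal + fev1_pct_predicted"
    )
    return smf.ols(formula, data=df).fit()

# Usage: model = fit_model3(df); print(model.summary())
```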
All statistical analyses were performed with IBM SPSS version 24.0 (IBM Corp., Armonk, NY, USA). For all analyses, a p-value < 0.05 was considered statistically significant.
Results
Baseline characteristics of study participants are presented in Table 1. The relationships between N3, N6, and total PUFA intake and lung function are presented in Table 2. Lung function, measured as predicted FEV1%, predicted FVC%, and PEF, was not significantly associated with PUFA intake. The association between PUFA intake and HRQoL is presented in Table 3. The estimated amounts of N3, N6, and total PUFA intake were significantly associated with mobility and self-care ability in a dose-dependent manner, and statistical significance remained after covariate adjustment. N3, N6, and total PUFA intake also showed a positive association with the capacity for usual activities, although the degree of significance varied with covariate adjustment. Although N3 PUFA intake was positively associated with pain/discomfort, N6 and total PUFA intake were not. With respect to anxiety/depression, no statistical significance was observed. The associations between N3, N6, and total PUFA intake and the EQ-5D index are presented in Table 4. High PUFA intake was significantly associated with a better EQ-5D score after covariate adjustment, although the magnitude of the association was low. The association in older COPD patients is presented in Table 5. The strength of the association was intensified in older patients, with the same statistical significance.
Discussion
In a population-based sample from South Korea, we investigated the association between PUFA intake, lung function, and HRQoL in patients with COPD, defined as FEV1/FVC < 70%. We adjusted for socio-economic status, health behaviors, total calorie intake, and predicted FEV1%, and found that N3, N6, and total PUFA intake were associated with HRQoL in a dose-dependent manner. Specifically, high doses of N3, N6, and total PUFA intake showed a positive association with mobility, self-care, and usual activities in COPD patients, whereas there was no significant association with pain/discomfort and anxiety/depression. In a subgroup analysis of older patients (age ≥ 60 years), this association was reinforced, supporting the importance of proper nutritional supplementation in older populations. Our results provide insight regarding the association between nutrition intake and HRQoL in patients with COPD.
HRQoL has been an important primary outcome in several studies and a focus area in the management of COPD. Triple inhaled treatment with long-acting beta-agonists, long-acting anticholinergics, and inhaled corticosteroids demonstrated improvement of HRQoL in COPD patients [18]. The evaluation of treatment efficacy and detection of an individual patient's potential risk for psychological and behavioral problems should be feasible by assessing their HRQoL [19]. The EQ-5D is a useful descriptive methodology evaluating five dimensions of HRQoL: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. In a previous study measuring the usefulness of the EQ-5D score in COPD patients, a correlation with hospital admissions, comorbidities, COPD Assessment Test scores, and the Medical Research Council scale for dyspnea was observed [20]. Pickard et al. [21] compared nine structured research criteria with the validity and reliability of EQ-5D usage in COPD patients and reported that the EQ-5D scores were closely related to COPD stages.
[Table 3 footnote: Comparison of estimated PUFA intake and p-value for trend were measured using multivariable linear regression analysis considering G1, G2, and G3 as continuous variables. Model 1 was adjusted for age and sex; model 2 was additionally adjusted for body mass index, smoking status, and alcohol consumption; and model 3 was additionally adjusted for residence, education, total calorie intake, and predicted FEV1%. PUFA, polyunsaturated fatty acid; COPD, chronic obstructive pulmonary disease; G1, no problem; G2, some problems; G3, severe problems; N3, omega-3; N6, omega-6; FEV1, forced expiratory volume in 1 second. Estimated prevalence of subjects and amount of PUFA intake according to degree of health-related problems are presented as percentage ± standard error and estimated amount of intake ± standard error, respectively.]
Among the five components of the EQ-5D, mobility is defined as the ability to walk, while usual activities refer to an individual's performance at work, study, household, and family/leisure activities; these two components are associated with daily physical activity. Walking ability in COPD patients is further standardized by measuring the distance covered by walking in 6 minutes, and a lower mean walking distance is generally considered a poor prognostic factor for mortality [22]. The GOLD report recommends exercise testing and assessment of physical activity based on walking distance to evaluate the effectiveness of pulmonary rehabilitation, a key non-pharmacologic management technique for COPD patients [23]. Given that physical activity is a strong predictor of all-cause mortality in COPD patients [22], our results suggest that PUFA intake may correlate with mobility and usual physical activities. However, the specific threshold amount of PUFA needed to improve physical activity is unclear, and further longitudinal study is required.
Several previous studies have shown that PUFA is associated with HRQoL. In patients with systemic lupus erythematosus (SLE) from the United States, N3 PUFA intake, calculated from a diet history questionnaire, was found to be beneficial in patient-reported outcomes assessed by the Systemic Lupus Activity Questionnaire [24]. One randomized controlled trial of 1 g/day N3 PUFA supplementation demonstrated a reduction of premenstrual symptoms and improvement of HRQoL [25]. Another double-blind randomized controlled trial of 3 g/day N3 PUFA supplementation for 3 months reported a reduction in several inflammatory markers and improvement of HRQoL in chronic hemodialysis patients [26]. Although the association between PUFA and HRQoL in patients with COPD is under-recognized, Lemoine et al. [10] suggested that individual factors should be considered when determining the association between N3 PUFA intake and symptoms. PUFA may be beneficial in other chronic diseases such as SLE or chronic kidney disease, and may also have beneficial effects on HRQoL in patients with COPD.
There are several possible mechanisms underlying this association. First, COPD is a chronic airway and systemic inflammatory disease, and N3 PUFA might attenuate this inflammatory process. Within the inflammatory processes of the human body, the N3 PUFA eicosapentaenoic acid and docosahexaenoic acid mediate anti-inflammatory responses, and several specialized pro-resolving mediators (SPMs: resolvins, protectins, and maresins) are synthesized from them [8]. Resolvin, one of the SPMs, has been associated with counter-regulated pro-inflammatory signaling and inflammatory resolution pathways in patients with COPD [27]. In an experimental study of cigarette smoke-exposed human lung, resolvin dampened the inflammatory reaction via the production of anti-inflammatory cytokines and enhanced phagocytosis by macrophages [27]. Second, the susceptibility of PUFA to oxidative stress could contribute to lowering airway inflammation. Interestingly, we observed that N6 PUFA was beneficial in improving HRQoL in COPD patients, although there is conflicting evidence regarding the health-related benefits of N6 PUFA [28,29]. PUFA can be easily oxidized due to its unstable hydrogen-carbon bonds, and an in vivo study demonstrated that N6 PUFA decreased serum C-reactive protein [29].
Our study has several strengths. This is the first population-based epidemiologic study in South Korea showing a relationship between PUFA intake and HRQoL in patients with COPD. These results highlight the importance of nutrient intake in patients with COPD to alleviate respiratory symptoms and improve HRQoL. In addition, we adjusted for socio-economic characteristics and health-related behavior, as diet is influenced by an individual's status, as well as social, economic, and cultural factors.
Despite these strengths, our study has several limitations. First, as this is a cross-sectional observational study, the causal relationship is unclear. Although we observed that PUFA intake is associated with increased HRQoL in COPD patients, the inverse relationship could exist. Second, because the nutritional survey of the KNHANES was based on a 24-hour recall method, recall bias and day-to-day variability should be considered. Third, spirometric values were obtained through pre-bronchodilator tests. Fourth, information regarding respiratory symptoms, hospital admission or exacerbation history, infections within the 4 weeks prior to the study, and recent use of corticosteroids was unavailable. Therefore, classification based on symptoms or exacerbations (e.g., ABCD grouping of the GOLD report) was not feasible. Fifth, which PUFA derivative is specifically associated with HRQoL is unclear. For example, Noguchi et al. [30] reported that eicosapentaenoic acid might improve quality of life, whereas docosahexaenoic acid was not beneficial. Finally, smoking status in terms of pack-years is an important component when assessing long-term inflammation and oxidative stress, but these data were not available in the 6th KNHANES.
N3, N6, and total PUFA intake showed a positive correlation with HRQoL in patients with COPD. Specifically, the observed association between HRQoL and PUFA intake may be attributable to mobility, self-care, and usual activities. Further randomized prospective studies are required to clarify the health-related benefits of PUFA in patients with COPD. | 2020-04-08T19:07:47.813Z | 2020-04-07T00:00:00.000 | {
"year": 2020,
"sha1": "f79e027e16f638e5eda363f2874c36fb80a43a76",
"oa_license": "CCBYNC",
"oa_url": "https://www.e-yujm.org/upload/pdf/yujm-2020-00052.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4b216cf2abc05287c4ca3ce75494600d8d7c6e93",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
24580974 | pes2o/s2orc | v3-fos-license | CPT1c Is Localized in Endoplasmic Reticulum of Neurons and Has Carnitine Palmitoyltransferase Activity*
CPT1c is a carnitine palmitoyltransferase 1 (CPT1) isoform that is expressed only in the brain. The enzyme has recently been localized in neuron mitochondria. Although it has high sequence identity with the other two CPT1 isoenzymes (a and b), no CPT activity has been detected to date. Our results indicate that CPT1c is expressed in neurons but not in astrocytes of mouse brain sections. Overexpression of CPT1c fused to the green fluorescent protein in cultured cells demonstrates that CPT1c is localized in the endoplasmic reticulum rather than mitochondria and that the N-terminal region of CPT1c is responsible for endoplasmic reticulum protein localization. Western blot experiments with cell fractions from adult mouse brain corroborate these results. In addition, overexpression studies demonstrate that CPT1c does not participate in mitochondrial fatty acid oxidation, as would be expected from its subcellular localization. To identify the substrate of CPT1c enzyme, rat cDNA was overexpressed in neuronal PC-12 cells, and the levels of acylcarnitines were measured by high-performance liquid chromatography-mass spectrometry. Palmitoylcarnitine was the only acylcarnitine to increase in transfected cells, which indicates that palmitoyl-CoA is the enzyme substrate and that CPT1c has CPT1 activity. Microsomal fractions of PC-12 and HEK293T cells overexpressing CPT1c protein showed a significant increase in CPT1 activity of 0.57 and 0.13 nmol·mg−1·min−1, respectively, which is ~50% higher than endogenous CPT1 activity. Kinetic studies demonstrate that CPT1c has similar affinity to CPT1a for both substrates but 20-300 times lower catalytic efficiency.
Carnitine palmitoyltransferase 1 (CPT1) catalyzes the conversion of long chain fatty acyl-CoAs into acylcarnitines, the first step in the transport of long chain fatty acids from the cytoplasm to the mitochondrial matrix, where they undergo β-oxidation. This reaction is not only central to the control of fatty acid oxidation, but it also determines the availability of long chain acyl-CoA for other processes, notably the synthesis of complex lipids.
There are three different CPT1 isozymes: CPT1a (also called L-CPT1) encoded by CPT1a, CPT1b (also called M-CPT1) encoded by CPT1b, and the recently described CPT1c (also called CPT1-C) encoded by CPT1c. CPT1a and CPT1b have been extensively studied since they were cloned for the first time, in 1993 and 1995, respectively (1,2). CPT1a is the most ubiquitously expressed isoform and is found not only in liver but also in pancreas, kidney, brain, blood, and embryonic tissues. CPT1b is expressed only in brown adipose tissue, muscle, and heart. Both isozymes present significantly different kinetic and regulatory properties: CPT1a displays higher affinity for its substrate carnitine and a lower affinity for the physiological inhibitor malonyl-CoA than the muscle isoform (3). In addition, the amino acid residues that are critical for catalytic activity or malonyl-CoA sensitivity have been identified for both enzymes, and three-dimensional structures have been predicted based on the carnitine acetyl transferase, carnitine octanoyl transferase, and carnitine palmitoyltransferase II crystals (4). CPT1a and CPT1b are localized in the outer mitochondrial membrane with the N and C termini facing the cytosolic side. Western blotting and activity characterization suggested that CPT1a is also localized in microsomes, but expression studies with EGFP fused to the C terminus of CPT1a showed that CPT1a is targeted only to mitochondria and that previous detection of CPT1a in microsomes was probably derived from membrane contact sites between ER and mitochondria (5). CPT1a and CPT1b have a critical role in the heart, liver, and pancreatic β-cells and are potential targets for the treatment of metabolic disorders, including diabetes and coronary heart disease.
Less is known about CPT1c. Although the protein sequence is highly similar to that of the other two isozymes, CPT1c expressed in yeast or HEK293T cells displays no catalytic activity with common acyl-CoA esters as substrates (6,7). One explanation is that palmitoyl-CoA is not a substrate for CPT1c and that another brain-specific acyl-CoA might be its natural substrate. Expression studies indicate that CPT1c is localized exclusively in the central nervous system, with homogeneous distribution in all areas (hippocampus, cortex, hypothalamus, and others). The pattern resembles that of FAS and acetyl-CoA carboxylase-α (enzymes related to biosynthesis) rather than CPT1a or ACC-β (enzymes related to oxidation) (6,8). The capacity of CPT1c to bind malonyl-CoA has been demonstrated, and it has been suggested that CPT1c regulates malonyl-CoA availability in the brain cell.
It has recently been reported that knock-out mice for CPT1c ingest less food and have a lower body weight when fed a standard diet. When these animals are fed high fat chow, body weight increases more than control animals, and they become resistant to insulin, suggesting that CPT1c is involved in energy homeostasis and control of body weight (7). Moreover, ectopic overexpression of CPT1c by stereotactic hypothalamic injection of a CPT1c adenoviral vector protects mice from adverse weight gain caused by high fat diet (9).
Herein we report that CPT1c is localized in neurons but not in astrocytes of adult brain. We also demonstrate that CPT1c is localized in the ER of the cells and not in mitochondria, and that CPT1c shows carnitine palmitoyltransferase activity.
Cells cultured in 24-well plates were transfected with 0.8 µg of plasmid (purified with the Qiagen Maxi Prep Kit) using Lipofectamine Plus reagent (Invitrogen) according to the manufacturer's protocol. Transfection efficiency was ~30-50%.
Plasmid Constructions
For pCPT1c-EGFP and pCPT1a-EGFP, rat CPT1c cDNA was obtained by reverse transcription-PCR performed with 2 µg of total rat brain RNA. The 2700-bp fragment amplified was cloned in pBluescript and sequenced. pEGFP-N3 vector (from Clontech, BD Biosciences) was used to clone the coding region of CPT1c or CPT1a, to create pCPT1c-EGFP and pCPT1a-EGFP, respectively. pCPT1c-EGFP and pCPT1a-EGFP plasmids encode CPT1c and CPT1a proteins fused to the N-terminal region of EGFP, respectively.
Chimera Constructions
pCPT1ac-EGFP-460 bp of the 5′ coding sequence of the rat CPT1a gene was PCR-amplified with primers that created a HindIII site and an HpaI site at the ends of the amplified fragment. This PCR product was cloned into a pCPT1c-EGFP plasmid previously digested by HindIII and HpaI (which deleted the 460 bp of the 5′ terminus of the CPT1c coding sequence). The resulting plasmid encodes a fused protein constituted by the N terminus and the two transmembrane domains of CPT1a, the catalytic domain of CPT1c, and EGFP.
pCPTca-EGFP-A segment of the first 462 bp of the rat CPT1c gene was PCR-amplified with two primers that created a HindIII site and a PpuMI site at the ends of the amplified fragment. This PCR product was digested and cloned into a pCPT1a-EGFP plasmid previously digested by HindIII and PpuMI (which deleted the 460 bp of the 5′ terminus of the CPT1a coding sequence). The resulting vector contained the N terminus and the two transmembrane domains of CPT1c, and the catalytic domain of CPT1a fused to EGFP.
pIRES-CPT1a and pIRES-CPT1c-The coding regions of rat CPT1a and CPT1c were cloned in vector pIRES2-EGFP (Clontech, BD Biosciences), which permits both the gene of interest and the EGFP gene to be translated from a single bicistronic mRNA.
Co-localization Studies in Brain Sections
For co-localization studies we performed combined in situ hybridization/immunocytochemistry or double immunofluorescence, using standard protocols.
For combined in situ hybridization, coronal sections (30 µm) from adult mouse forebrains were used. Processed sections were hybridized overnight at 56°C with cpt1c riboprobes (full rat cDNA) labeled with digoxigenin-d-UTP (Roche Applied Science) at a concentration of 500 ng/ml. After stringent washing, sections were incubated at 4°C overnight with an anti-DIG antibody (1/2000) conjugated to alkaline phosphatase (Roche Applied Science) and developed with 5-bromo-4-chloro-3-indolyl phosphate/nitro blue tetrazolium substrate (Invitrogen). Tissue sections were mounted on gelatinized slides with Mowiol. Sections hybridized with control sense riboprobes did not give any hybridization signal.
After in situ hybridization, some slices were collected and processed by immunofluorescence. The primary antibody was mouse anti-NeuN (1:75, Chemicon). The secondary antibody was biotinylated (Vector Laboratories, Inc., Burlingame, CA). The streptavidin-horseradish peroxidase complex was from Amersham Biosciences. Sections were developed with 0.03% diaminobenzidine and 0.003% hydrogen peroxide, mounted onto slides, and dehydrated, and coverslips were added with synthetic resin.
In double immunofluorescence experiments, sections obtained as indicated above were incubated with primary antibodies against glial fibrillary acidic protein (1/500, Chemicon MAB360) and CPT1c (1/100) overnight at 4°C in the same blocking solution. The sections were washed three times in PBS (0.1 M) and incubated for 2 h with secondary antibodies coupled to fluorochromes Alexa 488 (for green fluorescence) and Alexa 568 (for red fluorescence) at a dilution of 1/500. Sections were mounted with Mowiol and observed using a confocal Leica TCS SP2 microscope (Leica Lasertechnik GmbH, Mannheim, Germany). Images were saved in TIFF format and analyzed using Adobe Photoshop 3.0.
Co-localization Studies in Cultured Cells
Cultured cells were grown on lysine-treated coverslips in 24-well plates. Co-localization studies were performed 48 h after transfection with plasmids containing CPT1c or CPT1a fused to the 5Ј-end of EGFP. To visualize the ER, cells were washed twice in PBS (10 mM), fixed with 3% paraformaldehyde in 100 mM phosphate buffer and 60 mM sucrose for 15 min at room temperature, and then washed twice in PBS. Cells were permeabilized with 1% (w/v) of Triton X-100 in PBS and 20 mM glycine for 10 min at room temperature and then washed twice in PBS. Nonspecific binding of antibody was blocked by incubation with 1% (w/v) BSA in PBS with glycine 20 mM at room temperature for 30 min. Cells were then incubated with mouse anti-calnexin polyclonal antibody (BD Biosciences, 1:50 in 1% (w/v) BSA/PBS/20 mM glycine/0.2% Triton X-100) for 1 h at 37°C. After washing twice in PBS/20 mM glycine, cells were incubated with goat anti-mouse Alexafluor 546 (Molecular Probes, 1:500 in 1% (w/v) BSA/PBS/20 mM glycine/0.2% Triton X-100) for 1 h at 37°C, and then washed twice in PBS. Coverslips were mounted on glass slides with Mowiol. Mitochondria were visualized by incubating cells with 500 nM MitoTracker Orange CM-H2TMRos (Molecular Probes) in complete medium for 30 min, followed by 30 min in complete medium without MitoTracker, after which they were fixed as mentioned above.
Fluorescent staining patterns were visualized by using a fluorescence microscope (Leica). The captured images were processed using Adobe Photoshop 5.0.
RNA Extraction and Real-time PCR Conditions
RNA was extracted from cells by the TRIzol method (Invitrogen) and quantified spectrophotometrically. 2 µg of total RNA was incubated with DNase and reverse transcribed by Superscript III (Invitrogen) following the manufacturer's conditions. 2 µl of reaction was used in the real-time PCR amplification with TaqMan and primers designed by Applied Biosystems, following the manufacturer's conditions. An 18S expression assay was used to normalize the samples.
Lipid Extraction
Cells were washed in cold PBS buffer and gently collected with a pipette. They were then centrifuged at 700 × g for 5 min at 4°C and washed in PBS. 20 µl of sample was taken for the Bradford protein assay. After that, 200 µl of 0.2 M NaCl was added to the pellet, and the mixture was immediately frozen in liquid N2. To separate aqueous and lipid phases, 750 µl of Folch reagent (chloroform:methanol, 2:1) and 50 µl of 0.1 M KOH were added, and, after vigorous vortex mixing, the phases were separated by 15-min centrifugation at 2,000 × g at 4°C. The top aqueous phase was removed, and the lipid phase was washed in 200 µl of methanol/water/chloroform (48:47:3). After vortex mixing, centrifugation was performed at 700 × g for 5 min at 4°C, and the lower phase (lipid extract) was dried.
Quantification of Acylcarnitines by HPLC
Acylcarnitines were analyzed via an LC-ESI-MS/MS system (API 3000, PE Sciex) in positive ionization mode as described in a previous study (10). Quantification was done through multiple reaction monitoring experiments using the isotope dilution method with deuterated palmitoylcarnitine as internal standard (200 ng·ml−1). 10 µl of sample was injected into the LC-ESI-MS/MS system. Multiple reaction monitoring transitions were as follows: 400.4/85.2 for quantification of palmitoylcarnitine, 400.4/341.4 for confirmation of palmitoylcarnitine, and 403.4/85.2 for quantification of the deuterated palmitoylcarnitine internal standard. The method was linear over the range from 2 to 2000 ng·ml−1. The limit of detection and the limit of quantification were 0.14 ng·ml−1 (0.35 nmol·liter−1) and 0.48 ng·ml−1 (1.2 nmol·liter−1), respectively (in standard solutions).
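The isotope-dilution step reduces to simple ratio arithmetic: the analyte MRM peak area is referenced to the deuterated internal standard spiked at a known concentration. The sketch below assumes a response factor of 1.0 for illustration; in practice the factor comes from the calibration curve run over 2-2000 ng·ml−1.

```python
# Minimal sketch of single-point isotope-dilution quantification.
# A response factor of 1.0 is ASSUMED; it would normally be derived
# from the calibration curve.

def quantify_ng_ml(area_analyte, area_internal_std,
                   internal_std_ng_ml=200.0, response_factor=1.0):
    """Return analyte concentration (ng/ml) from the analyte/IS area ratio."""
    return (area_analyte / area_internal_std) * internal_std_ng_ml / response_factor

# Example: analyte peak half the size of the internal-standard peak
print(quantify_ng_ml(5.0e4, 1.0e5))  # 100.0 ng/ml
```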
Microsome Purification
Cells were recovered by centrifugation at 1,200 × g for 5 min at 4°C, washed in 1.5 ml of PBS, and re-suspended in 2 ml of lysis buffer (250 mM sucrose, 10 mM Tris, pH 7.4, 1 mM EDTA, supplemented with 1 mM phenylmethylsulfonyl fluoride, 0.5 mM benzamidine, 10 ng/ml leupeptin, and 100 ng/ml pepstatin). Cells were disrupted by Dounce homogenization (30 strokes with the loose pestle and 30 strokes with the tight pestle). Homogenates were centrifuged at 2,000 × g for 3 min at 4°C to remove cell debris. This crude extract was further centrifuged at 10,000 × g for 30 min at 4°C to remove the mitochondrial fraction. The supernatant was then centrifuged at 100,000 × g for 1 h at 4°C to sediment the microsomal fraction. Pellets were immediately used in the carnitine palmitoyltransferase activity assay.
CPT1 Activity
Radiometric Method-Carnitine acyltransferase activity was determined by the radiometric method as previously described (11). The substrates were palmitoyl-CoA and L-[methyl-14C]carnitine.
Chromatographic Method-The same procedure used previously (11) was followed, except that all carnitine used was cold (not radioactive). In addition, acylcarnitines extracted with water-saturated n-butanol were analyzed by an LC-ESI-MS/MS system, as described above.
Western Blot Experiments
A polyclonal rabbit antibody against the last 15 residues (796-810) of mouse CPT1c was developed by Sigma-Genosys, following the indications in a previous study (7). The specificity of the antibody was determined by enzyme-linked immunosorbent assay and Western blot experiments. For CPT1a detection, a polyclonal antibody against amino acids 317-430
of rat CPT1a (12) was used. Generally, 60 µg of protein extracts was subjected to SDS-PAGE. A 1:2000 dilution of anti-CPT1c was used as primary antibody. The secondary antibody was used at 1:5000 dilution. The blots were developed with the ECL Western blotting system from Amersham Biosciences.
Palmitate Oxidation
Palmitate oxidation to CO2 and acid-soluble products was measured in PC-12 cells 48 h after transfection. On the day of the assay, cells were washed in Krebs-Ringer bicarbonate/Hepes buffer (KRBH)/0.1% BSA, preincubated at 37°C for 30 min in KRBH/1% BSA, and washed again. Cells were incubated for 2 h at 37°C with fresh KRBH containing 2.5 mM glucose, 0.8 mM carnitine, 0.25 mM palmitate, and 1 µCi/ml [1-14C]palmitate bound to 1% BSA. Oxidation measurements were performed as previously described (13).
CPT1c Cell Type Localization
To identify the types of brain cell in which CPT1c is expressed, co-localization studies with antibodies against NeuN (a nuclear neuronal marker) or glial fibrillary acidic protein (an astrocyte marker) were performed in adult mouse brain sections. Fig. 1 shows co-labeling of CPT1c mRNA, as revealed by in situ hybridization studies, with NeuN. This pattern confirms that CPT1c is expressed mainly in neurons. In addition, no co-localization was detected between CPT1c and glial fibrillary acidic protein (double immunohistochemistry) (Fig. 1d), indicating that CPT1c is not present in brain astrocytes.
CPT1c Subcellular Localization
CPT1c Is Localized in ER of Cultured Cells-To study the intracellular localization of CPT1c, fibroblasts were transiently transfected with pCPT1a-EGFP or pCPT1c-EGFP, which encode CPT1a or CPT1c proteins, respectively, fused at their C-terminal end to EGFP. 48-52 h after transfection, the fluorescence pattern shown by CPT1a-EGFP (which was expressed in a punctate manner) was different from that of CPT1c-EGFP (which was expressed in a reticular manner). Co-localization studies were performed with MitoTracker, a potential-sensitive dye that accumulates in mitochondria, and with anti-calnexin, an ER integral protein. In some experiments cells were co-transfected with pDsRed2-ER (Clontech, Takara Bio Europe, SAS), a subcellular localization vector that stains the ER red. Fig. 2 clearly shows that CPT1c is localized in the ER membrane, but not in mitochondria. In contrast, CPT1a is localized in mitochondria, as previously described in other cells (5). The slight co-localization of CPT1a with the product of pDsRed2-ER may be due to contacts between the ER membrane and the mitochondrial outer membrane, known as mitochondria-associated membranes. To assess whether either isoform is localized in peroxisomes, other organelles implicated in fatty acid oxidation, co-localization studies were performed with anti-PMP70, a peroxisomal membrane protein. No major co-localization was observed between PMP70 and CPT1c or CPT1a. The slight co-localization of CPT1c with PMP70 may be due to a residual localization of this protein in peroxisomes (Fig. 3). The same experiments were performed with SH-SY5Y cells, PC-12 cells, and HEK293T cells, with the same results.
CPT1c Is Localized in Microsomal Fraction of Adult Mouse Brain-To eliminate the possibility that overexpression experiments in cultured cells could modify the subcellular localization of CPT1c, we performed Western blot experiments with different cellular fractions of adult mouse tissues. CPT1c was present only in brain tissue and absent in all other tissues analyzed (Fig. 4). In addition, CPT1c was localized in the microsomal fraction of brain (Fig. 4). Only low levels of CPT1c protein were present in brain mitochondria, probably due to residual contamination from microsomes. The same membranes, once de-hybridized, were reprobed with CPT1a antibodies as a positive control for mitochondria. CPT1a was present at high levels in mitochondria from liver and kidney, and some residual levels were found in the microsomal fraction of all tissues examined.
The N-terminal Region of the Protein Is Responsible for CPT1c-specific Subcellular Localization-We aimed to test whether the N-terminal end of CPT1c was responsible for the ER localization. We made new chimeric plasmid constructions in which 460 bp of the 5′ end of the CPT1a gene (which encodes the two transmembrane domains and the mitochondrial import signal described by the Prip-Buus group (14)) was replaced by the 5′ end of CPT1c, and vice versa (see scheme in Fig. 5). The recombinant plasmids were called pCPT1ca-EGFP and pCPT1ac-EGFP, respectively. SH-SY5Y cells transiently transfected with those constructions showed that CPT1ca-EGFP was localized in the ER, and that CPT1ac-EGFP was localized in mitochondria, indicating that exchange of N-terminal ends between the two CPT1 isoforms swapped the intracellular localization of the recombinant chimeric proteins (Fig. 5). These results demonstrate that the N-terminal end of CPT1c lacks the mitochondrial import signal present in CPT1a and contains a putative microsomal targeting signal responsible for ER localization.
CPT1c Does Not Participate in Fatty Acid Oxidation-To examine whether CPT1c participates in mitochondrial fatty acid oxidation, we measured increases in CO2 in PC-12 cells overexpressing CPT1c. As expected from its subcellular localization, CPT1c did not increase fatty acid oxidation, whereas CPT1a did (Table 1).
CPT1c Substrate Identification
To identify the substrate of CPT1c, we overexpressed the enzyme in PC-12 cells and attempted to identify any increased acylcarnitine species present in the lipid cell extract 48 h after transient transfection. PC-12 cells were easily transfected with Lipofectamine (Invitrogen) or Metafecten (Biontex, Germany), with transfection efficiencies of ~40-70% of total PC-12 cells, as measured by fluorescence in a FACScan cell counter. PC-12 cells were transfected with pIRES-CPT1c, pIRES-CPT1a, or empty pIRES. Western blot experiments showed a 5- to 10-fold increase in CPT1c and CPT1a levels in transfected cells. The lipid fraction of transfected cells was extracted, and the levels of acylcarnitines were measured. To quantify acylcarnitines, we used a new HPLC-MS/MS method in which no derivatization or ion-pair chromatography is needed (10). A precursor ion scan of m/z 85 allows the identification of all acylcarnitines present in the sample. Areas below chromatographic peaks (chromatograms acquired in multiple reaction monitoring mode) were measured for all acylcarnitines detected. Fig. 6 shows relative areas of chromatographic peaks in overexpressing cells compared with control (cells transfected with empty expression vector). Cells transfected with pIRES-CPT1c showed an increase of >2-fold in palmitoylcarnitine levels (Fig. 6). No other acylcarnitine was significantly increased. Cells transfected with pIRES-CPT1a (positive control) showed a 5-fold increase in palmitoylcarnitine levels and a 2- to 3-fold increase in other long chain acylcarnitines. The Wilcoxon test (a non-parametric test for two paired samples) between CPT1c-transfected cells and control cells indicated that only palmitoylcarnitine levels increased significantly in CPT1c-transfected cells. These results indicate that CPT1c has carnitine palmitoyltransferase activity and that palmitoyl-CoA is a substrate for the CPT1c isoenzyme.
CPT1c Activity
Once palmitoyl-CoA had been identified as a CPT1c substrate, we compared CPT1 activity in isolated microsomal fractions of PC-12 and HEK293T cells transfected with pIRES-CPT1c with the activity in fractions transfected with empty pIRES vector. CPT1c was overexpressed >10-fold, and the protein was found mainly in the microsomal fraction (Fig. 7A). The Western blot membrane was reprobed with mouse anti-CPT1a antibodies to determine the residual CPT1a protein present in the microsomal fraction of PC-12 cells (Fig. 7B), which is responsible for the endogenous activity in microsomes of control cells. The same antibodies could not be used in HEK293T cells, because they do not recognize the human CPT1a protein. Palmitoylcarnitine formed in the CPT1 assay was measured by the same HPLC-MS/MS method used to identify the substrate (10). Microsomes from CPT1c-transfected cells showed a 50% increase in CPT1 activity compared with control cells (endogenous activity) (Table 2). Km and Vmax values for both substrates were calculated (Fig. 8 and Table 3). Km values were similar to those of CPT1a (25), whereas Vmax values were 66 times lower than those of CPT1a (25). For example, CPT1c catalytic efficiencies for palmitoyl-CoA and carnitine were 320 and 25 times lower, respectively, than those of CPT1a. CPT1 sensitivity to malonyl-CoA was not measured in cultured transfected cells, because CPT1c activity was too low and the microsomal fraction always retained residual CPT1a activity that masked any inhibitory effect of malonyl-CoA.
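The efficiency comparison in this paragraph follows from taking catalytic efficiency as Vmax/Km. The sketch below works the arithmetic with hypothetical Km values chosen only to show how a ~66-fold Vmax gap can translate into the reported ~320-fold (palmitoyl-CoA) and ~25-fold (carnitine) efficiency differences; the paper's measured constants are in Table 3.

```python
# Worked sketch of the catalytic-efficiency comparison (efficiency = Vmax/Km).
# Vmax is expressed relative to CPT1c (= 1, with CPT1a ~66x); the Km values
# are HYPOTHETICAL placeholders chosen to reproduce the reported folds.

def efficiency_fold(vmax_a, km_a, vmax_c, km_c):
    """Fold-difference in Vmax/Km between CPT1a and CPT1c."""
    return (vmax_a / km_a) / (vmax_c / km_c)

VMAX_A, VMAX_C = 66.0, 1.0  # relative Vmax, CPT1a ~66x CPT1c

print(efficiency_fold(VMAX_A, km_a=20.0,  vmax_c=VMAX_C, km_c=97.0))   # ~320 (palmitoyl-CoA)
print(efficiency_fold(VMAX_A, km_a=500.0, vmax_c=VMAX_C, km_c=190.0))  # ~25  (carnitine)
```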
DISCUSSION
The presence of a third CPT1 isoform, CPT1c, in the mammalian brain is intriguing. It might show specific expression patterns, cellular localization, or biochemical properties that make it different enough from the other two isoforms to explain its occurrence. The data we report here on the peculiarities of CPT1c may provide clues to its cellular function. CPT1c is expressed only in the mammalian brain. The other CPT1 isoforms are expressed in other tissues and are present in other organisms such as birds, fishes, reptiles, amphibians, and insects. This suggests that CPT1c has a specific function in more evolved brains. Price et al. (6) showed that CPT1c is expressed in all regions of the brain, in a pattern similar to that of neurons. Dai et al. have recently demonstrated that CPT1c is localized to neurons of the central nervous system (9). Our results confirm these findings and demonstrate that CPT1c is not expressed in astrocytes, suggesting that CPT1c function is specific to neurons.
The notion that CPT1c is localized in mitochondria stems from an observation of CPT1c protein in the mitochondrial fraction of cells (6) and from co-localization studies with MitoTracker in GT1-7 hypothalamic cells (9). In the first study (6), CPT1c was also found in the microsomal fraction, as revealed by Western blot experiments, although the authors attributed this to contamination problems in the cellular fractionation process. In the second study (9) the authors conclude that CPT1c co-localizes with MitoTracker, although the images did not show perfect matching and co-localization studies were not performed with any ER marker. In contrast, subcellular localization studies performed by our group in cultured cells and also in adult brain clearly demonstrate that CPT1c is localized in the ER, not in mitochondria. These results indicate that CPT1c has a different metabolic function than CPT1a or CPT1b, other than facilitating the import of long chain fatty acids into mitochondria or peroxisomes to undergo β-oxidation, as demonstrated in palmitate oxidation experiments. Localization of CPT1c in the ER implicates it in a biosynthetic rather than a catabolic pathway.
Intracellular localization experiments with chimeric proteins indicate that the N-terminal region of CPT1c, which includes the two transmembrane domains, is responsible for ER-specific localization. These results complement previous studies on the CPT1a protein (14). Prip-Buus and colleagues demonstrated that a region just downstream of the second transmembrane domain (residues 123-147) is important for mitochondrial transport of CPT1a. Amino acid sequence comparison between CPT1a and CPT1c demonstrates that the putative mitochondrial transport sequence is partially altered in CPT1c, with fewer positively charged amino acids (one charged residue versus four). In addition, the second transmembrane domain is longer in CPT1c than in the other two isoforms, which may enable it to sort to the ER rather than to the mitochondrial outer membrane (15).
Previous studies (6, 7) had shown that CPT1c had no enzyme activity in yeast or HEK293T cells with palmitoyl-CoA or other acyl-CoA molecules as substrate. This indicated that the CPT1c
substrate could be a rare acyl-CoA specific to the brain. We thus attempted to measure variations in all acylcarnitine levels in neural cells overexpressing CPT1c. We found that palmitoylcarnitine was the only product that increased, indicating that palmitoyl-CoA is the preferred acyl-CoA substrate for CPT1c. Activity measurements in microsomal fractions from PC-12 and HEK293T cells confirmed that CPT1c has carnitine palmitoyltransferase activity. The failure of other authors (6,7) to detect CPT1c activity has two possible explanations: 1) they used mitochondrial fractions instead of microsomal fractions, and 2) they used a radiometric assay instead of a chromatographic method. The HPLC-MS/MS method produces reliable and accurate measurements of palmitoylcarnitine concentrations in biological samples with a sensitivity limit of 0.48 ng/ml, which corresponds to a specific activity of 0.0045 nmol·mg−1·min−1 in our CPT1 assay conditions (10). The limit of sensitivity of the radiometric assay, calculated as the standard deviation of ten blank points with a signal-to-noise ratio of 3, corresponds to a specific activity of 0.4 nmol·mg−1·min−1. This indicates that the chromatographic method is 100 times more sensitive than the radiometric one, as described elsewhere (10). Recently, other authors have also measured CPT1 activity by a tandem mass spectrometry method because of its accuracy and sensitivity (16). CPT1c has 100 times lower specific activity than CPT1a and CPT1b. One explanation is that CPT1c participates in a biosynthetic pathway, facilitating the constant transport of palmitate across the ER membrane, rather than in a highly active catabolic pathway such as fatty acid oxidation. Another explanation is that CPT1c acts as a metabolic sensor. CPT1c may have low activity in standard or optimal conditions (assay conditions), but its activity may increase in certain situations (stress, presence of signal molecules, and others).
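The conversion between the chromatographic detection limit (0.48 ng/ml, i.e. the stated 1.2 nmol/liter for palmitoylcarnitine at roughly 400 g/mol) and the quoted specific activity depends on the assay volume, protein amount, and incubation time, which are not restated here. The sketch below uses assumed values for those three parameters, chosen only to land near the quoted figure; ref. 11 gives the actual assay conditions.

```python
# Sketch of the detection-limit-to-specific-activity conversion. The assay
# volume, protein amount, and incubation time are ASSUMED values (the real
# conditions are in ref. 11); the molecular weight is approximate.

MW_PALMITOYLCARNITINE = 400.0  # g/mol, approximate

def specific_activity(conc_ng_ml, volume_ml, protein_mg, minutes):
    """Convert a product concentration (ng/ml) into nmol per mg protein per min."""
    nmol_formed = conc_ng_ml / MW_PALMITOYLCARNITINE * volume_ml  # ng/ml -> nmol
    return nmol_formed / protein_mg / minutes

# Assumed conditions: 0.2 ml assay, 0.01 mg microsomal protein, 5 min incubation
sa = specific_activity(0.48, volume_ml=0.2, protein_mg=0.01, minutes=5)
print(f"chromatographic limit ~ {sa:.4f} nmol/mg/min")       # ~0.0048, near the quoted 0.0045
print(f"radiometric limit (0.4) is ~{0.4 / sa:.0f}x higher")  # roughly the ~100x quoted
```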
Lane and co-workers (7) conclude that hypothalamic CPT1c has a role in energy homeostasis and the control of food ingestion. In addition to this localized function, the wide distribution of the protein in the brain suggests a more general, ubiquitous function, perhaps related to the equilibrium between acyl-CoA pools in the cytosol and the ER lumen. Although it is not known whether CPT2 is present in the ER of neurons, we hypothesize that CPT1c facilitates the entry of palmitoyl-CoA into the ER lumen. It has been reported that palmitoyl-CoA cannot cross the ER membrane, although palmitoylcarnitine can (17)(18)(19)(20). CPT1a or CPT1b, probably localized in mitochondria-endoplasmic reticulum connections (mitochondria-associated membranes) (21), may facilitate the entry of palmitoyl-CoA into the reticulum. In the brain, however, fatty acids are not usually oxidized, and levels of CPT1a or CPT1b are low or nonexistent. Thus the occurrence of a specific CPT1c localized in the ER membrane may ensure the entry of palmitoyl-CoA into the lumen of the ER. Another possibility is that CPT1c modulates the palmitoyl-CoA pool associated with the ER, thus regulating the synthesis of ceramide and sphingolipids, which are important for signal transduction, modification of neuronal membranes, and brain plasticity (22)(23)(24). | 2018-04-03T02:28:06.761Z | 2008-03-14T00:00:00.000 | {
"year": 2008,
"sha1": "a89c479c6e0b7a2f524ccf9e1aac361713a4aa79",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/283/11/6878.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "2986b396feae1904ebf9ae38c6ad974ab5c56e48",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
158375900 | pes2o/s2orc | v3-fos-license | Deepening the Curriculum Construction and Practicing the “Big Class” of Primary School Art
In recent years, China has invested more and more heavily in basic education, not only in the construction of traditional cultural courses, but also in the construction of art classes in primary schools. It has extended the original art and music courses by introducing related art types such as film, television, dance, and drama. To deepen curriculum construction, art teaching should be an important part of primary school teaching. The primary school art class adopts various forms of art courses to achieve cross-integration and mutual penetration of various disciplines, so that social life and art are closely linked to enhance the artistic level of primary school students.
I. INTRODUCTION
China's investment in basic education has grown increasingly strong. The original primary school curriculum included only two art courses, art and music. With the reform of education, film and television, dance, and drama have gradually entered the primary school art class. Art education is a new educational concept. In the teaching process, primary school teachers need to conduct continuous in-depth research to achieve comprehensive integration of various disciplines, so that primary school students can enjoy art education in a good environment. Deepening the curriculum reform and realizing the "big classroom" of primary school art will help the development of art education in primary education.
II. THE SIGNIFICANCE OF ART CURRICULUM CONSTRUCTION IN PRIMARY EDUCATION
Building on the original music and art courses, the primary school art class combines various artistic forms of expression such as calligraphy, dance, and drama, and uses creative means to reflect artistic questions that arise in culture, science, and society. Art innovation teaching practice activities can enhance students' cultural quality and emotional life. The art class in primary schools does not simply stack up art disciplines; rather, it helps students improve their artistic expression, artistic innovation, cross-domain transformation, and art appreciation abilities while studying the various art disciplines.
The goals of curriculum construction in the art class are mainly reflected in the following. First, innovating the foundation of art theory and realizing the integration of primary school art classes. Second, enabling students to learn through simple art forms and to appreciate, perceive, create, express, evaluate, reflect, cooperate, and communicate. Students should respect life, care for nature, be friendly, and learn to share in the art course, so that their humanistic qualities can be comprehensively improved. Third, helping primary school students improve their comprehensive art innovation and art design abilities through the art classroom, so that they not only appreciate art in class but also actively organize art practice activities in life. Fourth, through art class learning, primary school students acquire initial art appreciation, organizational and coordination skills, and innovative ability, laying a good foundation for subsequent art courses.
A. Implementing the National Art Curriculum Policy
In order to actively carry out the design of the art classroom, schools should raise the proportion of art courses at the primary level to about 9% of the total curriculum, in accordance with the relevant requirements of the national curriculum plan, so as to ensure that the art curriculum is implemented in time. At the same time, schools should actively implement the primary school art curriculum. On the basis of the original art and music courses, they should introduce a variety of art curriculum forms, set up rookie competitions, exchange books, and share topics, to ensure the significance of art courses. Schools should conscientiously grasp the spirit of the national art curriculum and formulate a reasonable primary school art curriculum according to their own characteristics, so that the art curriculum reaches all aspects of primary school students' lives.
B. Creating a Teaching Atmosphere for Art Courses
The process of primary school art cultivation pays attention to emotional experience, highlighting the characteristics of art courses. When primary school students study art courses, they can cultivate noble moral character, a love of life, and good humanistic qualities.
In the process of building the art class curriculum, the school should pay attention to creating a good atmosphere for the art curriculum and embodying a culture of educating people. In the process of learning art, students can purify their minds and enhance their appreciation of beauty. The art class is a kind of popular education. In teaching art courses, teachers should focus on accessible and effective works, emphasizing students' ability to participate in art classes, so that students can improve their interest in learning art. The values and attitudes of primary school students vary from individual to individual. Although these emotions have a direct relationship with intelligence, primary school students' emotional attitudes and values more often affect their behavior and judgment in a non-intellectual manner. At the end of each semester, the school can allow students to extend and expand their study of the content they are interested in. For example, after learning the paper-cutting art course in the lower grades, teachers can integrate national art culture and tell students about the history of paper-cutting culture, so that students can better learn the paper-cutting art. Similarly, after enjoying folk music, middle-grade students may take a strong interest in the wonderful performance of the teacher. They develop a strong desire for knowledge of the national musical instruments. After class, children will find relevant music to appreciate, which helps the inheritance of national music.
C. Conducting Curriculum-based Artistic Activities
In order to improve students' enthusiasm for learning art, many schools hold campus art festivals to promote the in-depth reform of art curriculum education. Art festivals in schools are generally conducted in the way of "competition first, performance later". This management model emphasizes the display of results and neglects the educational process to a certain degree. Therefore, schools should actively explore new methods for curriculum-based artistic activities. In order to fully utilize the charm of art, many schools have developed relevant teaching courses for instrumental music, dance, singing, public welfare, painting, and calligraphy, and encourage students to display the teaching results in class. The school can also set up relevant examinations or competitions to connect the festival effectively with classroom learning and truly make artistic activities curriculum-based and normalized.
D. Integration of Artistic Social Resources
The school alone is not enough for the development of art courses in primary schools. Therefore, it is necessary to actively integrate the resources of society so that primary school students can enjoy an environment of art learning both inside and outside the school and improve the quality of their art learning. For example, some professional youth activity centers have unique art teaching models. They have more complete art teaching facilities and professional teaching staff, and they can systematically and comprehensively provide primary school students with professional forms of art practice. Schools can actively cooperate with these professional youth activity centers in art teaching. Every afternoon, they can take turns transporting students to learn various art forms, such as hard-pen calligraphy, creative sketching, children's color, vocal music, instrumental music, hip-hop dance, ballet, national dance, Latin dance, etc. The active cooperation between the school and the activity center helps to integrate social resources and provide a good art learning environment for primary school students, thus improving the quality of art teaching. In the process of conducting art courses, the school can combine local regional cultural characteristics, formulate reasonable teaching plans, maximize the integration of social resources, and deepen curriculum construction.
E. Construction of Standardized Art Societies
With the development of the primary school art class, many schools have established a variety of art societies, such as painting clubs, music clubs, dance clubs, photography clubs, and animation clubs, which have a positive effect on students' learning of art. The degree of standardization of art associations is related to the construction of art activities in schools. Schools should adopt a standardized management mode and establish and improve the operating system of art associations. Associations must have evaluation systems, management systems, professional teachers, and teaching plans. The various teaching activities of an association can be closely linked with the various activities of the school, leading students into the atmosphere of art. For example, in the operation of a primary school photography club, an expert team from a professional photography association can be invited to come to the school regularly to explain photography knowledge and guide students in shooting relevant film and television materials, and celebrities who like photography can be invited to give professional speeches at the school to arouse students' interest in the art of photography.
F. Conducting a Hierarchical Art Curriculum
In elementary school art teaching, pupils at different stages require different art teaching methods. Teachers should develop an art syllabus that meets the psychological characteristics and learning needs of students, including teaching resources, teaching steps, teaching objectives, and teaching themes, so that students can enhance their perception and artistic appreciation in art learning. In the process of art course construction, it is necessary to integrate poetry, fairy tales, games, and other teaching modes to encourage students to actively participate in the practice of art activities. When students in the lower grades study art, the focus is mainly on cultivating their interest in art, enhancing their performance ability, and expanding their artistic vision. Students in the middle grades of primary school must go one step further than those in the lower grades; they must have a strong ability to perceive a variety of art forms. For example, with the point-to-face teaching mode, a variety of art forms can be integrated in the process of learning fine arts. In the upper grades of primary school, students are required to closely connect art history and traditional culture to enhance their understanding of art and create a good art learning environment. The school should deepen the curriculum construction, actively create a good artistic atmosphere for the students, let them experience the noble realm of art, and develop a sound psychology and complete personality in the process of improving their artistic literacy. For example, in the process of learning art, there are several aspects to be aware of. First, art is closely related to the other art disciplines, which enables the point-to-face teaching mode. Second, teachers should change the traditional boring teaching mode and adopt student-led, subject-based learning to ensure that students can master the knowledge and skills of art painting from shallow to deep. At the same time, schools should also take into account the combination of art courses and the school's own characteristics, so that art teaching is timely and operable.
G. Actively Encouraging Cooperation and Art Teaching Exploration
In primary school art classroom teaching, cooperative and inquiry teaching has always been the teaching mode advocated by curriculum reform. Through communication in the group, primary school students can acquire a broader cultural background and communicate well with adults, peers, and the surrounding environment to improve their own quality. Cooperative inquiry learning is conducive to the strengthening of skills and knowledge, and it is even more conducive to the social development of students. It is also of great significance for cultivating good character and a spirit of cooperation. In art curriculum teaching activities, cooperation and inquiry are usually a wonderful experience for students. Rich imagination and creativity can give more tension and vitality to teaching. In the process of self-directed learning and inquiry, students will consciously discover beauty, experience beauty, and express beauty. In the cooperative and inquiry teaching mode, experiential teaching mainly emphasizes that individual students integrate the knowledge about art learned in the process of experiencing it. Primary school art teaching activities are not only about knowledge teaching but also emphasize students' inner perception and ability to experience. Therefore, teachers should change the teaching mode of art, let students actively participate in group cooperation and exploration, express their views, and improve the efficiency of art learning.
A. Diversification of Art Curriculum Evaluation Indicators
Primary school art teaching emphasizes students' perceptual cognition. Therefore, schools should establish an objective and comprehensive evaluation system, which not only examines students' application of professional knowledge but also objectively evaluates students' artistic performance. For example, in the art teaching process, the teacher's art curriculum evaluation indicators may be: awards, participation in the second classroom, painting skills, creativity, appreciation, and learning attitudes and habits. Based on these first-level indicators, teachers refine many second- or third-level indicators to form a corresponding evaluation index system. In the past, the art teaching process adopted the three forms of painting demonstration, creative design, and appreciation. Nowadays, the art class pays more attention to the students' feelings. Students not only paint by hand but can also express their feelings about related works and make impromptu creative adaptations. Such performance can help students exercise their courage, know themselves, and experience success.
B. Adopting a Diverse Art Subject Evaluation
In the course of the construction of the primary school art classroom, teachers have tended to evaluate the teaching alone and neglect the central position of students. With the development of diversified teaching, teachers, students, and parents can all participate in the curriculum evaluation, which can fully reflect the diversity of art courses. A comprehensive evaluation of students' art learning ability can avoid one-sidedness, over-concentration, and subjectivity in the evaluation, thus allowing more scientific comments on students. Mutual evaluation among students helps them recognize the shortcomings in their art learning process and treat problems objectively. And the comments from parents help students appreciate the charm of art in a good learning atmosphere.
C. Evaluation Timing Is Oriented to the Entire Process
In the construction of the art curriculum in primary schools, it is necessary to pay attention to the timing of evaluation. The evaluation of students' learning is meant to enable students to study art better. Therefore, the teacher should supervise the student's art learning throughout the whole process, conduct a full evaluation, and adopt daily evaluation, stage evaluation, and semester-end general evaluation. The daily evaluation mainly focuses on differentiated comments on students. After each class, teachers comment on the students' basic abilities, basic knowledge learning, and attitudes, and make corresponding records. Stage evaluation generally refers to the comprehensive evaluation of students after a stage of learning, focusing on the good results achieved by students in that stage. The final general assessment is mainly a comprehensive evaluation of the effect of students' art learning in the semester, so that students can clearly understand what art knowledge they have mastered over the semester, which will help them improve their literacy. There will be some differences in the evaluation of students at different stages. Therefore, teachers should establish a dynamic management model, pay attention to the learning effect of students at each stage, continuously improve the artistic quality of students, and help them master more comprehensive artistic knowledge.
V. CONCLUSION
In the process of primary school art classroom teaching, teachers should deepen the curriculum construction, formulate reasonable teaching strategies according to the actual situation of students, and encourage primary school students to actively study art. Good art study can enhance the overall quality of students and their personal cultural accomplishments. Teachers should develop a good curriculum teaching strategy, objectively and impartially evaluate students' learning effects, improve students' interest in learning art, and promote the comprehensive development of primary school art classroom curriculum construction. | 2019-05-20T13:06:54.541Z | 2018-12-01T00:00:00.000 | {
"year": 2018,
"sha1": "6768865a3d3fac9b3dc79daf9389a968749840d6",
"oa_license": "CCBYNC",
"oa_url": "https://download.atlantis-press.com/article/55908461.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1f7801e7aa4dc848e1dff8835ba661cb26ae7302",
"s2fieldsofstudy": [
"Art",
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
235672096 | pes2o/s2orc | v3-fos-license | The Willingness to Receive COVID-19 Vaccine and Its Associated Factors: “Vaccination Refusal Could Prolong the War of This Pandemic” – A Systematic Review
Background The outbreak of coronavirus disease 2019 (COVID-19) in Wuhan, China, has spread globally since its declaration as a pandemic by the World Health Organization (WHO) on March 11, 2020. The COVID-19 vaccine is a crucial preventive approach that can halt this pandemic. The present systematic review aimed to assess the level of willingness to receive the COVID-19 vaccine and its associated factors. Methods A comprehensive literature search was conducted using various online databases, such as PubMed/MEDLINE, HINARI, EMBASE, Google Scholar, Web of Science, Scopus, African journals, and Google for grey literature, to retrieve related articles published up to May 08, 2021. Results The overall rate of participants' willingness to receive the COVID-19 vaccine ranged from 27.7% to 91.3%, reported from Congo and China, respectively. Factors such as age, educational status, gender, income, residency, occupation, marital status, race/ethnicity, perceived risk of COVID-19, trust in healthcare system, health insurance, norms, attitude towards vaccine, perceived benefit of vaccine, perceived vaccine barriers, self-efficacy, up-to-date on vaccinations, tested for COVID-19 in the past, perceived efficacy of the COVID-19 vaccination, recommended for vaccination, political leaning, perceived severity of COVID-19, perceived effectiveness of COVID-19 vaccine, belief that vaccination makes them feel less worried about COVID-19, believing in mandatory COVID-19 vaccination, perceived potential vaccine harms, presence of chronic disease, confidence, COVID-19 vaccine safety concern, working in healthcare field, believing vaccines can stop the pandemic, fear about COVID-19, cues to action, COVID-19 vaccine hesitancy, complacency, and receiving any vaccine in the past 5 years were associated with the willingness to receive the COVID-19 vaccine. Conclusion The level of willingness to receive the COVID-19 vaccine was insufficient, and several factors were associated with it. Health education should be provided concerning this vaccine to improve the willingness of the community.
pandemic. 6 The commonly detected severe risk factors of COVID-19 include older age and medical comorbidities like cancer and diabetes. 1 The outbreak of COVID-19 in China was brought to global attention and declared a pandemic by the WHO on March 11, 2020. 7 The COVID-19 pandemic remains a global challenge. 3 It is a highly contagious disease, and the WHO has declared the ongoing outbreak to be a global public health emergency. 8 There is obvious concern worldwide regarding the emerging 2019-nCoV as a global public health threat. 5 The COVID-19 pandemic is spreading rapidly 9 and all over the world. 10,11 It has affected individuals of all nations, continents, races, and socioeconomic groups. 11 The COVID-19 pandemic has significant mortality and morbidity rates. 1,9,12 As per the report from the WHO on 14 August 2020, there had been 20,730,456 confirmed cases of COVID-19, including 751,154 deaths worldwide. 13 Besides the significant deaths and morbidities from the COVID-19 pandemic, there will also be a substantial economic crisis. 14 This large morbidity and socioeconomic influence have demanded extreme measures across all continents, comprising nationwide lockdowns and border closures. 15 Furthermore, the COVID-19 pandemic has the potential to overwhelmingly affect young children's development worldwide, through rises in poverty and food insecurity, loss of caregivers, heightened stress, and decreased health care. These can affect not only the whole life course of the child but also future generations, through physiological, psychological, and epigenetic changes occurring in utero and during early development that can decelerate the gains made since the turn of the century. 16 Because of the COVID-19 pandemic, mental health is a major public issue. 17 The pandemic has an immense influence on youth mental health. 18 It has put considerable stress on patients, healthcare workers (HCWs), and healthcare systems. However, fetal diagnosis and pregnancy care must be maintained, and we must strive to protect the susceptible population of pregnant women and their fetuses. 19 COVID-19 places a higher burden on the emotional wellbeing of pregnant women and women in the early postpartum period, 20 and pregnant women are potentially vulnerable to mental ill health during this pandemic. 21 Besides, the COVID-19 pandemic significantly interrupted childhood vaccination practices. 22 The COVID-19 pandemic is a serious public health emergency; it is particularly deadly in exposed populations and communities in which healthcare providers are inadequately prepared to manage the infection. 15 Responses to the COVID-19 pandemic, such as quarantining of entire communities, closing of schools, social isolation, and shelter-in-place orders, have abruptly changed daily life to control the disease. 11 The management of patients with severe COVID-19 is significant in decreasing the mortality of the ongoing pandemic, but the truly essential measures include prevention, monitoring, and timely intervention. Besides rapid medical responses, continuous efforts to better understand the pathogenesis of COVID-19 will certainly enlighten the optimal management of the growing pandemic. 9
Social media has the potential to provide rapid and effective dissemination routes for key information to enhance the population's awareness of the COVID-19 pandemic, if used responsibly and appropriately. 23 To decrease the spread of COVID-19, contact tracing, testing, and social restrictions are among the most powerful approaches adopted globally in the absence of a COVID-19 vaccine. This causes major physical, psychological, and economic distress to most countries' citizens. Thus, a safe and effective COVID-19 vaccine is the most effective alternative to manage this pandemic. 24 The COVID-19 pandemic is anticipated to continue to impose large burdens of morbidity and mortality, while harshly upsetting societies and economies globally. Thus, governments must be ready to ensure large-scale distribution of a COVID-19 vaccine and equitable access when a safe and effective vaccine is available. This will need sufficient health system capacity and methods to improve trust in and acceptance of the vaccine and those who deliver it. 25 One study found that information concerning the process of vaccine development, vaccine efficacy, and individual variety affects the proportion of participants reporting COVID-19 vaccination intentions. Behavioral economics offers an empirical scheme to estimate vaccine uptake among subpopulations resistant to vaccination. 26
Study Setting
The present systematic review includes all studies conducted in different countries globally.
Search Strategies
A comprehensive literature search was conducted using the following electronic databases: PubMed/MEDLINE, HINARI, EMBASE, Google Scholar, Web of Science, Scopus, African journals, and Google for grey literature. The search used the following keywords: "willingness", "acceptance", "hesitancy", "COVID-19", "SARS-CoV-2", "vaccine", "associated factors", and "determinant factors". The Boolean operators "AND" and "OR" were employed to combine the keywords.
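For illustration, the keyword blocks can be combined into a single Boolean query string as sketched below; the exact grouping of terms used in the actual searches is not reported, so this assembly is an assumption.

outcome = ["willingness", "acceptance", "hesitancy"]
disease = ["COVID-19", "SARS-CoV-2"]
topic = ["vaccine"]
factors = ["associated factors", "determinant factors"]

def or_block(terms):
    # join synonyms with OR, each quoted as a phrase
    return "(" + " OR ".join('"%s"' % t for t in terms) + ")"

query = " AND ".join(or_block(block) for block in (outcome, disease, topic, factors))
print(query)
# ("willingness" OR "acceptance" OR "hesitancy") AND ("COVID-19" OR "SARS-CoV-2")
#   AND ("vaccine") AND ("associated factors" OR "determinant factors")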
Eligibility Criteria
The inclusion criteria for the present systematic review were: all cross-sectional studies done among adults globally, published in the English language, and published up to May 08, 2021. The exclusion criteria were as follows: articles with poor quality and articles in which the outcome variable was not clearly defined and measured were excluded from the present systematic review.
Outcome of Interest
In the present systematic review, the primary outcome was the level of willingness to receive the COVID-19 vaccine, as reported within the original articles. Likewise, the secondary outcome was the factors associated with the willingness to receive the COVID-19 vaccine, as reported within the original studies.
Data Extraction
All studies obtained from all databases were exported to EndNote version 8 software, and the duplicates were removed. Then, all studies were exported to a Microsoft Excel spreadsheet. Titles and abstracts of studies retrieved using the search strategy, and those from additional sources, were screened to identify studies that satisfied the inclusion criteria. Studies that satisfied the inclusion criteria by title or abstract screening underwent a full-text review for eligibility and data extraction. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart was used for the stepwise inclusion and exclusion of the articles. The first author, publication year, country, sample size, prevalence, and factors were included in the data extraction format.
Quality Assessment
The Newcastle-Ottawa Scale (NOS) quality assessment criteria for cross-sectional studies were used to select the studies included in the present systematic review. 27,28 The quality of each study was assessed using the modified NOS for cross-sectional studies, which has 10 points across three domains of the modified NOS components for observational studies. Thus, studies that scored ≥5 out of 10 points were included in the present systematic review. 29
Results
A total of 2671 articles were identified through the search strategies. They were retrieved from PubMed/MEDLINE, HINARI, EMBASE, Google Scholar, Web of Science, Scopus, African journals, and Google for grey literature. Of the total of 2671, 1403 articles were excluded because of duplication. Of the remaining 1268 articles, 1210 articles were excluded after review of the titles and abstracts because they were not related. Furthermore, out of the 58 articles selected for full-text screening, 2 were excluded due to lacking or inaccessible full text. Then, 56 full-text articles were assessed for eligibility based on the pre-set criteria, and 11 articles were excluded with a reason. Finally, 45 articles met the eligibility criteria and were included in the present systematic review (Figure 1).
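The screening arithmetic reported above can be verified with a short calculation; the numbers below are taken directly from the text.

identified = 2671
screened = identified - 1403            # 1268 after removing duplicates
full_text_sought = screened - 1210      # 58 after title/abstract screening
assessed = full_text_sought - 2         # 56 with accessible full text
included = assessed - 11                # 45 included in the review
print(screened, full_text_sought, assessed, included)  # 1268 58 56 45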
Features of the Included Studies
Characteristics of the studies included in the present systematic review on the willingness to receive the COVID-19 vaccine and its associated factors are as follows. Among the studies published up to May 08, 2021 globally, 45 studies were included in the present systematic review, all of which were cross-sectional. Nine studies were done in the United States, [30][31][32][33][34][35][36][37][38] nine studies were done in China, [39][40][41][42][43][44][45][46][47] one study was done in Australia, 48 four studies were done in Saudi Arabia, [49][50][51][52] one study was done in Kuwait, 53 one study was done in England, 54 one study was done in Congo, 55 one study was done in Greece, 56 two studies were done in the UK, 57,58 one study was done in Malaysia, 59 two studies were done in Japan, 60,61 one study was done in Israel, 62 one study was done in Bangladesh, 63 one study was done in Jordan, 64 one study was done in Iran, 65 one study was done in Italy, 66 one study was done in Ethiopia, 67 two studies were done in France, 68,69 one study was done in Vietnam, 70 one study was done in Uganda, 71 one study was done in Pakistan, 72 one study was done in Nigeria, 73 and one study was done in Latin America and the Caribbean (LAC). 74 The smallest sample size was 409 and the largest was 472,521, reported from Ethiopia 67 and LAC, 74 respectively (Table 1).
The Willingness to Receive COVID-19 Vaccine
As displayed in Table 1, 45 studies from various countries were included in the present systematic review.
Large variability was observed in the level of willingness to receive the COVID-19 vaccine across different countries. The highest level of willingness towards receiving the COVID-19 vaccine was reported from China, at 91.3%, 47 whereas the lowest level was recorded in Congo, at 27.7%. 55
Factors Associated with the Willingness to Receive COVID-19 Vaccine
Discussion
It is known that more than half of the world's population faces long-term restrictions as the new normal to avoid the spread of COVID-19. 75 As the COVID-19 pandemic is extensive worldwide, there is an urgent requirement to develop effective vaccines as the most powerful approach to stop the pandemic. 76 Scientists are struggling to offer a proven treatment for COVID-19, and the development of vaccines against COVID-19 and their global access are a priority to end the pandemic. However, the success of this approach depends on individuals' willingness to be immunized. Questions like "what will happen if individuals do not want the injection?" are what worry the experts. Because of this, numerous experts have warned of a worldwide decrease in community trust in immunization and a rise in vaccine hesitancy during the past decade. 77 The present systematic review included all cross-sectional studies conducted on the willingness to receive the COVID-19 vaccine and its associated factors, because understanding the level of willingness to receive the COVID-19 vaccine and its associated factors provides valuable knowledge and direction for clinical implementation and intervention development. The present systematic review has reviewed all evidence on the willingness to receive the COVID-19 vaccine and its associated factors. In doing so, 45 cross-sectional studies from different countries were included. The findings of the present systematic review revealed large variability in the level of willingness to receive the COVID-19 vaccine across different countries. The overall rate of participants' willingness to receive a COVID-19 vaccine ranged from 27.7% to 91.3%, reported from Congo and China, respectively. 47,55 This suggests a serious problem for managing and controlling the current COVID-19 pandemic. For the purpose of a permanent solution, vaccines are being developed by numerous countries for the safety of their populations during the current COVID-19 pandemic. 60 If a vaccine becomes available, it might be achievable to develop herd immunity and guard those who are most susceptible to the critical consequences of COVID-19. 75 However, with this level of willingness towards receiving the COVID-19 vaccine, it would be extremely difficult to manage and control the current COVID-19 pandemic. In turn, this might prolong the period of this pandemic, affecting populations of all age categories globally.
Concerning the associated factors, of the total of 45 studies included in the present systematic review, 39 assessed the factors associated with individuals' willingness to receive the COVID-19 vaccine. Of these 39 studies, some reported that sociodemographic factors such as age, 32,[35][36][37]39,43,48,49,56,58,60,61,65,68,69,72,73 educational status, [30][31][32]36,43,72 gender, 32,[35][36][37]47,48,52,53,55,[59][60][61]65,68,69,[71][72][73][74] income, 36,54,61 residency, 36,60,74 occupation, 54,62 marital status, 47,49,71 and race/ethnicity 32,35,37,54 were factors associated with the willingness to receive the COVID-19 vaccine. This might be because education is a powerful strategy for disseminating essential information about the health of individuals. In fact, people's level of education will affect their general knowledge and awareness in particular, whereas residency also has an effect on access to information, since the urban population receives information more easily than the rural population. Occupation could also affect the willingness of individuals towards receiving the COVID-19 vaccine in many ways; for example, being an HCW, or being under stress at work, may make individuals more eager for vaccination than their counterparts. Age also has an impact on the willingness of people towards this vaccine; in particular, older individuals might have a sense of responsibility and accountability for themselves and their families relative to individuals in the youngest age groups. The level of income also affects the willingness of an individual towards this vaccine; a possible justification would be the expenses paid for transport. Furthermore, the studies reported that factors such as perceived risk of COVID-19, 30,31,34,44,47,50,51,53,58,59,[68][69][70][71] trust in healthcare system, 46,51 health insurance, 31,48 norms, 31,65 attitude towards a vaccine, 31,55,58 perceived benefit of vaccine, 31,40,46,65,70 perceived vaccine barriers, 31,40,70 self-efficacy, 31 up-to-date on vaccinations, 32 being tested for COVID-19 in the past, 32,72 responsibility, 39 perceived efficacy of the COVID-19 vaccination, 41,47 use of social media for COVID-19 vaccine-related information, 41 recommended for vaccination, 34 political leaning, 34 perceived severity of COVID-19, 34,44,46,70,74 perceived effectiveness of a COVID-19 vaccine, 34,43,59,61 belief that vaccination makes them feel less worried about COVID-19, 59 believing in mandatory COVID-19 vaccination, 50 perceived potential vaccine harms, 34,42,53,58 presence of chronic disease, 45,48,60 previously received an influenza vaccine, 43,45,47,50,53,58,69 confidence, 39,42,73 having COVID-19 vaccine safety concern, 52,65 working in the healthcare field, 65 71 receiving any vaccine in the past 5 years, 71 perception that disease can be prevented by vaccine, 44 willingness to protect others by getting oneself vaccinated, 61 taking direct care of COVID-19 patients, 72 belief that only people who are at risk of serious illness should be vaccinated, 58 trust in government, 73 complacency, 39,42 willingness to pay for and travel for a vaccine, 73 themselves or a member of their household belonging to a vulnerable group, 56 trust in public health authorities, 73 believing the COVID-19 virus was not developed in laboratories, 56 believing COVID-19 is far more contagious and lethal relative to the
H1N1 virus, 56 compliance with community mitigation strategies, 74 being in the private sector, 45 encountering suspected or confirmed COVID-19 patients, 45 self-reported health outcomes, 46 believing that next waves of COVID-19 are coming, 56 knowledge score regarding symptoms, transmission routes, and prevention and control measures against COVID-19, 56 and perception that COVID-19 will persist 57 were factors associated with the willingness to receive the COVID-19 vaccine.
This might be because behavioral factors have a critical influence on newly developed things, particularly vaccines. Perceptions or attitudes towards the COVID-19 vaccine might be due to a lack of sufficient knowledge or awareness concerning this vaccine. In fact, information has a strong effect on the awareness of individuals because it clarifies the misunderstandings that leave people confused. Besides, people might consider personal protection behaviours a substitute for vaccination in the prevention of COVID-19. They may believe commitment to these precautions is adequate for the prevention of this pandemic. 78 This evidence suggests that the community should be made aware that personal protection behaviour cannot be a substitute for vaccination to prevent COVID-19 infection. This might be because of the dissemination of misinformation within the community. Furthermore, a pandemic is a community experience, putting a substantial effect on all citizens and demanding a cooperative response. 79 However, vaccine hesitancy remains a barrier to community vaccination against extremely infectious diseases. 80 It is a key impending problem for this pandemic. 81 It remains insidious and multifactorial even among COVID-19 survivors, since most recovered patients were revealed to be refusing or uncertain regarding SARS-CoV-2 vaccination. 79 COVID-19 vaccine hesitancy is common and can be a barrier to the distribution of vaccines. 82 This is because the community may be concerned about the safety of the vaccine, since COVID-19 vaccines were developed rapidly worldwide. This could contribute to vaccine hesitancy. 80
The Importance of Understanding the Level of Willingness to Receive COVID-19 Vaccine and Its Associated Factors
The COVID-19 pandemic has significantly harmed the lives of individuals globally by affecting their economic welfare and their health and by changing the behavior of our society extensively. This condition may create a strong incentive for individuals to buy a COVID-19 vaccine. 83 However, there is controversy about the safety and efficacy of COVID-19 pandemic vaccines, which may contribute to low vaccination rates. 84 The effort of the scientific community in searching for a vaccine for COVID-19 may be hindered by diffuse vaccine hesitancy. 85 The actual effectiveness of vaccination against COVID-19 might be challenged by vaccine hesitancy. 86 The decline of participants' willingness to vaccinate for COVID-19 may undermine the pandemic response and the public health advantages of an effective vaccine. 87 Besides, a low vaccination response could make the accomplishment of herd immunity to COVID-19 difficult and unnecessarily extend the pandemic. 88 Since HCWs are amongst the first to receive COVID-19 vaccines, their perceptions or attitudes about the safety of these vaccines should be addressed as early as possible. 89 Therefore, an advanced understanding of young adults' willingness to take a COVID-19 vaccine and the possible factors affecting their vaccine intention will contribute to the development and implementation of effective methods to encourage COVID-19 vaccine uptake among this group. 90 This finding is supported by the study which reported that addressing sociodemographic determinants relating to COVID-19 vaccination may help augment the uptake of the worldwide vaccination program to tackle future pandemics. 49 Improving the understanding of vaccination hesitancy in the context of COVID-19, and finding and using policies to control it, may be as significant as discovering a safe and effective vaccine. 91 Besides, in order to improve attitudes towards COVID-19 vaccination, it is very vital to start providing community health education on the COVID-19 vaccine as soon as possible, prior to the availability of this vaccine. 88 Overall, based on this evidence, targeted health education interventions are required to augment the uptake of the future COVID-19 vaccine. 49 In addition, only by educating the general population about the safety, advantages, and efficacy of vaccines can we hope to prevent the needless prolongation of the COVID-19 pandemic. 88 It is crucial that the public communicate their understanding that the risk of unfavorable results from anything other than rigorous product development will likely reverberate throughout the population and possibly spill into fear of vaccines. Swiftness is essential for this urgently required vaccine; however, ensuring it is safe is an ethical and humanistic responsibility even if no one in the community is inspecting. 92 Finally, in the present systematic review, the level of participants' willingness towards the COVID-19 vaccine and its associated factors across different countries have been briefly summarized. This showed that some countries have very low levels of willingness towards the COVID-19 vaccine. Furthermore, several factors were found to have an association with the willingness to receive this vaccine (Table 1). This suggests a critical problem, because if people have unfavorable attitudes, perceptions, and hesitancy towards this vaccine, these would have a massive effect on the vaccination rate, particularly if they are HCWs.
The study suggested that it is essential to ensure that both HCWs and the public have access to reliable and sufficient information about vaccines to increase the vaccine acceptance rate. 93 In fact, if HCWs are not eager to recommend the COVID-19 vaccine to the community, this would have a critical effect on the population's utilization of this vaccine and could thereby prolong the COVID-19 pandemic. This is supported by the study which stated that future education must be prioritized for HCWs to achieve vaccine acceptance by the population, because their attitude regarding vaccines has proved to be a determining factor in their own use of the vaccine and their willingness to recommend a vaccine to their patients. 94
Conclusion
COVID-19 was initially reported from China and then rapidly crossed all borders, infecting people of all age groups globally. It is known that this pandemic poses a critical worldwide challenge with large impacts and several undisclosed events. This pandemic has caused a considerable loss of life and developed into a historic danger to several healthcare systems globally. The crucial element in this initiative is the human behavior of accepting a COVID-19 vaccine.
The overall rate of participants' willingness to receive the COVID-19 vaccine ranged from 27.7% to 91.3%, reported from Congo and China, respectively. Age, educational status, gender, income, residency, occupation, marital status, race/ethnicity, perceived risk of COVID-19, trust in healthcare system, health insurance, norms, attitude towards vaccine, perceived benefit of vaccine, perceived vaccine barriers, self-efficacy, up-to-date on vaccinations, tested for COVID-19 in the past, responsibility, perceived efficacy of the COVID-19 vaccination, use of social media for COVID-19 vaccine-related information, recommended for vaccination, political leaning, perceived severity of COVID-19, perceived effectiveness of a COVID-19 vaccine, belief that vaccination makes them feel less worried about COVID-19, believing in mandatory COVID-19 vaccination, perceived potential vaccine harms, presence of chronic disease, previously received an influenza vaccine, confidence, having COVID-19 vaccine safety concern, working in the healthcare field, believing vaccines can stop the pandemic, relying on the Centers for Disease Control and Prevention website for COVID-19 updates, fear about COVID-19, being HCWs, close attention to the latest news of the vaccine, cues to action, COVID-19 vaccine hesitancy, receiving any vaccine in the past 5 years, perception that disease can be prevented by vaccine, willingness to protect others by getting oneself vaccinated, taking direct care of COVID-19 patients, belief that only people who are at risk of serious illness should be vaccinated, trust in government, complacency, willingness to pay for and travel for a vaccine, themselves or a member of their household belonging to a vulnerable group, trust in public health authorities, believing the COVID-19 virus was not developed in laboratories, believing COVID-19 is far more contagious and lethal relative to the H1N1 virus, compliance with community mitigation strategies, being in the private sector, encountering suspected or confirmed COVID-19 patients, self-reported health outcomes, believing that next waves of COVID-19 are coming, knowledge score regarding symptoms, transmission routes, and prevention and control measures against COVID-19, and perception that COVID-19 will persist were factors associated with the willingness to receive the COVID-19 vaccine.
The present systematic review has addressed crucial issues for healthcare providers, stakeholders, governments, health policy-makers and implementers, researchers, and the community as a whole. Significant policy effort may be vital to improve the community's willingness to accept a COVID-19 vaccine in order to achieve sufficient vaccination rates. It is very important to start providing health education to communities on the issue of COVID-19 vaccination as soon as possible in order to improve their willingness towards COVID-19 vaccination. The general public should be made aware of the safety, benefits, and efficacy of a vaccine for COVID-19 to prevent the unnecessary prolongation of the COVID-19 pandemic. Lastly, since the COVID-19 vaccine is a crucial preventive approach that can halt this pandemic, all barriers that could influence the willingness to receive the COVID-19 vaccine should be urgently addressed by community health strategies.
Data Sharing Statement
The data used to support the findings of this study are in the hands of the corresponding author.
Author Contributions
The author made a significant contribution to conception and design, acquisition of data, or analysis and interpretation of data; took part in drafting, revising or critically reviewing the article; gave final approval of the version to be published; and agrees to be accountable for all aspects of the work.
Disclosure
The author declares no conflicts of interest for this work. | 2021-06-30T05:26:39.870Z | 2021-06-01T00:00:00.000 | {
"year": 2021,
"sha1": "50527e2ecd3cb44583824c152964c5e41471557c",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=70816",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "50527e2ecd3cb44583824c152964c5e41471557c",
"s2fieldsofstudy": [
"Medicine",
"Economics"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226964671 | pes2o/s2orc | v3-fos-license | Temporal Dynamic Model for Resting State fMRI Data: A Neural Ordinary Differential Equation approach
The objective of this paper is to provide a temporal dynamic model for the resting state functional Magnetic Resonance Imaging (fMRI) trajectory to predict future brain images based on a given sequence. To this end, we propose a model that takes advantage of representation learning and the Neural Ordinary Differential Equation (Neural ODE) to compress the fMRI image data into a latent representation and learn to predict the trajectory following a differential equation. The latent space was analyzed with a Gaussian Mixture Model. The learned fMRI trajectory embedding can be used to explain the variance of the trajectory and predict human traits for each subject. This method achieves an average spatial correlation of 0.5 for the whole predicted trajectory and provides trained ODE parameters for further analysis.
Introduction
Figuring out the spatial temporal relationships in functional Magnetic Resonance Imaging (fMRI) trajectories is a grand challenge. The field lacks an explainable and accurate model to fit the measured data, and we are going to provide a new method to fit the fMRI trajectory. There are two main challenges in the field. First, the underlying process of the resting state fMRI data trajectory is unknown (Liegeois et al. [2017]). Second, it is difficult to eliminate random noise and physiological noise in the data, which makes the patterns more complex. The AutoRegressive model (Zalesky et al. [2014]) and the Hidden Markov Model (Vidaurre et al. [2017]), the two well-known methods for explaining temporal dynamics, are far from understanding the process well.
Many works are dedicated to analyzing the spatial temporal dynamics of measured fMRI data (Lurie et al. [2020], Chang and Glover [2010], Liegeois et al. [2017]). Some intrinsic properties are extracted from the observed data without an explicit mathematical model to interpret them. No explainable and accurate model has been proposed except some models based on Regions Of Interest (ROI) with linear/non-linear Gaussian hypotheses (Liegeois et al. [2017]). Those works do not explore the latent representation of measured fMRI data for spatial temporal information, which can be utilized to model the whole brain data. Neural networks can help explore the principle of transition between spatial temporal representations with a predefined network architecture. Our work provides an interpretable model that can help predict future brain maps, or what happened between two given trajectory segments, to interpolate the original data. The main idea is to compress the trajectory into a spatial temporal latent representation and use a video prediction backbone to constrain the representation and help do forward prediction. In this work, we are not going to give a physiological interpretation of the temporal relationships of measured fMRI data but to provide a new way of spatial temporal modeling of the fMRI trajectory.
Predicting future fMRI data can be viewed as a video prediction problem. Video prediction is challenging because of its uncertainty (Jayaraman et al. [2018]). Networks that predict video frames recursively accumulate blur in the images, which makes the prediction unusable after several time points. To mitigate this problem and make high-quality predictions, representation learning can be used to compress the spatial temporal information into the bottleneck, and a temporal dynamic model can take advantage of these representations to do forward prediction. Our contribution is that we propose to use the Neural Ordinary Differential Equation (Neural ODE) as a video prediction backbone, combined with a spatial temporal representation learning scheme, to learn the latent information of a group of fMRI images.
The paper is organized as follows. In section 2, we compare our model with other representation learning and video prediction models, and we compare the differences between traditional temporal dynamic models for resting state fMRI data and ours. In section 3, the formulation of the Neural ODE is introduced and we explore the usage of the learned spatial temporal representation. In section 4, we specify the experimental settings and results. In section 5, we discuss the key factors of success in our model and its potential usage in exploring temporal relationships in fMRI data.
Related work
Time-varying functional connectivity is a popular terminology for describing the temporal relationships of fMRI data, discussed in Lurie et al. [2020] and Liegeois et al. [2017]. There are two popular methods for temporal dynamic modeling of resting state fMRI data: the AutoRegressive (AR) model and the Hidden Markov Model (HMM). Liegeois et al. [2017] and Zalesky et al. [2014] used the AR model as a simple linear model to fit the ROI trajectories of resting state data and generate null data to test whether dynamic functional connectivity is significant enough. HMM is used to find the hidden states underneath the data. Vidaurre et al. [2017] found there are two meta-states among the tens of states obtained from HMM. However, these methods are all based on ROIs, which cannot recover the whole brain activity after establishing the temporal model. Previous studies also introduced PCA and ICA to compress the whole data into a 1D array and analyze it on this basis. Kim et al. [2020] introduced the beta Variational AutoEncoder to compress the transformed 2D image into a 1D array while disentangling the latent factors into different latent variables. Inspired by this, we develop a method to compute the spatial temporal representation to model the whole trajectory for each subject. Other methods used to explore functional connectivity dynamics are introduced in the works of Cabral et al. [2017], Kashyap and Keilholz [2019], Chang and Glover [2010], and Laumann et al. [2017].
In our setting, we are going to predict future fMRI images based on a given sequence, which can be viewed as a video prediction problem. This problem has two main categories: forward prediction and bidirectional prediction. Making future predictions while maintaining high image quality is hard. There are mainly two ways to deal with this problem. The first method, in Oh et al. [2015], learns the prediction in the image space, while Ranzato et al. [2014] and Jayaraman and Grauman [2015] learned the temporal dynamic information of video frames in a latent space. Recently, the time-agnostic video prediction proposed by Jayaraman et al. [2018] lends a new viewpoint to the field. In our model, we train the network to learn a spatial temporal latent representation for a batch of images while adapting the Neural ODE to fit the learned representation.
Neural ODE was first introduced in Chen et al. [2018]. Later on, more machine learning methods related to ODEs appeared, including the Augmented ODE introduced in Dupont et al. [2019], the second-order ODE in Yildiz et al. [2019], and the Stochastic Differential Equation (SDE) in Jia and Benson [2019], providing more methods to model the temporal relationships of given data. Here we adapt the Augmented ODE in our experiment, which enlarges the representation space of the Neural ODE; this is beneficial for improving the quality of prediction and makes it possible to predict beyond the space of the input fMRI data trajectory to account for the variation of complex data. Another method using a neural network to fit fMRI data was proposed in Kashyap and Keilholz [2020]. Khazaee et al. [2017] and Du et al. [2018] explored fMRI data temporal relationships, which is useful for disease diagnosis or providing insight into fMRI data trajectories, as shown in Tagliazucchi et al. [2012]. In our work, we use the learned spatial temporal latent representation to do human traits prediction. This method can also be useful in individual classification and critical point analysis.
Spatial Temporal modeling on rsfMRI data
Our goal is to fit observed rsfMRI data with a Neural ODE network to make it possible to predict future fMRI data given the input trajectory. We can model it as a video prediction problem, in which the goal is to predict the future frames given the first several input frames. There are two basic tasks. The first one is forward prediction, where the target is all future frames. The second task is bidirectional prediction, where we are given the first and the last several frames and the target is to do the interpolation. In this section, we will talk about how to train the Ordinary Differential Equation using a neural network to fit the resting state fMRI data trajectory. We will also explain how the variance of the future prediction is accounted for in the latent representation of the trajectory. The latent representation will contribute to some downstream analyses.
Basic definition
The Human Connectome Project (HCP) provides resting state fMRI data records for different subjects. Each subject provides 1200 time points of volumetric data with size 91 × 109 × 91. Each volumetric image corresponds to a frame in the video prediction context. We adopt the dimension reduction method used in Kim et al. [2020] to reduce the volumetric data to 2D data of size 192 × 192. We denote the j-th 2D fMRI data frame of subject i as X_{i,j}. Our aim is to use the given first j video frames X_{i,0:j−1} to predict the future frames of subject i, which are X_{i,j:J}, where J is the maximum number of time points in the record.
Neural ordinary differential equation
Video prediction is a challenging problem, since the prediction quality may drop dramatically after predicting several time points. The uncertainty in video frames also contributes to the difficulty of predicting accurate future frames. Here we first try to solve the problem of prediction quality by using a latent representation and the Neural Ordinary Differential Equation introduced in Dupont et al. [2019]. Uncertainty will be explained by a Variational AutoEncoder in the next subsection.
(a) Frames in one fMRI trajectory are grouped into four sets, encoded into a spatial temporal latent representation, and fitted with an ODE for prediction. The predicted latent code is decoded back to image space.
(b) Spatial temporal latent representation learning with the Ordinary Differential Equation as backbone to explore the temporal dynamics of the original data and predict future frames. Figure 1: Video generation scheme learning the spatial temporal latent representation from groups of fMRI images and doing forward prediction with the Ordinary Differential Equation as backbone.
Since resting state fMRI data changes very slowly during the recording time, we first downsample the original 1200 time points to 400 time points to reduce the prediction length without influencing the utility of our result. Moreover, we apply an innovative data processing step to group the whole fMRI data trajectory into 4 groups for each individual, in which each group has 100 data frames. As shown in Fig. 1a, we first use an encoder to compress the 3D concatenated data (the first dimension is time while the last two dimensions are the height and width of the 2D image frame) into a 1D array of length 64 that represents the spatial temporal information of the fMRI sequence over 100 time points, which is denoted as z_0. The reason for selecting 100 as the group size is that this length of data can successfully reveal the spatial temporal information for a certain subject. The encoder introduced here can be written as

z_0 = f_θ(X_{i,0:99})   (1)

We will not include subscripts for z_t to avoid clutter in our notation, but z_0 is determined by the input data for a certain subject i. Here f_θ is the encoder and θ is the parameter of the encoder network. Then we apply the Neural ODE on the latent representation to forward predict z_t (t ∈ N^+). The differential equation is as follows:

dz_t/dt = W_2 Φ(W_1 z_t + b_1) + b_2   (2)

in which W_1, W_2, b_1, b_2 are linear network parameters and Φ is a nonlinear function. Here we concatenate 0 with z_t to enlarge its representation space, following the method of Augmented Neural ODE (Dupont et al. [2019]). By solving the differential equation, we can obtain the latent representation in the future by following

ẑ_t = z_0 + ∫_0^t (dz_s/ds) ds   (3)

Lastly, we decode the predicted spatial temporal representation to recover the data in image space following

X̂_{i,t·100:(t+1)·100−1} = g_φ(ẑ_t)   (4)

where g is the decoder and φ is its parameter. Recall that we have four groups of data for each individual; the first group is used as input while the remaining three are regarded as output. The loss function is written as follows:

L = (1/b) Σ_{i=1}^{b} Σ_{t=1}^{3} ‖X̂_{i,t·100:(t+1)·100−1} − X_{i,t·100:(t+1)·100−1}‖_2^2   (5)

where b is the batch size in training and X_{i,t·100:(t+1)·100−1} is the ground truth of the estimated fMRI data trajectory. We use the Mean Squared Error between the decoded images and the ground truth images as the training loss.
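A minimal PyTorch sketch of the latent ODE part of this pipeline is given below, assuming the torchdiffeq package for the ODE solver; the module structure, the Tanh nonlinearity, and the augmentation size of 8 are illustrative choices, not the exact implementation used here.

import torch
import torch.nn as nn
from torchdiffeq import odeint

class ODEFunc(nn.Module):
    # Two-layer dynamics dz/dt = W_2 Phi(W_1 z + b_1) + b_2, as in Eq. (2)
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, t, z):
        return self.net(z)

latent_dim, aug_dim = 64, 8                      # aug_dim is an assumed value
func = ODEFunc(latent_dim + aug_dim)

def predict_future_codes(z0):
    # z0: (batch, 64) code of the first group of 100 frames from the encoder f_theta
    z0_aug = torch.cat([z0, z0.new_zeros(z0.size(0), aug_dim)], dim=1)  # append zeros
    t = torch.tensor([0.0, 1.0, 2.0, 3.0])       # one ODE time unit per group of frames
    traj = odeint(func, z0_aug, t)               # solves Eq. (3); shape (4, batch, 72)
    return traj[1:, :, :latent_dim]              # z_hat_1, z_hat_2, z_hat_3 for g_phi

z0 = torch.randn(8, latent_dim)                  # stand-in for encoder output
z_hat = predict_future_codes(z0)                 # decode each z_hat_t, then apply Eq. (5)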
spatial correlation: At test time, we do forward prediction given the first group of images. Spatial correlation is used to evaluate the performance of the trained network:

ρ(x, x̂) = C(x, x̂) / √(C(x, x) C(x̂, x̂))

where C is the covariance.
Here x, x̂ are the vectorized forms of X and X̂. A high spatial correlation value indicates high similarity between the estimated images and the ground truth images.
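The spatial correlation metric can be computed directly from the vectorized frames; the sketch below is a straightforward NumPy implementation of the formula above (the small epsilon for numerical stability is our addition).

import numpy as np

def spatial_correlation(X, X_hat, eps=1e-8):
    # Pearson correlation between vectorized ground truth and predicted frames
    x, x_hat = X.ravel().astype(float), X_hat.ravel().astype(float)
    x = x - x.mean()
    x_hat = x_hat - x_hat.mean()
    return float(np.dot(x, x_hat) / (np.linalg.norm(x) * np.linalg.norm(x_hat) + eps))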
bidirectional prediction: While the description above focuses on forward prediction, it generalizes easily to bidirectional prediction. In this setting, for each subject, only the first 200 and the last 100 time points are known during training. We can assign each latent representation to the trajectory it belongs to, so the predicted latent representations are decoded and matched to the known groups of images, while the Neural ODE interpolates the remaining 100 time points by fitting the data at both ends of the trajectory. The loss function is tweaked accordingly:

L_{bi} = \frac{1}{b} \sum_i \sum_{t \in \{0, 1, 3\}} \big\| \hat{X}_{i,\, t \cdot 100:(t+1) \cdot 100 - 1} - X_{i,\, t \cdot 100:(t+1) \cdot 100 - 1} \big\|_2^2. \quad (7)

Therefore, we use the spatial-temporal latent representation for the different groups of frames; the network performs forward or bidirectional prediction of frames by first predicting in latent space and then decoding to image space.
Variance explanation introduced by VAE
To further explain the variance in the spatial-temporal latent representation, we introduce a Variational AutoEncoder (VAE) following Kingma and Welling [2013] to model the latent space with a Gaussian prior. In the previous subsection we used an AutoEncoder (AE) to encode and decode the fMRI latent representation; here we make a slight change to that part. The VAE encoder encodes a group of images into a latent distribution, and we adopt the reparametrization trick of Kingma and Welling [2013] to sample the latent representation from this distribution. The distribution established here explains the variance of the latent code across subjects. The encoder part is changed into

(\mu_0, \mathrm{logvar}_0) = f_\theta(X_{i,0:99}), \quad (8)

z_0 = \mu_0 + \epsilon \odot \exp\!\big(\tfrac{1}{2}\,\mathrm{logvar}_0\big), \quad \epsilon \sim \mathcal{N}(0, I), \quad (9)

where \mu_0 and \mathrm{logvar}_0 are the mean and log variance of the spatial-temporal latent distribution, and z_0 has the same meaning as in the previous subsection, namely the spatial-temporal representation of the given trajectory. Since we model the prior distribution of the latent space as white Gaussian, we use the KL divergence to minimize the discrepancy between the approximate posterior p(z_0 | X_{i,0:99}) and the given prior N(0, 1):

L_{KL} = D_{KL}\big( p(z_0 \mid X_{i,0:99}) \,\|\, p(z) \big), \quad (10)

in which p(z) is the prior Gaussian.
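A minimal PyTorch sketch of this change to the encoder head is shown below. The input feature size of 256 matches the spatial code dimension described later, but treating the VAE head as a single pair of linear layers here is a simplifying assumption.

```python
import torch
import torch.nn as nn


class VAEHead(nn.Module):
    def __init__(self, in_dim=256, latent_dim=64):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)      # predicts mu_0
        self.logvar = nn.Linear(in_dim, latent_dim)  # predicts logvar_0

    def forward(self, features):
        mu, logvar = self.mu(features), self.logvar(features)
        eps = torch.randn_like(mu)                   # reparametrization trick
        z0 = mu + eps * torch.exp(0.5 * logvar)
        # Closed-form KL divergence to the standard normal prior N(0, I).
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        return z0, kl.mean()


z0, kl_loss = VAEHead()(torch.randn(8, 256))
```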
Other temporal dynamic models
In the model described above, we use an ODE backbone to predict the spatial-temporal latent representation z_t given z_0. Here we also try a different backbone, as well as a pure RNN model without spatial-temporal representation learning. For the model shown in Figure 1, we replace the ODE with a Recurrent Neural Network (RNN) as shown in Figure 10a: the RNN takes z_{t−1} as input and outputs z_t, with all other settings unchanged. To test the effectiveness of the latent representation z_t itself, we also try an intuitive RNN architecture shown in Figure 10b, which we call the pure RNN model: the RNN takes the 2D fMRI image at one time point as input and outputs the image at the next time point. The pure RNN therefore performs one-step-ahead prediction rather than predicting a whole group of fMRI images.
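As one possible instantiation, here is a minimal sketch of the RNN replacement for the ODE backbone, assuming PyTorch; the choice of a GRU cell is an assumption for illustration, since the text does not fix the recurrent unit.

```python
import torch
import torch.nn as nn


class LatentRNN(nn.Module):
    """Maps z_{t-1} to z_t in latent space; one step per group of 100 frames."""

    def __init__(self, latent_dim=64):
        super().__init__()
        self.cell = nn.GRUCell(latent_dim, latent_dim)

    def forward(self, z0, steps=3):
        z, h, outputs = z0, torch.zeros_like(z0), []
        for _ in range(steps):           # output of one step feeds the next
            h = self.cell(z, h)
            z = h
            outputs.append(z)
        return torch.stack(outputs)      # (steps, batch, latent_dim)


z_future = LatentRNN()(torch.randn(8, 64))
```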
Gaussian mixture model analysis of the latent space
We model the prior distribution of z_0 as Gaussian, but under the Neural ODE, z_t will not necessarily follow a Gaussian distribution, and a single Gaussian is a poor explanation of the latent space. We therefore introduce a Gaussian Mixture Model (GMM), as in Reynolds [2009], to help explain the latent space and to cluster it in search of common patterns among all the groups of images. We model the whole latent space as a Gaussian mixture:

p(z_t) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}(z_t \mid \mu_k, \Sigma_k), \quad (12)

where t = 0, 1, 2, 3, \mu_k and \Sigma_k are the mean and covariance of each Gaussian component, and \pi_k is the weight of that component. We use this model to cluster the latent space and to determine how many Gaussian components best explain it. The GMM clustering proceeds as follows. For a specific latent code z_t at time point t, let the probability that z_t belongs to cluster k (k = 1, ..., K) be \gamma(s_{tk}) = p(s_{tk} = 1 \mid z_t). The log probability of z_t is

\log p(z_t) = \log \sum_{k=1}^{K} \pi_k\, \mathcal{N}(z_t \mid \mu_k, \Sigma_k).

We maximize this log probability by iteratively updating \gamma(s_{tk}), \pi_k, \mu_k and \Sigma_k; the EM algorithm is used to solve this clustering problem. For more information on the EM algorithm in our setting, see Appendix B.
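In practice the fit can be done with an off-the-shelf implementation; a minimal scikit-learn sketch, with randomly generated codes standing in for the real latent vectors, is:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

codes = np.random.randn(600, 64)        # e.g. 150 subjects x 4 groups of z_t

for k in range(2, 6):                   # candidate numbers of clusters
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(codes)
    print(k, silhouette_score(codes, gmm.predict(codes)))
```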
Gradient flow analysis on ODE
The ordinary differential equation gives the gradient magnitude and direction of z_t for any current value of z_t. We are interested in what the system looks like before and after training. Plotting the gradient flow of the first two latent variables of z_t for a randomly initialized ODE model shows several equilibria in the system; on the sub-diagonal, the gradient flow runs quickly toward negative values for both of the first two latent variables. The initialized system may therefore be unstable, and as prediction proceeds, the latent code may diverge quickly. We can compute the equilibria by setting Equation 2 to 0, obtaining the equilibrium set {z_t | dz_t/dt = 0}. Latent codes around an equilibrium are also analyzed, and the corresponding temporal correlation of the decoded estimated fMRI trajectory is computed as the Pearson product-moment correlation coefficient:
R(x, y) = \frac{\sum_i \big(I(x)_i - \bar{I}(x)\big)\big(I(y)_i - \bar{I}(y)\big)}{\sqrt{\sum_i \big(I(x)_i - \bar{I}(x)\big)^2 \sum_i \big(I(y)_i - \bar{I}(y)\big)^2}},

where I(x)_i is the pixel value at coordinate x at time point i and \bar{I}(x) is the mean pixel value at x over all time points. We fix x as the selected seed and vary y, which lets us compute the temporal correlation between every pixel in the image and the seed.
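A minimal NumPy sketch of this seed-based map over a decoded group of frames follows; the shapes match the text, and the random input stands in for decoded data.

```python
import numpy as np


def temporal_corr_map(frames, seed):
    """Correlate the time series of every pixel with the fixed seed pixel."""
    t, h, w = frames.shape
    pix = frames.reshape(t, -1) - frames.reshape(t, -1).mean(axis=0)
    seed_ts = pix[:, seed[0] * w + seed[1]]
    num = pix.T @ seed_ts                           # covariances with the seed
    den = np.linalg.norm(pix, axis=0) * np.linalg.norm(seed_ts) + 1e-8
    return (num / den).reshape(h, w)


corr = temporal_corr_map(np.random.rand(100, 192, 192), seed=(96, 96))
```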
Experiment
We have proposed a new approach to video prediction for resting state fMRI data. Our evaluation focuses on the quality of frame generation and the interpretability of the spatial-temporal latent code. Two other architectures were implemented for comparison; they are shown in Appendix A. There are three main downstream tasks: first, we use a GMM to explain the geometry of the spatial-temporal latent space; second, we examine how temporal correlation changes along the gradient flow given by the trained Neural ODE; third, we use the latent code to predict human traits of subjects in the HCP data.
Data preparation and model architecture
The data were collected from the HCP 3T resting state fMRI release. We detrended, band-pass filtered, and standardized the signal, then extracted the grey matter of the brain and applied the transformation of Kim et al. [2020] to convert the volumetric data into 2D fMRI data. For each volumetric fMRI frame, two 2D images were obtained, one for the left hemisphere and one for the right. Three hundred subjects were used to train the network, fifty subjects for validation, and one hundred fifty subjects for testing.
Our target is to extract a spatial-temporal latent code for the fMRI data. We use a spatial encoder for both the left- and right-hemisphere 2D data to extract a latent representation of the spatial information. We first add a channel dimension to the 2D pre-processed data and feed it to a Conv2D layer with output channels 32, kernel size 8, stride 2 and padding 0; we then concatenate the left- and right-hemisphere features along the channel dimension. The concatenated data passes through four Conv2D layers with output channel sizes 128, 128, 256 and 256, each with kernel size 4, stride 2 and padding 0. The output of the last layer is flattened to a 1D array and passed through one linear layer to produce a vector of length 256 — the spatial latent code for each data frame. We then concatenate the spatial latent codes of 100 consecutive frames into a matrix of size 100 × 256 and apply 2 further Conv2D layers with output channels 2 and 4, respectively, each with kernel size 3, stride 2 and padding 1. These two layers extract the spatial-temporal information from this latent representation matrix, which then passes through a linear layer to output z_0 of length 64. The ODE network follows Equation 2 with tanh as the nonlinear function; the torchdiffeq package is used for forward and backward propagation when training the Neural ODE.
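The following is a minimal PyTorch sketch of this encoder, assuming the layer hyperparameters above; the ReLU activations and the use of LazyLinear for the flattened projections are assumptions for illustration, as the text does not specify them.

```python
import torch
import torch.nn as nn


class SpatialEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Per-hemisphere stem: 1 -> 32 channels, kernel 8, stride 2, padding 0.
        self.stem = nn.Conv2d(1, 32, kernel_size=8, stride=2, padding=0)
        # After channel concatenation (32 + 32 = 64 channels).
        self.body = nn.Sequential(
            nn.Conv2d(64, 128, 4, 2, 0), nn.ReLU(),
            nn.Conv2d(128, 128, 4, 2, 0), nn.ReLU(),
            nn.Conv2d(128, 256, 4, 2, 0), nn.ReLU(),
            nn.Conv2d(256, 256, 4, 2, 0), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(256))        # spatial code of length 256

    def forward(self, left, right):
        return self.body(torch.cat([self.stem(left), self.stem(right)], dim=1))


class TemporalEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 2, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(2, 4, 3, 2, 1), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(64))         # z_0 of length 64

    def forward(self, codes):                        # codes: (batch, 100, 256)
        return self.net(codes.unsqueeze(1))          # add a channel dimension


spatial, temporal = SpatialEncoder(), TemporalEncoder()
left = right = torch.randn(100, 1, 192, 192)         # one group of 100 frames
codes = spatial(left, right).unsqueeze(0)            # (1, 100, 256)
z0 = temporal(codes)                                 # (1, 64)
```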
The decoder architecture is symmetric to the encoder. We first apply a linear transformation to the spatial-temporal latent code and feed it into two ConvTranspose2D layers with output channel sizes 2 and 1 and paddings 0 and 1, respectively; the kernel size is 3 and the stride remains 2. The resulting spatial latent information is then fed into four ConvTranspose2D layers with padding changed to 3. Lastly, the output is chunked into two parts along its channel dimension, each fed into a final ConvTranspose2D layer with padding 1; the other settings of these two layers are symmetric with the encoder.
Figure 3: Loss curves and spatial correlation during training for the three model architectures. ODE and RNN with AutoEncoder take 10^4 epochs to converge and attain 0.6 spatial correlation; the pure RNN converges very quickly and achieves 0.8 spatial correlation on training data within 500 epochs.
Training and spatial correlation
Training was run on a Tesla K80 GPU server. We used the Adam optimizer with learning rate 10^{-4}, \beta_1 = 0.9 and \beta_2 = 0.99, and focused our evaluation on the training loss and the spatial correlation of the 2D image data. As for the training strategy, we first trained the network to learn the spatial latent representation, which can be regarded as an AutoEncoder stage.
Then we trained the network to learn the spatial-temporal latent code and the ODE network, concentrating the training on the spatial-temporal latent code. It takes 10^4 epochs to train; the loss curve is plotted in the top panel of Figure 3a, and the lower panel depicts the spatial correlation during training. The final training correlation is close to 0.6; training is successful and maintains a high average spatial correlation. After training, we tested on 150 subjects to examine spatial correlation versus time point. The spatial correlation at each time point remains high, around 0.53, which indicates that the spatial-temporal latent representation is learned well and that an ODE can serve as the backbone establishing the temporal relationship between different latent codes.
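For concreteness, a minimal sketch of this optimizer setup and one training step is shown below, assuming PyTorch; the linear module is a stand-in placeholder for the full encoder-ODE-decoder network.

```python
import torch
import torch.nn as nn

model = nn.Linear(64, 64)          # placeholder for the full encoder-ODE-decoder
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.99))
loss_fn = nn.MSELoss()

x, target = torch.randn(8, 64), torch.randn(8, 64)
optimizer.zero_grad()
loss = loss_fn(model(x), target)   # MSE between decoded and ground-truth data
loss.backward()
optimizer.step()
```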
Comparison to other architectures
We tested our spatial-temporal representation learning on two other architectures by replacing the Neural ODE with an RNN, the aim being to test whether the latent code is learned successfully under a different temporal model. One architecture is shown in Appendix A, Figure 10a, with its training loss curve and training-time spatial correlation shown in Figure 3b. We obtained results similar to the Neural ODE, which suggests that the spatial-temporal latent representation is the key to successful video prediction on resting state fMRI data. We also tested an intuitive temporal model using a pure RNN to establish the temporal relationship between frames at different time points. In the pure RNN, following the architecture of Hochreiter and Schmidhuber [1997], we did not use a spatial-temporal latent code but only computed spatial latent codes and predicted future frames one by one; the trajectory is not grouped, and each frame is treated as a network output. The model architecture is listed in Appendix A, Figure 10b. The model converges in 500 epochs with a high training spatial correlation of about 0.8, as shown in Figure 3c. At test time, however, the spatial correlation is low and drops quickly over time. This result is expected, since neither an RNN nor an ODE can predict very long sequences with high quality; the data should be compressed not only spatially but also temporally, reducing the number of prediction steps in the RNN or ODE and improving prediction quality.
Forward and bidirectional prediction
We had data from 150 test subjects. The test consisted of three parts: forward prediction, bidirectional prediction and VAE resampling. A qualitative evaluation of forward prediction is shown in Figure 5. We took 100 concatenated transformed 2D fMRI frames as input; they were compressed into a spatial-temporal latent code and propagated forward to predict the three subsequent latent codes z_t following the given z_0, which we then decoded into 300 2D fMRI frames. Here we show only 5 example input images and 5 example output images. More quantitative evaluation is shown in Figure 4, where the x-axis is the output time point and the y-axis is the spatial correlation. We excluded the first and last few time points in the first two subfigures since they were abnormally lower than average, which is left for further study. Figure 4a shows the performance of our model with the ODE backbone: spatial correlation is high throughout the whole predicted trajectory. Figures 4b and 4c show the performance of the trained networks of Appendix A. The results reflect successful training, and the latent code contains useful information for further use.
(a) ODE model spatial correlation for test data.
(b) RNN model with AutoEncoder spatial correlation for test data.
(c) Pure RNN model spatial correlation for test data.
The second test is bidirectional prediction, trained following Equation 7. At test time, we are given the first 100 2D fMRI frames, and the model is asked to predict the whole trajectory following Equations 2 and 3. Figure 6 plots the input and the predicted output in image space: the first row shows the first few starting fMRI frames and the second row the last few frames of the same trajectory; the output interpolates the fMRI data between 'Start' and 'End'. Besides predicting the images and calculating the spatial correlation for evaluation, we introduced a Variational AutoEncoder in Equations 8, 9 and 10 to explain the variance of the latent code, and tested how the latent distribution influences the output. We used the same trajectory as input and sampled 100 times, selecting three output trajectories to compare their differences. The results are shown in Figure 7: they are essentially the same but differ slightly in certain regions, which are marked with red circles.
Three cluster centers best explain the GMM model
In section 3.3 we used a Gaussian mixture model to describe the distribution of the spatial-temporal latent code. To select a suitable number of clusters, we computed two criteria, the Silhouette score and the Jensen–Shannon score. The Silhouette score measures the consistency within clusters of data; it ranges from −1 to +1, where a high value indicates that the high-dimensional latent codes in each cluster are well assigned and the clustering configuration is appropriate. The Jensen–Shannon score measures the similarity between two probability distributions and is also known as the total divergence to the average; we select the cluster number that minimizes it. As shown in Figure 8b, when the cluster number is 3, the Silhouette score is high while the Jensen–Shannon score is low. In Figure 8a we plot the latent code distribution and the cluster ellipsoids in the first two dimensions of the high-dimensional latent code.
To further explore the properties of the three cluster centers, we first decoded each center into a group of images of 100 fMRI frames. We then computed the seed-based temporal correlation, plotted in Figure 8c: the seed location is shown as a red point in the first subplot, followed by three temporal correlation maps in image space. The first subplot is the 2D image representing the brain surface map extracted from the HCP volumetric data following Kim et al. [2020].
Figure 7: We model the latent representation with a Gaussian prior; different samples result in image sequences with slight differences. Red circles point out the differences introduced by the VAE model.
(a) Spatial-temporal latent representation distribution in the first two dimensions. Ellipsoids depict the mixture of three Gaussian distributions.
(b) Silhouette score and Jensen–Shannon score for selecting the cluster number. A high Silhouette score and a low Jensen–Shannon score are preferred.
(c) Seed-based temporal correlations for the three cluster centers shown in (a). The seed point is shown as a red point in the first subfigure.
Figure 8: A Gaussian mixture model is fitted in the latent space of 150 subjects. We select the cluster number for which the Silhouette score is high and the Jensen–Shannon score is low. Each cluster center is decoded into image space, corresponding to 100 image frames each, and temporal correlation is computed to reveal the common patterns.
Latent code changes along the gradient flow
In section 3.6 we described the analysis of the trained Neural ODE parameters. After training, we can compute the equilibria of the ordinary differential equation; an equilibrium is itself a latent code. We want to know how changes in the latent code lead to changes in the temporal correlation of the corresponding groups of images. Starting from an equilibrium, the latent code was moved along the gradient flow with a predefined step size, yielding 4 different latent codes along this flow. Each latent code was decoded into a group of images, and seed-based temporal correlation was calculated for these four groups separately, as depicted in Figure 9. The temporal correlation maps show decreasing correlation between other areas and the selected seed, which may help us understand how brain patterns change along the gradient-flow direction given by the trained ODE model.
Figure 9: We select an equilibrium point of the system and move the latent code along one gradient direction, then decode the latent representations into groups of images. Temporal correlation is calculated to reveal how image-space correlation values change with changes in latent space.
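A minimal sketch of this procedure, assuming NumPy and SciPy, is given below; the toy linear field stands in for the trained ODE right-hand side, and the step size and perturbation scale are hypothetical choices.

```python
import numpy as np
from scipy.optimize import fsolve

A = -0.5 * np.eye(64)                     # toy dynamics standing in for Eq. 2
f = lambda z: A @ z                       # dz/dt

z_eq = fsolve(f, x0=np.random.randn(64))  # equilibrium: solve dz/dt = 0
step = 0.1
z = z_eq + 0.05 * np.random.randn(64)     # perturb slightly off equilibrium
codes = []
for _ in range(4):                        # 4 latent codes along the flow
    z = z + step * f(z)                   # Euler step along the gradient flow
    codes.append(z.copy())                # each code is decoded to 100 frames
```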
Human traits analysis using latent space
The spatial-temporal latent code can further be used to estimate the human traits of each subject in the HCP data. We selected six human traits, covering both passive traits and active tasks, as targets for prediction; the value of each trait is available alongside the fMRI record. We added one linear layer that takes the latent code as input and outputs estimates of the different human traits.
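A minimal sketch of this trait head, assuming PyTorch; training it against the recorded trait values would use a standard regression loss.

```python
import torch
import torch.nn as nn

trait_head = nn.Linear(64, 6)      # 64-dim latent code -> six trait estimates
z = torch.randn(8, 64)             # latent codes for a batch of subjects
traits_hat = trait_head(z)         # (8, 6) predicted trait values
```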
Discussion
In this paper we proposed a new method of video frame prediction for resting state fMRI data and used the latent code for several downstream tasks. The key to its success is the compression of both spatial and temporal information. The difference between the Neural ODE and the RNN is that, during training, the Neural ODE performs recursive forward propagation — the output of one ODE block is the input of the next — whereas the RNN is trained for one-step-ahead prediction and only generates predictions recursively at test time. Either backbone works well with our spatial-temporal latent code and they differ little in performance; however, the ODE can readily be used to interpret how the latent code at one time stamp is transferred to the next by analyzing the differential equation, an interpretability that the RNN lacks.
Besides prediction quality, the geometric explanation of the latent space and the use of the trained ODE parameters are the two main concerns of this paper. Kim et al. [2020] explained the geometry of the spatial representation of resting state fMRI as a high-dimensional sphere that differs across subjects. In forward prediction under our temporal dynamic model, however, the latent space need not be Gaussian: it is influenced both by the starting point of the forward prediction and by the trained ODE parameters. In experiments we analyzed the distribution of the latent codes produced by ODE prediction; visualization of the first two latent variables shows that it is not Gaussian but can be explained by a Gaussian mixture. According to the Silhouette and Jensen–Shannon scores, a mixture of three Gaussians explains the resulting latent code distribution well. The latent code is transported from a zero-mean Gaussian to Gaussians with different means while the variance changes little, a consequence of combining the Gaussian prior assumption on z_0 with ODE propagation of z_t.
We tested the trained model on forward prediction, bidirectional prediction and sampling from the VAE latent space for qualitative evaluation. The predictions look good, and the VAE explains the variance of the latent code in image space, as shown in Figure 7; the regions within the red circles may reveal inter-subject differences across the population.
Another potential use of this model for fMRI data is interpolation over smaller groups of fMRI images. fMRI measurements always include random noise and physiological noise. Power et al. [2014] proposed a censoring method to eliminate significant head motion in fMRI data, and Laumann et al. [2017] used censoring to study time-varying temporal correlation, though that method is criticized because it destroys temporal relationships. The ODE, by contrast, can interpolate the censored data if we know which time points are censored. In the work of Khazaee et al. [2017] and Du et al. [2018], the authors used fMRI data for diagnosis; our spatial-temporal latent code could likewise be used for disease diagnosis and for human trait prediction as shown in section 4.7.
Overall, we treat temporal dynamic modeling of fMRI data as a video prediction problem. The model accurately predicts image frames given a group of images, and the analyses in latent space and image space may shed light on the study of the temporal dynamics of resting state fMRI data.
Appendix A Network architecture of RNN w/ and w/o AE
(a) The ODE is replaced by an RNN to predict the spatial-temporal latent representation; the output of one RNN step is the input of the next, and the latent code is decoded to recover the data in image space.
(b) Traditional RNN architecture predicting the spatial latent code. The output of the RNN represents only one time point rather than a group of images; at test time, predictions are generated recursively.
Figure 10: Two other architectures for forward prediction of fMRI data. The key to high-quality prediction is compressing the data not only spatially but also temporally; the architecture on the right predicts one step forward very accurately, but its performance drops quickly.
Appendix B EM algorithm in GMM
We initialize the means \mu_k, covariances \Sigma_k, and mixing coefficients \pi_k for the K Gaussians in Equation 12.

E step: \gamma(s_{tk}) = \frac{\pi_k\, \mathcal{N}(x_t \mid \mu_k, \Sigma_k)}{\sum_{j=1}^{K} \pi_j\, \mathcal{N}(x_t \mid \mu_j, \Sigma_j)}

M step: \mu_k = \frac{\sum_t \gamma(s_{tk})\, x_t}{\sum_t \gamma(s_{tk})}, \quad \Sigma_k = \frac{\sum_t \gamma(s_{tk}) (x_t - \mu_k)(x_t - \mu_k)^\top}{\sum_t \gamma(s_{tk})}, \quad \pi_k = \frac{1}{N} \sum_t \gamma(s_{tk})

After several EM steps, we can compute the mean and covariance of each cluster. We decode the cluster centers to analyze the common brain patterns embedded in the spatial-temporal latent code.
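These are the standard GMM updates; a minimal NumPy/SciPy sketch of one EM step, with random codes and a reduced dimension standing in for the real latent vectors, is:

```python
import numpy as np
from scipy.stats import multivariate_normal


def em_step(z, pi, mu, sigma):
    K = len(pi)
    # E step: responsibilities gamma(s_tk).
    dens = np.stack([pi[k] * multivariate_normal.pdf(z, mu[k], sigma[k])
                     for k in range(K)], axis=1)          # (N, K)
    gamma = dens / dens.sum(axis=1, keepdims=True)
    # M step: re-estimate mixing weights, means and covariances.
    Nk = gamma.sum(axis=0)
    pi = Nk / len(z)
    mu = (gamma.T @ z) / Nk[:, None]
    sigma = np.stack([(gamma[:, k, None] * (z - mu[k])).T @ (z - mu[k]) / Nk[k]
                      for k in range(K)])
    return pi, mu, sigma, gamma


z = np.random.randn(600, 4)                # latent codes (reduced dim for demo)
pi, mu = np.ones(3) / 3, np.random.randn(3, 4)
sigma = np.stack([np.eye(4)] * 3)
for _ in range(20):
    pi, mu, sigma, gamma = em_step(z, pi, mu, sigma)
```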
"year": 2020,
"sha1": "4b8ddbdb2568c2626e169a85bb757009adcfabd5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "4b8ddbdb2568c2626e169a85bb757009adcfabd5",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
Baicalin ameliorates oxidative stress and apoptosis by restoring mitochondrial dynamics in the spleen of chickens via the opposite modulation of NF-κB and Nrf2/HO-1 signaling pathway during Mycoplasma gallisepticum infection
Mycoplasma gallisepticum (MG) infection produces a profound inflammatory response in the respiratory tract and evades the bird's immune recognition to establish a chronic infection. Previous reports documented that the flavonoid baicalin possesses potent anti-inflammatory and antioxidant activities. However, whether baicalin prevents immune dysfunction is largely unknown. In the present study, the preventive effects of baicalin on oxidative stress generation and apoptosis were determined in the spleen of chickens infected with MG. Histopathological examination showed abnormal morphological changes in the model group, including cell hyperplasia and lymphocyte depletion, and the red and white pulp of the spleen were not clearly visible. Oxidative stress-related parameters were significantly (P < 0.05) increased in the model group. However, baicalin treatment significantly (P < 0.05) ameliorated oxidative stress and partially alleviated the abnormal morphological changes in the chicken spleen compared to the model group. Terminal deoxynucleotidyl transferase-mediated dUTP nick end-labeling assay results and the mRNA and protein expression levels of mitochondrial apoptosis-related genes showed that baicalin significantly attenuated apoptosis. Moreover, baicalin restored the mRNA expression of mitochondrial dynamics-related genes and maintained the balance between the mitochondrial inner and outer membranes. Intriguingly, the protective effects of baicalin were associated with upregulation of the nuclear factor erythroid 2-related factor 2 (Nrf2)/heme oxygenase-1 (HO-1) pathway and suppression of the nuclear factor-kappa B (NF-κB) pathway in the chicken spleen. In summary, these findings indicate that baicalin ameliorated the mitochondrial dynamics imbalance and effectively prevented oxidative stress and apoptosis in the splenocytes of chickens infected with MG.
INTRODUCTION
Mycoplasma gallisepticum (MG) causes a severe inflammatory response in the bird's respiratory tract, resulting in sneezing, nasal discharge and coughing. The disease is known as chronic respiratory disease in chickens and infectious sinusitis in turkeys, and it causes great economic losses in the poultry industry (Gaunson et al., 2000; Jacob et al., 2014; Roussan et al., 2015). Unlike other bacteria, MG establishes a firm attachment to host cells via cytadhesion and colonizes them, which is essential for its progression and development (Chen et al., 2001; Purswell et al., 2012). MG infects a variety of non-phagocytic cells such as fibroblasts, HeLa cells and chicken red blood cells (RBCs). Additionally, this pathogen can invade, multiply and survive in extrapulmonary tissues such as the heart, blood, liver, brain and spleen (Majumder et al., 2014). The immune system protects the body from invading pathogens and damage, and the spleen is an immune organ that plays a key role in innate immunity. Previous studies reported the effects of MG infection on the lymphoid organs of broilers, such as the thymus, spleen and bursa of Fabricius, and noted lymphocyte depletion in these organs (Lockaby et al., 1998; Manafi et al., 2015). This means that MG infection modulates the immune response and the development of immune organs.
Accumulating evidence shows that excessive reactive oxygen species (ROS) generation and/or oxidative stress impairs immune responses in the host and induces apoptosis in immune organs (Gostner et al., 2015; Hu et al., 2018). However, the effects of MG infection-mediated oxidative stress and apoptosis in the spleen are still unknown.
Mitochondria are pivotal organelles for cell survival, regulating apoptosis, providing energy and buffering calcium. Excessive ROS generation, alterations in the mitochondrial respiratory chain, and imbalances in mitochondrial dynamics-related proteins cause mitochondrial dysfunction (Halliwell, 1992; Praharaj et al., 2018), which plays a critical role in the pathogenesis of different diseases. Studies have demonstrated that increased expression of dynamin-related protein 1 (Drp1) causes mitochondrial fragmentation (Xu et al., 2013). Besides Drp1, mitochondrial fission factor (Mff) induces mitochondrial fission, while optic atrophy 1 (Opa1), mitofusin 1 (Mfn1) and mitofusin 2 (Mfn2) are responsible for mitochondrial membrane fusion; imbalance among these proteins, in turn, facilitates apoptosis (Alaimo et al., 2013). Cytochrome C is released upon mitochondrial damage, followed by activation of the executioners of apoptosis, caspase 9 and caspase 3, to initiate apoptosis (Caroppi et al., 2009). In addition, BCL2-associated X (Bax) and B-cell lymphoma-2 (Bcl2) proteins are also involved in mitochondria-dependent apoptosis; Bcl2 inhibits apoptosis by preventing the release of cytochrome C from mitochondria (Yip and Reed, 2008). It would therefore be of great interest to investigate whether MG infection induces apoptosis in the spleen of chickens, and by what mechanism, as this will help in exploring new drug targets for the prevention of MG infection.
Recently, plant-derived natural flavonoid compounds have become more and more popular owing to their excellent pharmacological properties. Researchers have reported that flavonoids possess anti-inflammatory, antitumor, anti-hepatotoxic, antioxidant, antimicrobial, anti-allergic and analgesic properties (Tian et al., 2019). One such flavonoid is baicalin, extracted from the root of Scutellariae radix, which has been proved to possess pharmacological effects against a variety of ailments including infection, inflammation, oxidative damage and immune dysregulation (Hsu et al., 2016). The chemotherapeutic properties of baicalin may be attributed to its ability to modulate various transcription factors involved in diverse signaling pathways (Gong et al., 2017). Previous studies showed that baicalin suppresses the transcription factor nuclear factor-kappa B (NF-κB), a critical regulator of inflammation (Cheng et al., 2017). Importantly, NF-κB overexpression exacerbates the inflammatory reaction through excessive production of pro-inflammatory mediators, leading to immune dysregulation (Byun et al., 2013). Therefore, suppression of the NF-κB pathway could prevent immune impairment in the spleen of chickens during MG infection. In addition, nuclear factor erythroid 2-related factor 2 (Nrf2) is a critical regulator of cellular redox balance that maintains homeostasis inside cells (Itoh et al., 1999). Under normal conditions, Nrf2 resides in the cytoplasm bound to Kelch-like ECH-associated protein 1 (Keap1) (Zipper and Mulcahy, 2002). Increased oxidative stress or exposure to electrophilic agents reacts with Keap1 and causes nuclear translocation of Nrf2, which binds to the antioxidant response element (ARE) in the nucleus, leading to transcriptional activation of several antioxidant and detoxifying genes (Kensler et al., 2007). Numerous studies have reported that baicalin exerts beneficial effects in part through activation of the Nrf2/HO-1 signaling pathway (Zhang et al., 2012). However, the effect of baicalin on the NF-κB and Nrf2/HO-1 signaling pathways in the chicken spleen is still unknown. Thus, inhibition of the NF-κB pathway and activation of the Nrf2/HO-1 signaling pathway could be a novel pharmacological approach for the prevention of MG-induced oxidative stress and apoptosis. The present study aimed to investigate the preventive effects of baicalin against oxidative stress and apoptosis induced by MG infection in the chicken spleen.
Ethical Statement
All the experimental procedures and guidelines were approved by the Institutional Animal Care and Use Committee of Northeast Agricultural University (SYXK (Hei) 2012-2067) in the present study.
Strain R low of MG and Culture Conditions
Strain R low of MG, provided by the Harbin Veterinary Research Institute (Chinese Academy of Agricultural Sciences, Heilongjiang, China), was used in these experiments. The culture conditions for growing MG were the same as in our previous study: briefly, modified Hayflick's medium containing 0.05% penicillin, 0.1% nicotinamide adenine dinucleotide (NAD), 10% freshly prepared yeast extract, 20% fetal bovine serum and 0.05% thallium acetate. At the mid-exponential phase of MG growth, a color change from phenol red to orange was observed. Chickens were challenged with the pathogen at a density of 1 × 10^9 color change units per milliliter (CCU/mL).
Chickens and Treatments
One hundred and twenty one-day-old white leghorn chickens were bought from the Chia Chau chicken farm in Harbin (China) and reared for one week to adapt to the experimental conditions prior to the experiments. Feed and fresh drinking water were provided ad libitum. After 1 wk, chickens were divided into 4 experimental groups in 3 replicates, with 10 chickens randomly assigned to each group: (A) control group (normal chickens); (B) model group (MG-infected); (C) baicalin-alone group (450 mg/kg); and (D) model group treated with baicalin (450 mg/kg). Chickens were challenged with MG strain R low (1 × 10^9 CCU/mL) in the bilateral thoracic air sacs as reported previously (Xiao et al., 2014). Three days post-challenge, baicalin solution (0.5 mL) was given orally (Cheng et al., 2017) once a day at a dose of 450 mg/kg. After 7 d of baicalin treatment, chickens were humanely sacrificed to avoid pain and suffering, and spleens were collected for further experimental analyses.
Histopathological Examination of Chicken Spleen
For histopathological examination, spleen samples were fixed in 10% buffered formalin for 12 h. Following dehydration in graded ethanol, samples were embedded in paraffin wax and cut into 5-µm sections. The sections were mounted on slides, stained with hematoxylin and eosin, and observed under a light microscope (Nikon E100, Japan, 40× magnification).
Terminal Deoxynucleotidyl Transferase-Mediated dUTP Nick End-labeling Assay
The terminal deoxynucleotidyl transferase-mediated dUTP nick end-labeling (TUNEL) assay was employed to detect apoptotic cells in the chicken spleen. The specimens were first fixed in formalin, dehydrated in graded ethanol and embedded in paraffin wax. The sections were mounted on glass slides, and apoptotic cells were detected with an apoptosis cell detection kit (Beyotime Biotechnology, Jiangsu, China) according to the manufacturer's instructions. Hydrogen peroxide was used to inhibit endogenous peroxidase activity following treatment with proteinase K. The slides were incubated at 37°C for 1 h with the terminal TdT/nucleotide mixture and rinsed in phosphate buffer solution (PBS). After nuclear labelling was developed with diaminobenzidine and horseradish peroxidase, the slides were counterstained with hematoxylin and examined under a fluorescence microscope.
Determination of Cytokine Activities
The splenic tissues were first washed in normal saline to remove excess debris and homogenized in normal saline solution. The homogenate was then centrifuged at 1000 × g for 10 min in PBS, and the supernatant was collected in new Eppendorf tubes. Cytokine activities were measured by enzyme-linked immunosorbent assay (ELISA) according to the manufacturer's instructions. The samples were loaded in duplicate, along with a blank control sample, in a 96-well plate and read on an iMark microplate reader (Bio-Rad Co., Ltd., Shanghai, China).
Protein Extraction and Western Blotting
Protein was extracted from spleen samples using radioimmunoprecipitation assay (RIPA) buffer with the protease inhibitor phenylmethylsulfonyl fluoride (PMSF), as described earlier. Equal amounts of protein were separated by SDS-PAGE (10 to 15%) and transferred to nitrocellulose membranes. Membranes were blocked with 5% non-fat dry milk in TBST for 1 h and incubated with specific primary antibodies overnight. The membranes were then washed 3 times with TBST, 10 min each time, and incubated with secondary anti-mouse or anti-rabbit IgG peroxidases for 1 h at room temperature. Bound immune complexes were visualized with enhanced chemiluminescence (ECL, Biosharp Life Sciences, China) reagent. ImageJ software (National Institutes of Health, Bethesda, Maryland) was used to analyze the blots.
Data Analysis
All data analyses were performed with the Statistical Package for the Social Sciences (SPSS, Windows version 21.0, Chicago, Illinois), and one-way analysis of variance followed by the LSD test was applied to determine statistical significance at P < 0.05. The experiments were performed in triplicate (n = 3) and the data are expressed as mean ± SD. All graphs were made with GraphPad Prism (Windows version 6.01, San Diego, California).
Baicalin Alleviated Oxidative Stress in Spleen Tissues
Oxidative stress-related parameters (Figure 1) were measured in the spleens of chickens infected with MG. SOD, GSH-Px and CAT activities were significantly (P < 0.05) reduced, while iNOS and G-GT activities were significantly (P < 0.05) enhanced, in the model group compared to the control and baicalin-alone groups (Figure 1). Similarly, NO and MDA content were significantly (P < 0.05) increased in the model group. However, compared to the model group, baicalin treatment significantly restored the normal levels of these enzymes and alleviated oxidative stress in the chicken spleen. It is worth noting that baicalin alone had no significant effect on these enzymes compared to the control group.
Histopathological Assessment of Spleen Tissues
Histopathological observation (Figure 2) showed morphological changes in spleen tissue sections from the model group compared to the control and baicalin-alone groups. Abnormal morphology, including lymphocyte reduction and cell hyperplasia, was observed, and the red and white pulp were not clearly visible in the model group (Figure 2B). Spleen tissue micrographs from the control (Figure 2A) and baicalin-alone groups (Figure 2C) showed normal morphology and appearance. The observed abnormal morphological signs and structural deterioration partially disappeared with baicalin treatment in the spleens of chickens infected with MG (Figure 2D).
Suppression of Proinflammatory Cytokines and NF-κB by Baicalin
The mRNA and protein expression levels of proinflammatory cytokines and NF-κB are shown in Figure 3 (A, B). NF-κB, tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6) and IL-1β mRNA expression (Figure 3B) were significantly (P < 0.05) enhanced in the model group compared to the control and baicalin-alone groups. Protein expression results (Figure 3A) for NF-κB, TNF-α and IL-6 showed the same trend as the mRNA, being significantly (P < 0.05) increased in the model group compared to the control and baicalin-alone groups. Meanwhile, the mRNA and protein expression levels of NF-κB and the proinflammatory cytokines were reduced (P < 0.05) by baicalin treatment in comparison with the model group. Proinflammatory cytokine activities (Figure 3C) were increased in the model group compared to the control and baicalin-alone groups, and baicalin treatment significantly (P < 0.05) alleviated the increased expression of these cytokines in the spleen.
Baicalin Attenuated Apoptosis in the Spleen
The level of apoptosis (Figures 4 and 5) was measured in the chicken spleen to evaluate the preventive effects of baicalin against MG-induced immune impairment. The mRNA and protein expression levels (Figure 4) of mitochondria-related apoptosis genes showed significant (P < 0.05) upregulation in the model group compared to the control and baicalin-alone groups, with the exception of Bcl2, whose level significantly decreased in the model group at both the mRNA and protein level. (In the figures, statistical significance is represented as *P < 0.05 vs. control group, **P < 0.05 vs. model group; all bar graphs show mean ± SD, n = 3.) More importantly, a significant (P < 0.05) decrease was noted in the mRNA and protein expression levels of these genes with baicalin treatment compared to the model group, and the mRNA and protein levels of the anti-apoptotic gene Bcl2 were significantly (P < 0.05) enhanced by baicalin treatment in comparison with the model group. In addition, TUNEL results (Figure 5) showed extensive positively stained nuclei in the model group compared to the control and baicalin-alone groups, while baicalin treatment significantly reduced positively stained nuclei in the spleens of chickens infected with MG. Figure 6 represents the mRNA expression levels of mitochondrial dynamics genes in the four experimental groups. MG infection significantly altered the mRNA expression levels of these genes in comparison with the control and baicalin-alone groups: the expression of Drp1 and Mff was significantly enhanced in the model group, while the mRNA expression of Mfn1, Mfn2 and Opa1 was significantly (P < 0.05) decreased. Meanwhile, baicalin treatment significantly prevented the altered mRNA expression of these genes.
Effect of Baicalin and MG Infection on the Expression of Nrf2/HO-1 Signaling Pathway
The mRNA and protein expression levels of Nrf2 and its downstream genes are represented in Figure 7. Interestingly, Nrf2 mRNA and protein expression increased (P > 0.05) in the model group compared to the control group. Similarly, NAD(P)H:quinone oxidoreductase 1 (NQO1), HO-1 and glutathione S-transferase A2 (GSTA2) mRNA also increased in the model group compared to the control group, although these increases were not statistically significant. Meanwhile, baicalin treatment significantly (P < 0.05) enhanced the mRNA and protein expression levels of Nrf2 and its downstream genes compared to the control and model groups.
Principal Component Analysis
Principal component analysis was carried out to determine the key factors involved in individual variation, defining the most important parameters while compressing the data. Using principal component analysis (Figure 8), the data were divided into two principal components, accounting for 78.201% and 98.105% (cumulative) of the variance, respectively, covering all parameters except Bcl2. Mfn1, Mfn2, Opa1, Nrf2, NQO1 and HO-1 are positively correlated with principal component 1, but only Casp-8, Mfn2, Opa1, IL-6, Nrf2, NQO1, HO-1 and GSTA2 are positively correlated with principal component 2. The rotating component matrix obtained through principal component analysis is shown in Table 2.
DISCUSSION
Baicalin has been used as a traditional medicine in East Asia for several decades (Ishimaru et al., 1995). A wide variety of pharmacological properties of baicalin, such as anti-cancer, anti-pruritic and anti-inflammatory effects, have been reported in the literature (Lin and Shieh, 1996). Nowadays, natural products are extensively used against bacterial diseases, with the benefit of avoiding antibiotic resistance and harmful side effects (Jang et al., 2014). Baicalin is one natural flavonoid that has shown potential therapeutic effects against bacterial diseases (Fujita et al., 2005), and it shows synergistic effects with a variety of other antibiotics (Cai et al., 2016). A previous study demonstrated that baicalin protects against mycoplasma pneumonia infection (Meng et al., 2013), and Garmyn et al. (2017) reported the efficacy of tiamulin alone or in combination with chlortetracycline against MG infection in chickens. However, the immunomodulatory effects of baicalin against MG infection in the chicken spleen have not been reported. In the present study, we investigated the preventive effects of baicalin against MG infection-induced immune impairment involving oxidative stress and apoptosis in the chicken spleen. Histological and ultrastructural observation showed that baicalin treatment partially ameliorated the pathological changes in the spleen. Intriguingly, MG infection produced oxidative stress, which is the likely cause of these abnormal pathological changes and structural deterioration in the spleens of chickens. Oxidative stress-related enzyme activities were correspondingly altered in the spleen tissues, confirming increased oxidative stress in chickens infected with MG. More importantly, baicalin treatment significantly alleviated oxidative stress compared to the model group, consistent with previous studies showing that baicalin protects against oxidative stress (Wu et al., 2018). In addition, we noticed a profound level of apoptosis in the spleens of the model group compared to the control and baicalin-alone groups: TUNEL assay results showed numerous positively stained nuclei in the model group, and the mRNA and protein expression levels of apoptosis-related genes were significantly changed. These results suggest that baicalin significantly attenuated apoptosis and could prevent immune impairment in the spleens of chickens infected with MG. Previous reports demonstrated that mitochondrial apoptosis is also associated with mitochondrial dynamics genes (Dabrowska et al., 2015). In the present study, we noted an increase in the mRNA expression of the Drp1 and Mff genes, responsible for mitochondrial fragmentation and fission, in the model group, while the mRNA expression of the mitochondrial membrane fusion-related genes Opa1, Mfn1 and Mfn2 was significantly downregulated. These findings agree with earlier studies (Halliwell, 1992; Alaimo et al., 2013; Xu et al., 2013; Jin et al., 2017; Praharaj et al., 2018) demonstrating that alteration in mitochondrial dynamics results in mitochondrial dysfunction leading to apoptosis. In addition, our data showed that baicalin treatment significantly restored the expression of mitochondrial dynamics genes and prevented mitochondrial dysfunction during MG infection.
However, further studies are needed to investigate the effect of baicalin on the mitochondrial respiratory chain complexes associated with energy metabolism in the spleen of chickens.
Previous studies demonstrated that upregulation of the Nrf2 pathway protects the body from oxidative stress and injury, and Nrf2 is a key therapeutic drug target in oxidative stress and various other diseases (Zhang and Gordon, 2004). Antioxidant enzymes including GST, HO-1 and NQO1 protect cells from oxidative stress (Banerjee et al., 1999), and the upregulation of these antioxidant enzymes by chemical or natural products has recently become a common strategy to protect cells in disease as well as cancer (Hwang et al., 2011). In addition, iNOS plays a crucial role in the inflammatory response, often induced by oxidative stress via the NF-κB pathway (Chung et al., 2007); therefore, molecular inhibition of iNOS through modulation of NF-κB is considered a key target in the spleen of chickens during MG infection. Our results showed that baicalin significantly inhibited the NF-κB pathway, and it could be speculated that baicalin thereby alleviated the oxidative stress-mediated alteration in cytokine expression. In addition, baicalin significantly upregulated the transcription factor Nrf2 and its downstream genes GST, HO-1 and NQO1 at both the mRNA and protein level. The data provide evidence that the cytoprotective action of baicalin is partially attributable to its ability to induce cytoprotective genes during MG infection, and that Nrf2 promotes the defense system against MG infection-mediated immune impairment in the chicken spleen. These results are in line with previous findings (Kim et al., 2010; Shen et al., 2017) that baicalin inhibits NF-κB and upregulates the Nrf2 signaling pathway to attenuate oxidative stress and confer cytoprotection. Further molecular and mechanistic studies are needed to elucidate the mechanisms by which baicalin regulates the NF-κB and Nrf2 signaling pathways. In conclusion, baicalin efficiently prevented oxidative stress and apoptosis in the chicken spleen during MG infection (see the schematic diagram in Figure 9). In addition, baicalin promoted the balance of mitochondrial dynamics and protected against mitochondrial dysfunction. Overall, these findings suggest that baicalin protects against immune dysfunction through activation of the Nrf2/HO-1 signaling pathway and suppression of NF-κB and proinflammatory cytokines in the spleen of chickens. Nevertheless, further studies are required to investigate the crosstalk between these pathways to better understand the preventive mechanism of baicalin against various infections.
"year": 2019,
"sha1": "4c5dd9bf85f67e922c33138a2d751f2e928d2bd9",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.3382/ps/pez406",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "e2ec31e31b0bf8d3801704d9be55e95e56d8989f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Efficacy and safety profile of statins in patients with cancer: a systematic review of randomised controlled trials
Purpose A growing body of preclinical and observational research suggests that statins have potential as a therapeutic strategy in patients with cancer. This systematic review of randomised controlled trials (RCTs) in patients with solid tumours aimed to determine the efficacy of statin therapy on mortality outcomes, the safety profile of statins, and the risk of bias of included studies. Methods Full-text articles comparing statin therapy versus control in solid tumours and reporting mortality outcomes were identified from Medline and Embase from conception to February 2020. A systematic review with qualitative (primarily) and quantitative synthesis was conducted. This systematic review was prospectively registered (Prospero registration CRD42018116364). Results Eleven trials of 2165 patients were included. Primary tumour sites investigated included lung, colorectal, gastro-oesophageal, pancreatic and liver. Most trials recruited patients with advanced malignancy and used sub-maximal statin doses for relatively short durations. Aside from one trial which demonstrated benefit with allocation to pravastatin 40 mg in hepatocellular carcinoma, the remaining ten trials did not demonstrate efficacy with statins. The pooled hazard ratio for all-cause mortality with allocation to pravastatin in patients with hepatocellular carcinoma in two trials was 0.69 (95% confidence interval [CI] 0.30–1.61). Study estimates were imprecise. There were no clinically important differences in statin-related adverse events between groups. Overall, included trials were deemed at low risk of bias. Conclusion The trial evidence is not sufficiently robust to confirm or refute the efficacy and safety of statins in patients with solid malignant tumours. Study and patient characteristics may explain this uncertainty. The potential role of high-dose statins in adjuvant settings deserves further research. Electronic supplementary material The online version of this article (10.1007/s00228-020-02967-0) contains supplementary material, which is available to authorized users.
Background
3-Hydroxy-3-methylglutaryl-CoA (HMG-CoA) reductase inhibitors, better known as statins, are a class of lipid-lowering agents that are highly effective and widely used in clinical practice for the primary and secondary prevention of cardiovascular disease [1]. Statins inhibit the rate-limiting step of the mevalonate pathway, a ubiquitous metabolic cascade that plays an essential role in the synthesis of downstream sterol (e.g. cholesterol) and non-sterol isoprenoids [2]. There is growing evidence that a number of these biologically active intermediates exert functions with direct relevance to cancer biology, with roles in proliferative signalling, cell-cycle regulation, angiogenesis and metastasis [3]. Interest in the potential of statins to prevent and treat cancer has grown over the last three decades.
In vitro studies have demonstrated that statins inhibit proliferation, induce apoptosis and limit invasiveness in numerous malignancies, and have demonstrated the functional relevance of mevalonate pathway intermediates to these observations [4–6]. Mutant TP53, the most frequently mutated gene in cancer [7, 8] and consistently associated with poor prognosis [9], has been shown to upregulate transcription of mevalonate pathway products to sustain malignant proliferation [10], a pathway potently inhibited by statins. Furthermore, statins have been shown to selectively destabilise mutant TP53 protein [11]. Preclinical in vivo studies have demonstrated that statins effectively inhibit growth of established tumours with no noticeable effect on normal tissues [11, 12]. These preclinical observations underscore the potential of statins as a viable therapeutic strategy in human malignancy.
The most recent systematic review of observational research included 95 cohorts with over 1.1 million cancer patients and demonstrated that post-diagnostic statin use was associated with a significant reduction in all-cause mortality (HR 0.70, 95% CI 0.66–0.74, pooled from 55 studies), with broadly similar effect sizes for progression-free survival, cancer-specific mortality and disease-free survival [13]. However, to varying degrees, studies were potentially susceptible to selection bias, immortal-time bias and confounding. Nevertheless, compared with studies with a higher risk of bias (≤ 8 points on a 6-item scale [14]), effect sizes of those with a lower risk of bias (> 8 points) were attenuated but remained statistically significant. While preclinical and epidemiological evidence is encouraging, causality remains to be established. To determine whether statins are an effective therapeutic option for specific cancers, evidence from well-designed, sufficiently powered, randomised controlled trials (RCTs) is required.
A series of trials have assessed the efficacy and safety of statins in patients with solid tumours; however, there remains considerable uncertainty, and the justification for further trials has been questioned [15]. The conduct of future trials should be reliably informed by critical appraisal of existing randomised studies in patients with cancer. Therefore, we undertook a systematic review of statins in patients with any malignancy to assess the current state of evidence from RCTs. Specifically, in patients with solid tumours, we aimed to determine (i) the efficacy of statin therapy on mortality outcomes, (ii) the safety profile of statins, and (iii) the risk of bias in RCTs of statin therapy.
Methods
This systematic review was registered (CRD42018116364) on the PROSPERO database and conducted in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [16].
Search strategy
We sought relevant published articles by searching MEDLINE (1948 onwards) and Embase (1980 onwards) (Supplementary Table 1) using the OVID interface, together with manual searches of the reference lists of any systematic reviews identified in the previous step. We used the following search terms in each database: hydroxymethylglutaryl-CoA reductase inhibitors, statin, cancer, carcinoma, neoplasms, malignancy and randomised controlled trial. The literature search was limited to the English language and human subjects. Searches were completed in February 2020.
Eligibility criteria
Only RCTs satisfying the following eligibility criteria were included in the systematic review: (i) statin therapy was the intervention, either given alone or in combination with a cointervention across trial arms; (ii) at least one trial group received placebo, no statin or standard care alone; (iii) participants were diagnosed with a malignant solid tumour prior to enrolment; and (iv) overall survival (OS), progression-free survival (PFS) or response rate (RR) were reported outcomes. No restrictions were placed on the statin administered, posology, frequency or duration of administration. No restrictions were placed on length of follow-up. Two reviewers (JPT and LA) independently screened abstracts and selected fulltext articles for inclusion based on the above criteria. Discrepancies were resolved through discussion among two or more reviewers.
Data extraction and quality assessment
Two reviewers (JPT and LA) independently extracted data from each selected article for study characteristics (location, setting, number of randomised patients, recruitment period, primary cancer site, intervention, duration of statin therapy, concomitant therapy and reported outcome measures); patient characteristics at enrolment (number of patients allocated to active and control groups, age, gender, cancer stage and Eastern Cooperative Oncology Group [ECOG] performance status); study outcomes (reported median overall and progression-free survival in allocated groups with corresponding hazard ratios and confidence intervals, and reported response rates (%) in each group); and toxicity profile. For continuous participant characteristics and outcomes, we extracted means (with corresponding standard deviations) and medians (with corresponding ranges) as appropriate in each arm. To assist comparison of statin type and posology between studies, the defined daily dose (DDD) for each trial was calculated [17]. The DDD is a standardised measure of drug exposure relative to the assumed average maintenance dose per day for a drug used for its main indication in adults, as defined by the World Health Organization; for example, a single dose of simvastatin 30 mg or atorvastatin 20 mg is equivalent to 1 DDD. Two reviewers (JPT and LA) used the Cochrane risk of bias tool to assess the internal validity of each eligible study across seven items: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessment, incomplete outcome data, selective reporting and other sources of bias [18]. Given that the outcomes of interest were objective (e.g. all-cause mortality), open-label study designs, where applicable, were deemed to pose minimal risk of bias for the domains of "blinding of participants and personnel" and "blinding of outcome assessment". Discrepancies were resolved through consensus discussion between reviewers. We contacted authors for additional information where required.
Study outcomes
The primary outcome was overall survival (OS), defined as the time from randomisation to death from any cause [19]. Secondary outcomes were (i) progression-free survival (PFS), defined as the time from randomisation to first observed cancer progression or death; (ii) response rate (RR), defined as the proportion of patients with tumour size reduction of a predefined amount and for a minimum time period [19]; and (iii) toxicity (proportions of grade 3-5 and, separately, statin-related adverse events in each group).
Statistical analysis
From the outset, we decided it would be inappropriate to conduct a quantitative meta-analysis comprising trials with different primary cancers, as any resultant summary effect size estimate for mortality outcomes would be difficult to interpret. This is because each distinct cancer has disparate biology, behaviour, prognosis, treatments and responsiveness to therapy. Furthermore, while the mevalonate pathway is ubiquitous to all eukaryotes and will be functional in malignancy, there is insufficient evidence at present to suggest a universally consistent role in affecting cancer prognosis. As a result, we primarily undertook a qualitative assessment of included trials to critically review the study characteristics, participant characteristics, mortality and safety outcomes of eligible studies. We performed a quantitative meta-analysis, where possible, of any trials in patients with the same primary cancer.
Summary study characteristics were calculated and weighted by sample size for gender, cancer stage and ECOG performance status. Where p values were not provided in original study reports for comparisons between intervention and control arms for overall response rate, we calculated these from extracted categorical data using the chi-squared test or Fisher's exact test, as appropriate. Meta-analysis of trials involving patients with the same primary cancer was performed to quantify the association between statin use and overall survival. Effect estimates were pooled by the inverse of their variance and are presented as pooled hazard ratios (HRs) with corresponding 95% CIs. Due to differences in recruited study populations, concomitant therapies and intervention protocols, we utilised a random-effects meta-analysis using the method of DerSimonian and Laird [20]. Heterogeneity was estimated using Cochran's Q and I² statistics. A two-tailed p value of less than 0.05 was defined as statistically significant for all analyses, apart from Cochran's Q test for heterogeneity, where a p value of 0.10 was selected as the threshold of significance. Results of this meta-analysis were illustrated by means of a forest plot. Analyses were performed with STATA version 15.1 (StataCorp LP, College Station, TX, USA).
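For readers who want to reproduce the pooling outside STATA, the following Python sketch implements inverse-variance random-effects pooling of hazard ratios with the DerSimonian-Laird heterogeneity estimate. The hazard ratios and confidence intervals in the example call are hypothetical (only the first matches a trial reported below).

```python
import numpy as np
from scipy import stats

def dersimonian_laird(hr, lo, hi):
    """Random-effects pooling of hazard ratios by inverse variance
    (DerSimonian & Laird). hr, lo, hi are per-trial hazard ratios with
    their 95% CI limits."""
    y = np.log(np.asarray(hr))                    # per-trial log hazard ratios
    se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # SE recovered from the CI
    w = 1.0 / se**2                               # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)               # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (se**2 + tau2)                 # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    se_mu = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    p = 2 * stats.norm.sf(abs(mu / se_mu))        # two-tailed p for pooled HR
    p_q = stats.chi2.sf(q, k - 1)                 # heterogeneity p value
    return {"HR": np.exp(mu),
            "CI95": (np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu)),
            "p": p, "Q": q, "p_Q": p_q, "I2": i2, "tau2": tau2}

# Hypothetical two-trial example; the second trial's figures are invented.
print(dersimonian_laird(hr=[0.42, 1.10], lo=[0.20, 0.70], hi=[0.83, 1.73]))
```

Recovering the standard error of each log hazard ratio from its 95% CI, as done here, is the usual workaround when per-trial variances are not reported.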
Mortality outcomes
Two trials investigated the effect of pravastatin 40 mg in patients with advanced hepatocellular carcinoma [22,31]. Allocation to pravastatin therapy was associated with significantly improved overall survival in only one of these studies [31]: median survival was 18 months in the pravastatin group and 9 months in the control group (HR 0.42, 95% CI 0.20-0.83). Meta-analysis of overall survival with pravastatin in both these trials revealed a HR of 0.69 (95% CI: 0.30-1.61), which was not statistically significant (p = 0.392) (Supplementary Fig. 1). Cochran's Q test (p = 0.024) and the I² statistic (80.5%) demonstrated a statistically significant degree of heterogeneity (p < 0.10). None of the other included trials demonstrated significant improvements in overall survival with statins, including for small-cell lung cancer, non-small cell lung cancer, oesophageal/GOJ/gastric cancers, colorectal cancer and pancreatic cancer. No improvements in progression-free survival were observed with allocation to statins individually in the nine studies (n = 2050) in which this outcome was reported [22][23][24][25][26][27][28][29][30]. There were no significant differences in overall response rate for the eight studies (n = 1727) reporting this outcome [23][24][25][26][27][28][29][30].
Safety profile
Five trials reported grades 3-5 adverse events. None of these trials demonstrated significant differences in grades 3-5 adverse events between statin and control groups (n = 1497) [21,24,26,27,29] (Supplementary Table 3). Statin-related adverse events (myalgia/myopathy, abnormal alanine aminotransferase/aspartate aminotransferase, or elevated creatine phosphokinase) were similar in proportion between groups in all nine studies reporting these outcomes [21][22][23][24][25][26][27][28]30]. Most trials had small sample sizes and may have been inadequately powered to detect clinically relevant differences in adverse events if they existed (Table 3).

Risk of bias

Figure 2 shows the assessment of risk of bias in the included trials as per the Cochrane risk of bias tool, illustrated using the robvis application [36]. Four trials reported random sequence generation and allocation concealment adequately [21,24,26,28], while this was insufficiently reported in the remaining seven. While six trials were open-label studies, any deviations from the intended intervention were unlikely to impact the outcome and were therefore deemed at low risk of performance bias [22,23,25,[29][30][31]. Risk of detection bias for all trials overall was determined to be low, given that knowledge of statin allocation (where applicable, in open-label studies) would seem unlikely to bias reported outcomes not involving subjective judgement, such as mortality outcomes or measures of treatment response. All trials were deemed to be at low risk of selective reporting.
Discussion
In summary, this systematic review included eleven trials of statin therapy in a total of 2165 patients with solid tumours, including small cell lung cancer (n = 846), non-small cell lung cancer (n = 106 and n = 68), colorectal cancer (n = 269), gastric adenocarcinoma (n = 244 and n = 30), oesophageal adenocarcinoma (n = 32), pancreatic cancer (n = 114), hepatocellular cancer (n = 83 and n = 323) and patients with brain metastases (from mainly breast and lung primaries) (n = 50). Most patients recruited had advanced malignancy and received concomitant palliative chemotherapy. Most patients received 40 mg of simvastatin or pravastatin (1.33 DDD), typically for short durations (on average fewer than 9 months). Most trials did not demonstrate significant improvements in overall survival (aside from one trial of pravastatin 40 mg in hepatocellular carcinoma [31]), and no trials reported improvements in progression-free survival or overall response rate. Meta-analysis of the two trials involving pravastatin 40 mg in advanced hepatocellular cancer [22,31] revealed no significant improvement in overall survival (Supplementary Fig. 1).
There was no indication in any trial of an increased rate of adverse events in those allocated to statins. Overall, included trials were deemed to be at low risk of bias using the Cochrane risk of bias tool [18].
Comparison with previous work
This is the second systematic review of RCTs to examine both the clinical efficacy and safety profile of statins in patients with solid tumours. The first included a meta-analysis of eight RCTs included in this systematic review [37]. That review provided a brief description of study characteristics, and the overwhelming focus was on quantitative synthesis of the effect of statins on OS, PFS, RR and adverse events. In contrast, our review is primarily a qualitative synthesis of included trials and provides more detail regarding important characteristics relating to included studies (country, blinding, duration of statin therapy, DDD) and participants (demography, cancer staging, performance status) to aid interpretation. Another more recent systematic review focused on a meta-analysis of nine of the RCTs included in our review to examine the effect of allocation to statins on OS and PFS [38]. As previously stated, we deliberately did not conduct a meta-analysis of all RCTs given the irreconcilable heterogeneity of included studies and uncertainty surrounding the assumption of a uniform treatment effect, with resultant difficulties in the interpretation of summary estimates. The Cholesterol Treatment Trialists' Collaboration individual patient data (IPD) meta-analysis of 22 RCTs of statin vs. control (primary or secondary prevention of cardiovascular disease, n = 134,537) and 5 RCTs of high-dose vs. low-dose statins (secondary prevention, n = 39,612) demonstrated no evidence of reduced incident cancer overall (RR 1.00, 95% CI 0.96-1.04) or related cancer-specific mortality (RR 0.98, 95% CI 0.92-1.05) for those allocated to the active arm [39]. No significant associations with mortality were demonstrated individually for any of the 23 primary sites examined. However, only cancers diagnosed after randomisation were considered (1.4% developed cancer per year after randomisation), and it is not clear how many of these patients were receiving the study drug from the point of cancer diagnosis. It is therefore difficult to make inferences about the effect of allocation to statins on mortality outcomes in patients with cancer from this IPD meta-analysis.
Limitations
It is possible that statins do not exert clinically relevant effects in patients with solid tumours; however, other explanations for the divergence of trial evidence from the promising preclinical and epidemiological data deserve consideration. Of the included studies, only four were phase III studies, and the remaining seven were not powered to detect significant differences in mortality outcomes. Of the phase III studies, three [22,26,27] were powered to detect relatively large effect sizes (HR 0.74, 0.65 and 0.67, respectively) and were at risk of type II error should the actual effect sizes have been more conservative. The largest trial to date, in small cell lung cancer (n = 846), was powered to detect a HR of 0.82 [24]. Treatment response to statins could feasibly differ between palliative and adjuvant settings, depending on their primary mechanism of action in individual tumour types (for example, a primary effect on inhibition of metastases, as seen in colorectal cancer, may favour response in the adjuvant setting [40]) and the influence of baseline tumour burden. All but one trial included patients with metastatic disease at baseline (65% of participants overall, where reported); in such patients with poor prognosis, receiving statins for short durations, it may not be possible to elicit or demonstrate a treatment response unless statins exert a marked cytotoxic effect (which would seem unlikely). Furthermore, it is difficult to generalise these trial findings to the adjuvant setting. Although an effective statin dose has yet to be defined in the setting of cancer therapy, and may differ from the licenced doses prescribed for the prevention of cardiovascular disease, the doses of statins assessed in these trials may have been insufficient. All trials used statins at sub-maximal doses (ten with a DDD of 1.33 and one with a DDD of 2.67); higher doses (e.g. atorvastatin 80 mg-DDD 4) are clinically licenced in cardiovascular prevention [41] and could be investigated in a trial. Stratification of effect sizes according to statin type, dose (as defined by DDD) and intended duration of therapy may have been informative; however, such comparisons would have included trials with different primary sites in each stratum, and the resulting estimates and tests for interaction would have been difficult to interpret. It is unclear whether statin use prior to randomisation is an effect modifier for the association between statin allocation and mortality outcomes, as most studies excluded prior/current statin use, and those studies which did not specifically exclude such users did not report the proportion of existing users in the randomised population.
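To make the type II error argument concrete, Schoenfeld's approximation relates the number of deaths required by a two-sided log-rank test (1:1 allocation) to the target hazard ratio; the sketch below applies it to the effect sizes quoted above. This is an illustrative back-of-envelope calculation, not a reconstruction of the included trials' actual power calculations.

```python
import math
from scipy import stats

def required_events(hr: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Schoenfeld's approximation: deaths needed to detect hazard ratio
    `hr` with a two-sided log-rank test and 1:1 allocation."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return math.ceil(4 * (z_a + z_b) ** 2 / math.log(hr) ** 2)

for hr in (0.82, 0.74, 0.67, 0.65):
    print(f"HR {hr}: ~{required_events(hr)} deaths")
# Detecting HR 0.82 needs roughly 800 deaths versus roughly 170 for HR 0.65,
# which is why trials powered only for large effects risk type II error.
```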
Recommendations
Given the imprecise estimates for efficacy and the limitations of previous trials discussed above, the current trial evidence base does not preclude the conduct of future statin trials in patients with solid malignancies. Further definitive phase III trials are required to determine the efficacy and safety profile of statins in individual tumour types, provided there exists sufficient scientific justification for their conduct, including a proposed mechanism of action applicable to the underlying tumour biology and the relevance of the pharmacokinetic properties of the selected statin. High-dose statin therapy should be considered to maximise the probability of observing clinically relevant effects, given the dose-dependent effects of statins in pre-clinical research [42] and trial data for their current licenced indications [41]. Future trials should be adequately powered to detect more conservative effect sizes than previously examined; indeed, relatively small clinically significant differences in primary outcomes may be justifiable given that statins are easily administered, low-cost medications with a favourable safety profile when used for their licenced indications [43]. Investigators should consider the merits of investigating statins in the adjuvant setting, where there is mounting pre-trial evidence [44]. Future trials should ideally collect blood and fresh frozen tissue to permit translational research studies, including biomarkers predictive of treatment response.
Conclusions
Overall, the trial evidence is not sufficiently robust to confirm or refute the efficacy and safety of statins in addition to the current standard of care in patients with solid malignant tumours. Most trials were not adequately powered to detect more conservative differences in efficacy outcomes, and statins were administered for short durations at sub-maximal doses in patients with predominantly advanced malignancy. Based on this evidence, it may be premature to disregard a potential beneficial role of statins in cancer therapy, and there is insufficient evidence to preclude the conduct of future trials. The potential role of high-dose statins in adjuvant settings deserves further research.
Authors' contributions JPT: methodology, search strategy, data extraction, and writing original draft. YPL: methodology and editing. LA: conceived the review, methodology, search strategy, data extraction, editing and supervision.
Funding information JPT is an Academic Clinical Fellow and LA is a Clinical Lecturer, both funded by the National Institute of Health Research (NIHR). The funding source had no input regarding the design, conduct, data collection, data analysis, interpretation, manuscript preparation or publication decision.
Data availability All data reported in this manuscript are found in the literature as cited in the text.
Compliance with ethical standards
Conflict of interest The authors declare that they have no competing interests. Disclaimer The views expressed are those of the authors and not necessarily those of the NHS, the NIHR or the Department of Health.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2020,
"sha1": "c05c0a79be4714cc4e6b52ed5e7b6c68fbe048d5",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00228-020-02967-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "1e52622cc7ef928b7c5829e5c3838d553a76f0a9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
We analyze the repulsive fermionic Hubbard model on square and cubic lattices with spin imbalance and in the presence of a parabolic confinement. We analyze the magnetic structure as a function of the repulsive interaction strength and polarization. In the first part of the paper we perform unrestricted Hartree-Fock calculations for the 2D case and find that above a critical interaction strength $U_c$ the system turns ferromagnetic at the edge of the trap, in agreement with the ferromagnetic Stoner instability of a homogeneous system away from half-filling. For $U<U_c$ we find a canted antiferromagnetic structure in the Mott region in the center and a partially polarized compressible edge. The antiferromagnetic order in the Mott plateau is perpendicular to the direction of the imbalance. In this regime the same qualitative behavior is expected for 2D and 3D systems. In the second part of the paper we give a general discussion of magnetic structures above $U_c$. We argue that spin conservation leads to nontrivial textures, both in the ferromagnetic polarization at the edge and for the Neel order in the Mott plateau. We discuss differences in magnetic structures for 2D and 3D cases.
I. INTRODUCTION
Cold atoms constitute a promising route to simulate model Hamiltonians of strongly correlated many-body physics with accurate control of system parameters [1,2]. After major experimental breakthroughs with ultracold bosonic atoms, like the Bose-Einstein condensation (BEC) of alkali metal gases [3,4] or the observation of the superfluid-Mott-insulator transition in a bosonic Hubbard model [5], the field of ultracold atoms is currently addressing problems of strongly correlated fermionic systems [6][7][8][9]. Arguably, the most prominent goal is the understanding of the phase diagram of the fermionic Hubbard model, which is believed to be of major importance for high-temperature superconductivity [6,[10][11][12][13][14][15]. A two-component Fermi gas in an optical lattice is well described by the single-band Hubbard model whenever the energy gap to higher bands is much larger than the on-site interaction, temperature, and chemical potential [1,2,11]. Only recently has the fermionic Mott transition been realized experimentally [16,17]. The major challenge for studying the magnetism of the fermionic Hubbard model is to reach temperatures below the Néel temperature [18,19]. In addition to the preparation of the antiferromagnetic state, characterization tools have to be developed to allow a clear identification of the magnetic structure. Possible experimental techniques include Bragg spectroscopy [20,21], local measurements of the magnetization [22,23], noise correlations [24,25], or the recently realized quantum-gas microscope [26].
The experimental control of spin imbalance in Fermi gases offered a unique way to study pairing phenomena beyond the standard BCS picture for attractive interactions [27,28]. Motivated by these results, we address in this work the effect of spin imbalance on the repulsive fermionic Hubbard model [29,30]. While we study strong optical lattices, where a single-band Hubbard model is realized, the magnetic structure at weak-to-intermediate lattice strength, including multiple bands, has also been discussed [31]. We find rich physics arising from the interplay between antiferromagnetic and Stoner ferromagnetic instabilities and spin imbalance.
The magnetic order of the two-dimensional (2D) repulsive Hubbard model has been extensively studied in the past (see Ref. [14]). Cold atoms in optical lattices differ in several ways from typical condensed-matter systems. First, there is a superposed external confinement potential, which divides the system into an incompressible Mott state in the center of the trap and a compressible region at the edge. Second, the total spin is conserved, which means that we need to minimize the energy of the system given a global magnetization rather than a finite Zeeman field. One interesting problem concerns the spatial distribution of the imbalance between Mott plateau and edge, and it turns out that the solution strongly depends on the interaction strength. The constraint of spin conservation affects the ferromagnetic instability at the edge by enforcing nontrivial spin textures [32,33], which also affects the Néel order in the Mott plateau in the center, as we will discuss in Sec. IV.
In this work we study the repulsive fermionic Hubbard model including a parabolic confinement potential. In the first part of this work, we perform unrestricted Hartree-Fock calculations for the 2D case. Relevant physics for this system can be identified based on the mean-field phase diagram for the repulsive 2D homogeneous Hubbard model [34]. Up to a critical interaction strength $U_c$, it predicts antiferromagnetic order close to half-filling and paramagnetic order elsewhere. In the spirit of a local density approximation, one might then expect that cold fermionic atoms in an optical lattice have antiferromagnetic correlations in spatial regions with one atom per site and are paramagnetic elsewhere. In order to account for a finite imbalance, the system has to change its magnetic structure. Using an unrestricted Hartree-Fock approach for the 2D system, we find a canted antiferromagnet in the Mott plateau in the trap center and a partially polarized edge. We note that canted antiferromagnetic order close to half-filling has been reported previously in Ref. [35]. With spin polarization along the $z$ direction, the canted antiferromagnet accommodates the imbalance by forming a constant $z$ component of the local magnetization, and simultaneously it benefits from the superexchange interaction by building up an alternating magnetic order perpendicular to the $z$ direction. Fixing the global imbalance and increasing the interaction strength results in more imbalance flowing to the edge.
Above a critical interaction strength $U_c$, the unrestricted Hartree-Fock calculation predicts that the system turns ferromagnetic at the edge of the trap, in agreement with the ferromagnetic Stoner instability of a homogeneous system away from half-filling. Furthermore, the orientation of the antiferromagnetic order in the Mott plateau is perpendicular to the direction of the ferromagnet in the edge. Spin conservation again has a strong impact on the magnetic structure of the system, since a uniformly polarized ferromagnetic edge together with an antiferromagnetic Mott plateau is generally not allowed. We will discuss spin textures in 2D and three-dimensional (3D) lattices for $U > U_c$ which fulfill spin conservation and which show the two prominent features predicted by the mean-field calculation, namely (a) magnetic instabilities toward ferromagnetism in the compressible edge and antiferromagnetism in the Mott plateau and (b) at the interface between Mott plateau and compressible edge, the orientation of the antiferromagnet and the ferromagnet are perpendicular to each other.
We are aware that the chosen mean-field approach generally overestimates symmetry breaking, and therefore the critical on-site interaction strength, $U_c$, corresponding to the appearance of an intrinsically ferromagnetic edge, will presumably be higher than the one predicted here. However, intrinsic ferromagnetism away from half-filling is expected for sufficiently large interaction strength [32,36], and in fact, experimental indications for itinerant ferromagnetism in a Fermi gas of ultracold atoms have been reported recently in Ref. [37]. Given the tunability of the ratio between on-site interaction and nearest-neighbor hopping, $U/t$, the interaction strength required for the presented phase separation should be accessible in experiment ($U/t = 150$ has been reported in Ref. [16]).
This article is organized as follows. In Sec. II, we introduce the model, and in Sec. III, we calculate the magnetic structure for $U < U_c$ within an unrestricted Hartree-Fock approach. The topology of the intrinsically ferromagnetic edge arising for $U > U_c$ is addressed in Sec. IV and in the Appendix. Finally, in Sec. V, we summarize our findings and comment on the experimental significance of our results.
II. MODEL
We consider the fermionic single-band Hubbard model on a 2D and 3D cubic lattice with an external parabolic confining potential. The Hamiltonian is

$$H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow} + \alpha \sum_{i} r_i^2 \, n_i, \qquad (1)$$

where $\sigma \in \{\uparrow,\downarrow\}$ labels the two fermionic components, which are the eigenstates of the $z$ component of a spin algebra. These two components can either be hyperfine states of the trapped fermions or even correspond to different atomic species. $c_{i\sigma}$ denotes the annihilation operator for a particle with spin $\sigma$ at site $i$, whereas $n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma}$ and $n_i = \sum_{\sigma} n_{i\sigma}$ are the spin-resolved and total occupation of site $i$. $U$ is the on-site interaction, and $t$ is the nearest-neighbor hopping. Finally, $r_i$ denotes the distance of site $i$ from the trap center measured in units of the lattice spacing $a$, and $\alpha = m\omega^2 a^2/2$ characterizes the strength of the external confinement. The associated energy scale is the confinement strength at the edge of the atom cloud with one atom per site, denoted by $V_t$. In 2D, $V_t = N\alpha/\pi$, where $N$ is the particle number.
III. UNRESTRICTED HARTREE-FOCK APPROACH IN 2D
We now apply a Hartree-Fock mean-field decoupling in the spin and the density channel. Since the trap breaks translational invariance, the mean-field parameters will be site-dependent. Allowing for arbitrary spin and density at each site, we obtain the following mean-field Hamiltonian [38]:

$$H_{\mathrm{MF}} = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right) + \sum_{i} \left( \alpha r_i^2 + \frac{U \langle n_i \rangle}{2} \right) n_i - 2U \sum_{i} \mathbf{M}_i \cdot \mathbf{S}_i, \qquad (2)$$

where $\mathbf{S}_i = (\sum_{\alpha\beta} c^{\dagger}_{i\alpha} \vec{\sigma}_{\alpha\beta} c_{i\beta})/2$ denotes the spin operator at site $i$ ($\vec{\sigma}$ is the vector of Pauli matrices) and $\mathbf{M}_i = \langle \mathbf{S}_i \rangle$ is the local magnetization. Magnetization and density are determined self-consistently for fixed total particle number $N$. In the following, we assume zero temperature. The energy of the self-consistent solution is given by the sum over the lowest $N$ single-particle energies of the Hamiltonian (2) plus the constant energy $E_0 = U \sum_i (\mathbf{M}_i^2 - \langle n_i \rangle^2/4)$. An important subclass of self-consistent solutions are the ones with collinear magnetization, where $M_y(i) = M_x(i) = 0$ on all sites. In particular, the generic phases of the homogeneous Hubbard model [34] have a collinear magnetization: either ferromagnetic, $M_z(i) = M$; antiferromagnetic, $M_z(i) = (-1)^i M$; or paramagnetic, $M_z(i) = 0$. However, we will show that generally the combination of trapping potential and imbalance will lead to a non-collinear-magnetization profile.
We are interested in the ground state for a given imbalance, characterized by the polarization $P = (N_\uparrow - N_\downarrow)/(N_\uparrow + N_\downarrow)$, which is an experimentally controllable parameter [27]. The imbalance is conserved, since the two components correspond to different internal states of the atoms (typically different hyperfine states) and transitions between these states are energetically forbidden unless they are driven by additional lasers. The single-particle eigenstates of the Hamiltonian in Eq. (2) only have well-defined spin if the magnetization is collinear. Generally, an expectation value $\langle S_z \rangle \neq 0$ can be tuned by spin-dependent chemical potentials or, equivalently, by a fictitious magnetic field in the $z$ direction, $H_z = -B S_z$.
The parabolic confinement will decrease the density away from the trap center. In a local density approximation, a cross section through the trap corresponds to a cut through the $(n, U)$ phase diagram at constant interaction $U$. Polarization can most easily be accommodated by ferromagnetism, but antiferromagnetic and paramagnetic regions can also account for finite imbalance. As discussed in the Introduction, in a canted antiferromagnet, a spatially constant component aligned with the field is added to the alternating component perpendicular to the imbalance. The paramagnetic region can be partially polarized in the spirit of Pauli paramagnetism, where the polarization is proportional to the applied field. In the following, we show that canted antiferromagnetic order is realized at half-filling, and we study how the imbalance is distributed between Mott plateau and edge as a function of interaction and imbalance. Self-consistent solutions of the Hubbard model on the two-dimensional square lattice (2) have either a collinear or coplanar magnetization [35,38], and we can set $M_y = 0$ without loss of generality. However, we note that enforcing vanishing global in-plane magnetization can lead to nontrivial three-dimensional topologies for the intrinsic ferromagnet [32,33], which will be discussed in Sec. IV. A minimal numerical sketch of the self-consistency scheme is given below.
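As an illustration of the self-consistency loop behind Eq. (2), the minimal Python sketch below iterates density and magnetization to convergence on a small square lattice with open boundaries. The lattice size, particle number, initial guesses, damping factor, and convergence threshold are illustrative choices rather than the parameters used for the figures; fixing a finite global imbalance would additionally require the fictitious field term $-BS_z$ discussed above.

```python
import numpy as np

# Unrestricted Hartree-Fock for the trapped Hubbard model, Eq. (2):
# hopping, trap + Hartree shift, and a local -U M_i . sigma term.
L, t, U, alpha, N = 8, 1.0, 5.0, 0.15, 30           # illustrative parameters
ns = L * L
sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)  # Pauli matrices

hop = np.zeros((ns, ns))                            # nearest-neighbor hopping
for x in range(L):
    for y in range(L):
        i = x * L + y
        if x + 1 < L:
            hop[i, i + L] = hop[i + L, i] = -t
        if y + 1 < L:
            hop[i, i + 1] = hop[i + 1, i] = -t

xs, ys = np.meshgrid(np.arange(L) - (L - 1) / 2,
                     np.arange(L) - (L - 1) / 2, indexing="ij")
r2 = (xs**2 + ys**2).ravel()                        # distance^2 from center

rng = np.random.default_rng(1)
n = np.full(ns, N / ns)                             # density guess
m = 0.05 * rng.standard_normal((ns, 3))             # magnetization guess

for _ in range(300):
    h = np.kron(hop, np.eye(2)).astype(complex)     # basis order: (site, spin)
    h += np.kron(np.diag(alpha * r2 + U * n / 2), np.eye(2))
    for i in range(ns):                             # -2U M_i . S_i term
        h[2*i:2*i+2, 2*i:2*i+2] -= U * np.einsum("a,ast->st", m[i], sig)
    e, v = np.linalg.eigh(h)
    psi = v[:, :N].reshape(ns, 2, N)                # fill lowest N states (T=0)
    n_new = np.einsum("isk,isk->i", psi.conj(), psi).real
    m_new = 0.5 * np.einsum("isk,ast,itk->ia", psi.conj(), sig, psi).real
    done = np.max(np.abs(n_new - n)) < 1e-6
    n, m = 0.5 * n + 0.5 * n_new, 0.5 * m + 0.5 * m_new   # damped mixing
    if done:
        break

print(n.reshape(L, L).round(2))   # look for a central plateau with n_i ~ 1
```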
A. The homogeneous system at half-filling

Figure 1 shows the mean-field energies of canted and collinear solutions as a function of increasing imbalance for the homogeneous system at half-filling. A rough explanation of why the canted antiferromagnetic order is favored can be given within the mean-field Heisenberg model. Here the energy increases only quadratically with polarization for the canted order but linearly with polarization for collinear magnetization. Since the solutions are the same at the extreme values $P = 0$ and $P = 1$, the ground state is always a canted antiferromagnet. (The interaction and particle number used in the following, $N = 540$, correspond to $V_t = 3.4t$, which is smaller than the on-site interaction, so that double occupancies are absent.)
B. Magnetization profile in the trap
Within the Mott plateau, we find canted antiferromagnetic order, as expected from the analysis of the homogeneous system. The cross sections of the spin resolved densities and the local magnetization in panels (b) and (c) of Fig. 2 show that the edge is partially polarized and does not have antiferromagnetic order, although the x component of the magnetization extends into the edge.
We now consider the distribution of a fixed imbalance for various on-site repulsions. Figure 3 illustrates that increasing interaction moves the imbalance to the edge. (We define the Mott plateau through $|n_i - 1| < 0.05$.) Above a critical interaction strength (of order $U_c \approx 10t$), the edge is fully polarized and the Mott plateau is a pure antiferromagnet. The maximum in the majority density at the border of the Mott plateau can be understood by recalling that in the homogeneous system for strong interactions, there is a first-order phase transition between an antiferromagnet close to half-filling and a ferromagnet at finite doping [34]. By decreasing interactions below $U_c$, the canting in the Mott plateau increases and the polarization at the edge decreases.
Next we describe the magnetic structure as a function of the global polarization, P , keeping the other parameters fixed. For U = 5t, the upper panel of Fig. 4 shows that both the polarization in the center with canted antiferromagnetic order and in the partially polarized edge increases linearly with the global polarization. The polarization at the edge is always larger than in the center until the Mott plateau disappears close to full polarization.
We now discuss the case of strong interaction (i.e., $U > U_c$). Here the edge is intrinsically ferromagnetic. As shown in Fig. 3, at $U = 12t$, the edge is already fully ferromagnetic in the absence of any fictitious magnetic field that is otherwise used to fix a certain global imbalance. Given the total number of atoms in the trap, $N$, and the number of atoms in the edge, $N_{01}$, this defines a critical polarization $P_c = N_{01}/N$, which is $P_c \approx 0.5$ in Figs. 3 and 4. Our mean-field approach predicts, for $P < P_c$ and $U > U_c$, a spatially uniform ferromagnetic edge with a direction other than the $z$ direction. This implies a finite global in-plane magnetization. However, such a solution is forbidden by spin conservation, and, as we discuss in detail in the next section, the preferred ferromagnetic order in the edge will instead form nontrivial spin textures for $P < P_c$ and $U > U_c$. For now we restrict our discussion to $P > P_c$ and $U > U_c$. Then the ferromagnetic order at the edge points in the $z$ direction and the antiferromagnet in the Mott plateau is canted, as shown in the lower panel of Fig. 4. We now increase the number of particles so that the center of the trap is more than half-filled. In agreement with the symmetry of the homogeneous Hubbard model around half-filling, we find that the edge between the Mott plateau and double-occupied sites shows similar features as the outer edge discussed above. Figure 5 shows the magnetization profile and the spin-resolved densities. Here $V_t = 15.7t$, which is larger than the chosen on-site interaction. The Mott plateau is formed on a ring and has canted antiferromagnetic order. Moving away from the Mott ring, the antiferromagnetic order rapidly vanishes and the edge is strongly polarized. In fact, for this rather large value of $U$, we see a small maximum of the majority component at the outer edge and a minimum in the minority component at the inner edge.
IV. NONTRIVIAL SPIN TEXTURES FOR U > U c
The Hartree-Fock calculation predicts that above a critical interaction strength $U_c$, the edge of the atom cloud turns ferromagnetic, even in the absence of any imbalance or fictitious magnetic field. In the previous section, we defined a critical polarization, $P_c$, corresponding to a fully polarized ferromagnetic edge along the $z$ direction and an antiferromagnetic Mott plateau. In this section, we discuss qualitatively the magnetic structure for $U > U_c$ and $P < P_c$.
A cold atom experiment is prepared from a paramagnetic state with no optical lattice. Controlling the imbalance between the two fermion species, the initial state is characterized by

$$\langle S_z^{\mathrm{tot}} \rangle = \frac{PN}{2}, \qquad \langle S_x^{\mathrm{tot}} \rangle = \langle S_y^{\mathrm{tot}} \rangle = 0, \qquad (3)$$

where $P$ is the polarization and $N$ the number of atoms. Since there is no coupling between the effective spin degree of freedom and the rest of the experimental system, the same constraints apply in the presence of an optical lattice and with strong on-site interaction $U$ [32,33]. This additional constraint is always fulfilled in our mean-field treatment except for $U > U_c$ and $P < P_c$, where a spatially uniform ferromagnetic edge is predicted with a direction other than the $z$ direction. However, such a solution leads to a finite global in-plane magnetization, which is forbidden by the boundary condition. In order to fulfill Eq. (3), itinerant ferromagnetism in cold atom systems can have nontrivial topology, as shown recently for balanced systems with filling factor less than unity everywhere [32,33].
In the following, we discuss the magnetic structure for $U > U_c$ and $P < P_c$, both for 2D and 3D systems. We look for magnetic structures that fulfill spin conservation (3) and which show the two prominent features predicted by the mean-field calculation, namely (a) magnetic instabilities toward ferromagnetism in the compressible edge and antiferromagnetism in the Mott plateau and (b) at the interface between Mott plateau and compressible edge, the orientation of the antiferromagnet and the ferromagnet are perpendicular to each other. Our qualitative analysis is based on the Ginzburg-Landau-type free energy functional (see Refs. [32] and [33])

$$F[\mathbf{M}] = \int d^D r \left[ \frac{\rho}{2} \left( \nabla \mathbf{M} \right)^2 + \frac{\beta}{4} \left( \mathbf{M}^2 - M_0^2 \right)^2 \right], \qquad (4)$$

where $\rho$ is the positive stiffness constant, $M_0$ is the magnitude of the favored magnetization, and $\beta > 0$ determines the cost of amplitude fluctuations. The favored spin texture for strong interactions, $U > U_c$, is determined by minimizing the total energy under the constraint of Eq. (3). In our qualitative analysis, we neglect that at the edge, the system parameters in Eq. (4) depend on the radius. This allows us to write the total energy of a spin structure as a sum of three contributions: the energies of the spin structures at the edge, inside the Mott plateau, and at the interface of both regions. We note that the energy scale associated with the spin structure of the ferromagnetic edge is of the order $t$ and thus much bigger than the small superexchange $t^2/U$ that determines the spin structure in the Mott plateau. Therefore, we first minimize the free energy of the intrinsically ferromagnetic edge. The remaining two energy terms describe the interface between ferromagnetic and antiferromagnetic order at the edge of the Mott plateau and the energy of the spin structure in the Mott plateau. Based on the different scaling with the system size, we argue that the interface term dominates for large systems. While the interface term scales with $r_M^{D-1}$, where $r_M$ is the radius of the Mott plateau and $D$ denotes the dimension, the antiferromagnet scales like $\ln r_M$ in 2D and like $r_M \ln r_M$ in 3D, as we will show. We minimize the interface term by choosing the orientation of the ferromagnetic and the antiferromagnetic order to be perpendicular to each other at the interface between Mott plateau and compressible edge. In the following, we discuss solutions where the Mott plateau has no net imbalance. In fact, in the limit of large interactions $U \to \infty$, the superexchange $t^2/U$ vanishes, so that one could allow for a strong polarization of the edge by polarizing the Mott plateau in the opposite way. As estimated in Appendix B, such a solution is, however, higher in energy for realistic interaction strengths.
A. 2D lattice
We argue that (i) in the presence of the Mott plateau a vortex structure for the ferromagnetic edge should be energetically favored, as depicted in Fig. 6(a), and (ii) a finite imbalance should result in a vortex structure of the ferromagnetic order parameter in the $xy$ plane together with a small $z$ component (see Fig. 7). An important experimental consequence is a strong $z$ component of the antiferromagnetic order in the center, which is aligned perpendicular to the ferromagnetic order in the edge. In Appendix A, we derive the energy of the different topological orders of the ferromagnetic edge: vortex, domain wall, or Skyrmion. These structures are illustrated in Fig. 6. It turns out that for realistic parameters, the vortex is lowest in energy. For finite imbalance, the edge will then be described by a ferromagnetic vortex in the $xy$ plane and a constant $z$ component. The energetically preferred direction of the antiferromagnetic order in the Mott plateau is perpendicular to that of the ferromagnet at the edge. The antiferromagnet in the Mott plateau will therefore have a small in-plane magnetization forming a vortex, which grows with increasing imbalance, and a strong $z$ component, as illustrated in Fig. 7.
B. 3D lattice
Similar arguments can be applied to a 3D system. Taking into account the boundary condition of vanishing global magnetization in balanced systems and applying Eq. (4), one finds that the preferred structure of the ferromagnetic edge in a balanced system is a hedgehog [32,33]. As shown in Appendix A, the energetically preferred antiferromagnetic order in the center should then be either a planar vortex structure, $\mathbf{M}_{AF} = \pm M_0 \hat{e}_\phi$, or a 3D spherical vortex, $\mathbf{M}_{AF} = M_0 \hat{e}_\theta$, where $\hat{e}_\phi = (-\sin\phi, \cos\phi, 0)$ and $\hat{e}_\theta = (\cos\theta \cos\phi, \cos\theta \sin\phi, -\sin\theta)$ are spherical unit vectors.
Both solutions are illustrated in Fig. 8. They guarantee that at the edge of the Mott plateau, where the antiferromagnetic order of the center of the trap has an interface with the ferromagnetic order at the edge, the orientations of the antiferromagnet and the ferromagnet are perpendicular to each other. A violation of this requirement would cost an energy that scales with the area of the interface, $r_M^2$. Deformations of the perfect Néel order in the center of the trap, either in amplitude or phase, are minimized, and the corresponding energy scales as $r_M \ln(r_M/a)$. For perfectly balanced systems, the vortex within the Mott plateau could lie in any plane. Imbalance will deform the hedgehog, leading to a net $z$ component (see Fig. 9).

FIG. 8. While the ferromagnetic edge always has a hedgehog structure, the Mott plateau has either a planar vortex structure (a) or a 3D "spherical" vortex structure (b). While both structures have the same energy for the balanced system, the planar vortex is favored by finite imbalance (see Fig. 9).
FIG. 9. (Color online) Illustration of magnetic structures in 3D for an imbalanced system with U > U c and 0 < P < P c . Outer arrows indicate the magnetization at the ferromagnetic edge. Inner arrows illustrate the staggered magnetization in the antiferromagnetic Mott plateau. Note that finite imbalance only deforms the hedgehog structure of the ferromagnet edge, while the antiferromagnetic order in the Mott plateau is unchanged [see Fig. 8(a)].
While a finite imbalance does not affect the energy of a vortex in the $xy$ plane, it increases the energy for the vortices in other planes or for the spherical vortex. Therefore, we expect that for imbalanced systems in 3D with $U > U_c$, the antiferromagnetic order in the Mott plateau will form a planar vortex structure in the $xy$ plane, as in Fig. 9. In contrast to the 2D case, where we expect a strong $z$ component of the antiferromagnetic order for $U > U_c$ and $P < P_c$, we expect a vanishing $z$ component in 3D.
V. DISCUSSION
In this work, we studied an interacting two-component Fermi gas on a 2D and 3D cubic lattice subject to a parabolic external confinement. We analyzed the magnetic structure as a function of the repulsive interaction strength and spin imbalance. Applying an unrestricted Hartree-Fock calculation for a 2D system, we identified the critical interaction strength $U_c$ where the edge turns ferromagnetic and analyzed the spatial distribution of a finite imbalance between the two Fermi components for $U < U_c$. We found that the system has a canted antiferromagnetic structure at half-filling, with antiferromagnetic ordering in the plane perpendicular to the imbalance, and is partially polarized elsewhere. Fixing the global imbalance and increasing the interaction strength results in more imbalance flowing to the edge. We expect the same qualitative behavior for 3D in that regime.
In the second part of the work, we gave a general discussion of the magnetic structure above $U_c$, both for 2D and 3D. We showed that spin conservation generally leads to nontrivial spin textures, both in the Mott plateau and at the edge. We predict that the edge has non-vanishing in-plane magnetization with a vortex structure in 2D and a hedgehog structure in 3D. We furthermore expect that for $U > U_c$ and small imbalance, the antiferromagnetic order in the Mott plateau has a finite $z$ component in 2D, while in 3D a vanishing $z$ component of the antiferromagnetic order in the Mott plateau is predicted.
We expect our findings to have clear experimental signatures if temperatures below the Néel temperature can be reached. A phase-contrast image [27] showing the density of each component separately can test our prediction of a Mott plateau with ferromagnetic borders. Detection of a canted antiferromagnet in the Mott plateau requires direct access to the order parameter. This can be achieved, for instance, through noise correlations [24] or by measuring the local magnetization [22,23,26]. Additionally, one can use Bragg spectroscopy [20,21], where the doubled unit cell of the antiferromagnet results in additional Bragg peaks. Furthermore, the intensity of the additional Bragg peaks can then be used to measure the strength of the $z$ component of the antiferromagnet.
APPENDIX A: GINZBURG-LANDAU THEORY
Following Ref. [32], we apply a Ginzburg-Landau-type description of the magnetism based on Eq. (4) to analyze the magnetic structure for $U > U_c$, where the edge is intrinsically ferromagnetic. By enforcing a vanishing global in-plane magnetization, the ferromagnetic edge acquires nontrivial topology. For the energy estimate, we consider three energy contributions. The most relevant contribution is the free energy of the intrinsically ferromagnetic edge. Thereafter, the contribution of the interface between ferromagnetic and antiferromagnetic order at the edge of the Mott plateau has to be taken into account, which is minimized by choosing the orientations of the ferromagnetic and antiferromagnetic order to be perpendicular to each other. Finally, the free energy of the antiferromagnetic Mott plateau has to be minimized.
We simplify our calculation by assuming constant parameters $\rho$, $\beta$, and $M_0$ in Eq. (4), thus neglecting a radial dependence of these parameters due to the trapping potential [33]. We denote the radius of the atom cloud by $R_c$ and the radius of the Mott plateau by $r_M$.
A. 2D lattice
In a 2D system, we expect the magnetization at the edge to form a vortex-like structure. Furthermore, we claim that for a small imbalance, the vortex will lie in the $xy$ plane with a uniform magnetization component pointing in the $z$ direction. The energetically preferred direction of the antiferromagnetic order in the Mott plateau is perpendicular to that of the ferromagnet at the edge. At the interface, the antiferromagnet in the Mott plateau will have an in-plane magnetization forming a vortex and a $z$ component. The lowest energy corresponds to the maximally allowed $z$ component of the antiferromagnetic order parameter, thus minimizing the in-plane vortex.
We now give quantitative arguments for the physics described in the preceding discussion, based on a comparison of the energies of a ferromagnetic edge with different topologies: either a vortex, a domain wall, or a Skyrmion, as depicted in Figs. 6 and 10. First we discuss the balanced system. For the vortex, the direction of magnetization is independent of radius, but it rotates by $2\pi$ on each circumference. A particular realization of a vortex is $\mathbf{M}_V = M_0 \hat{e}_r$. However, for the balanced system, there is global rotation invariance and the plane of the vortex is arbitrary. Using Eq. (4), the energy cost of a vortex is given by $E_V = \pi\rho M_0^2 \ln(R_c/r_M)$. Even in the absence of a Mott plateau, the lattice spacing, $a_0$, gives a natural cutoff for the core energy, leading to $E_V < \pi\rho M_0^2 \ln(R_c/a_0)$. A vortex naturally fulfills the requirement of vanishing global magnetization in all three spatial directions.
Another possibility is the formation of a domain wall. In the inner ring, $r_M < r < r_0$, there is a uniform polarization [e.g., $\mathbf{M} = M_0 \hat{e}_z$], and within a finite region, $r_0 < r < r_0 + L$, the sign of the magnetization is inverted [e.g., $\mathbf{M} = M_0 (1 - 2(r - r_0)/L)\, \hat{e}_z$]. In the outer ring, $r_0 + L < r < R_c$, the magnetization points in the opposite direction [e.g., $\mathbf{M} = -M_0 \hat{e}_z$]. While the inner and outer rings have a perfect uniform ferromagnetic order, the domain wall is energetically costly due to the suppression of the amplitude of the order parameter. The energy cost is given by $E_D = \pi\rho M_0^2 (r_0/L + 1/2)[4 + 4L^2/(15\xi^2)]$, with $\xi = \sqrt{\rho/(\beta M_0^2)}$ denoting the coherence length. $r_0$ and $L$ are not independent of each other but related by the condition of vanishing global magnetization. In the absence of any Mott plateau, $r_M = 0$, the smallest allowed value is $r_0/L \approx 0.6$, which increases with $r_M$. Neglecting the term containing the coherence length, we therefore obtain a lower bound for the energy of the domain wall: $E_D > 4\pi\rho M_0^2$. Finally, we estimate the energy of a Skyrmion. The magnetization is uniform [e.g., $\mathbf{M} = M_0 \hat{e}_z$] in the inner ring, $r_M < r < r_0$, and then it rotates by an angle $a\pi$ around a local axis in a ring of width $L$, $r_0 < r < r_0 + L$ [e.g., $\mathbf{M} = M_0 \cos(\frac{r - r_0}{L} a\pi)\, \hat{e}_z + M_0 \sin(\frac{r - r_0}{L} a\pi)\, \hat{e}_r$]. For $a = 1$, the magnetization in the outer ring is inverted, while for other angles, it has a vortex structure [e.g., $\mathbf{M} = M_0 \cos(a\pi)\, \hat{e}_z + M_0 \sin(a\pi)\, \hat{e}_r$]. The Skyrmion interpolates between the inner and outer rings by tilting the order parameter, keeping the amplitude of the magnetization fixed, in contrast to the domain wall, where the amplitude is suppressed. In the region $r_0 < r < r_0 + L$, the magnetization of the Skyrmion changes in the radial direction and along the circumference. The radial dependence of the magnetization gives rise to an energy contribution given by $E_S = \pi\rho M_0^2 (r_0/L + 1/2)(a\pi)^2$. Again the variables $r_0$, $L$, and $a$ are not independent of each other but related by the condition of vanishing global magnetization. By minimizing this energy only, and neglecting the energy cost of the change of magnetization along the circumference, we get a lower bound for the Skyrmion energy: $E_S > \pi\rho M_0^2 [r_M/(R_c - r_M) + 1/2]\pi^2$. According to these estimates, the lower bound of the energy for the domain wall is larger than the total energy of the vortex for $r_M > \exp(-4) R_c \approx R_c/50$, and the lower bound for the Skyrmion is larger than the vortex energy for $r_M > \exp(-5) R_c \approx R_c/150$. In fact, the real minima for both Skyrmion and domain wall will be larger. Since the radius of the whole atomic cloud is about 50 lattice sites [16,17], our conservative estimate shows that the vortex should be favored for practically any size of the Mott plateau. We note that for the results shown in the main part of this article, $r_M/R_c \approx 1/2$. For the balanced system there is global rotation invariance and the plane of the vortex structure is arbitrary. However, in the presence of a finite imbalance, the energetically preferred magnetization will be a vortex in the $xy$ plane with a uniform ferromagnetic $z$ component. Assuming that the directions of the ferromagnetic edge and the antiferromagnet in the Mott plateau are perpendicular, we expect the direction of the antiferromagnet to have a large $z$ component. A numerical comparison of the three estimates is given below.
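The following short Python sketch tabulates the three estimates just derived, in units of $\pi\rho M_0^2$ and using the quoted lower bounds for the domain wall and Skyrmion; the cloud radius of 50 sites is the value quoted above, and the Mott radii are illustrative.

```python
import numpy as np

def e_vortex(r_m, r_c):
    """Vortex energy E_V = ln(R_c / r_M), in units of pi*rho*M0^2."""
    return np.log(r_c / r_m)

def e_domain_wall_lb():
    """Lower bound E_D > 4 (coherence-length term dropped)."""
    return 4.0

def e_skyrmion_lb(r_m, r_c):
    """Lower bound E_S > (r_M/(R_c - r_M) + 1/2) * pi^2, i.e. a = 1."""
    return (r_m / (r_c - r_m) + 0.5) * np.pi**2

Rc = 50.0                      # cloud radius in lattice sites, as quoted
for rM in (Rc / 150, Rc / 50, Rc / 2):
    print(f"r_M = {rM:6.2f}: E_V = {e_vortex(rM, Rc):.2f}, "
          f"E_D > {e_domain_wall_lb():.2f}, "
          f"E_S > {e_skyrmion_lb(rM, Rc):.2f}")
# The vortex drops below the domain-wall bound for r_M > exp(-4) R_c and
# below the Skyrmion bound for r_M > exp(-5) R_c, as stated above.
```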
B. 3D lattice
Minimizing the free energy in Eq. (4), one finds that the preferred structure of the ferromagnetic edge in a balanced 3D system is a hedgehog [32,33]. We now explain why we expect the antiferromagnetic order in the center to have a planar vortex structure for $U > U_c$ and $P < P_c$. First, at the edge of the Mott plateau, the preferred direction of the antiferromagnetic order is perpendicular to the orientation of the ferromagnetic order in the edge. A violation of this requirement will cost an energy that scales with the area of the interface, $r_M^2$. In order to fulfill the boundary condition at the edge of the Mott plateau, the antiferromagnetic order in the trap center can neither have perfect Néel order nor a hedgehog configuration, since the latter needs to be oriented in the radial direction. One possibility is to build up the 3D magnetization plane by plane from the preferred 2D solution at each height $z$, expressed in terms of the cylindrical unit vectors $\hat{e}_z = (0, 0, 1)$ and $\hat{e}_\rho = (\cos\phi, \sin\phi, 0)$. However, this solution is not realized in 3D, since the change in the $z$ component of the magnetization between different planes costs a large energy that scales with the volume of the Mott plateau, $E_{AF} \propto r_M^3/a^2$. In fact, the preferred magnetic orders in the Mott plateau are either planar vortex structures like $\mathbf{M}_{AF} = M_0 \hat{e}_\phi$ or 3D solutions like $\mathbf{M}_{AF} = M_0 \hat{e}_\theta$, where $\hat{e}_\phi = (-\sin\phi, \cos\phi, 0)$ and $\hat{e}_\theta = (\cos\theta \cos\phi, \cos\theta \sin\phi, -\sin\theta)$ are spherical unit vectors. For the balanced system, these solutions have the same energy, given by $E_{AF} \approx 4\pi M_0^2 \rho\, r_M \ln(r_M/a)$. However, imbalance will deform the hedgehog at the edge of the trap, leading to a net $z$ component. Such a deformation increases the energy of these solutions, except for a vortex in the $xy$ plane. Therefore, we expect the magnetization profile in 3D for $U > U_c$ and small imbalance to be given by a (slightly deformed) hedgehog ferromagnet at the edge of the trap and an antiferromagnetic order with a vortex structure in the $xy$ plane in the center of the trap. In contrast with the 2D case, where we expect a strong $z$ component in the antiferromagnetic order for $U > U_c$ and $P < P_c$, we expect a vanishing $z$ component of the antiferromagnetic order in the Mott plateau in 3D.
APPENDIX B: POLARIZING THE MOTT PLATEAU
In Sec. IV, we propose spin structures for $U > U_c$ and $P < P_c$ that minimize the total energy while fulfilling the spin conservation constraint (3). The constraint (3) prohibits the formation of a uniform ferromagnet at the edge of the trap if simultaneously the Mott plateau has an antiferromagnetic structure with zero net imbalance. However, since the constraint (3) applies to the whole system, one could imagine a system consisting of a fully polarized ferromagnetic edge and a Mott plateau strongly polarized in the opposite direction, such that the global imbalance is small or even zero. We now justify why such solutions are energetically more costly than the ones proposed in Sec. IV.
We therefore discuss the magnetic structure of a balanced Fermi gas in a 3D trap. The system can be divided into a Mott plateau for radius $r < r_M$ and an edge for radius $r_M < r < R_c$. In Sec. IV, we claimed that the ferromagnetic structure at the edge forms a hedgehog. Applying the Ginzburg-Landau-type free energy in Eq. (4), the energy of a hedgehog can be estimated. We strongly simplify our calculation by assuming a constant density at the edge [32]. The magnitude $M_0$ of the ferromagnetic magnetization is therefore constant, along with the stiffness, $\rho$, in Eq. (4). We now estimate the energy of the ferromagnetic hedgehog as $E_F = 8\pi\rho M_0^2 (R_c - r_M)$. The stiffness of a homogeneous Fermi gas is given by $\rho = 1/(12 k_F^2 \chi_0) = \hbar^2/(36 m n)$, where $k_F$ is the Fermi wave vector, $\chi_0$ the magnetic susceptibility, $m$ the mass of the fermions, and $n$ is the density (see Ref. [33]). For sufficiently small densities, the mass of a particle hopping between nearest neighbors in a 3D cubic lattice is given by $m = \hbar^2/(t a^2)$, where $a$ is the lattice constant and $t$ the hopping matrix element between nearest-neighbor sites. The stiffness on a 3D cubic lattice is therefore given by $\rho \approx t a^2/(36 n)$, and the energy of the balanced hedgehog becomes $E_F \approx (2\pi/9)\, t (R_c - r_M)/(a^4 n) \sim t N_E^{1/3}$, where $N_E$ is the number of atoms at the edge of the trap. This energy could be gained by uniformly polarizing the edge. However, due to the conservation of the total imbalance, the Mott plateau would then also be polarized, by $P_M = N_E/N_M$, where $N_M$ denotes the number of atoms in the Mott plateau. The corresponding cost in energy can be estimated as $E_{AF} \approx P_M^2 \cdot 2N_M \cdot 4t^2/U = 8 t^2 N_E^2/(U N_M)$, where $2N_M$ is the number of nearest neighbors in the Mott plateau and $4t^2/U$ is the superexchange. The energy cost of polarizing the Mott plateau is smaller than the energy gain of forming a uniform ferromagnetic edge if $U/t > 36 N_E^{5/3}/(\pi N_M)$. This is not satisfied for realistic particle numbers, $N_E, N_M > 10^3$ and $N_E/N_M \lesssim 1$. We thus conclude that in 3D, for $U > U_c$ and $P < P_c$, the Mott plateau is not significantly polarized. The magnetic structures that minimize the total energy and fulfill Eq. (3) are therefore the ones presented in Sec. IV. We expect similar arguments to hold in 2D.
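As a numerical sanity check of the criterion just derived, the following sketch evaluates $36 N_E^{5/3}/(\pi N_M)$ for a few illustrative particle numbers; for edge and plateau populations of comparable size, the required $U/t$ sits far above experimentally reported values ($U/t = 150$ in Ref. [16]).

```python
import math

def critical_Ut(n_edge: float, n_mott: float) -> float:
    """Minimum U/t above which polarizing the Mott plateau would pay off."""
    return 36 * n_edge ** (5 / 3) / (math.pi * n_mott)

for n_edge, n_mott in [(1e3, 1e3), (1e3, 1e4), (1e4, 1e4)]:
    print(f"N_E = {n_edge:.0e}, N_M = {n_mott:.0e}: "
          f"need U/t > {critical_Ut(n_edge, n_mott):.0f}")
# Output: thresholds of ~1146, ~115, and ~5316 respectively; for
# N_E/N_M of order one the threshold exceeds 10^3, well beyond U/t ~ 150.
```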
"year": 2009,
"sha1": "db38c25cf109e1c1a4643e321f2080a8a97dd318",
"oa_license": "CCBY",
"oa_url": "https://dash.harvard.edu/bitstream/1/26370363/1/ref120.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3a1ade8c3ae243d59b2c0fbedadfa08425e4ed9f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Objective: The aim of this study is to evaluate the patients who are followed up with a diagnosis of head and chest trauma; to investigate the revised trauma score, the simplified acute physiology score, and various biochemistry parameters and to reveal the role of these values on mortality rates. Methods: Our study is an observational cohort study that retrospectively examines head, and thoracic trauma patients followed in the university hospital ICU, based on the hospital database. Data of trauma patients who were treated consecutively in the ICU of a tertiary hospital between June 2016 and June 2017 were recorded. Patients were divided into two groups as mortality and living. Demographic data of the patients simplified acute physiology score (SAPS II) and revised trauma score (RTS), Sorumlu Yazar: Özlem ÖNER Dokuz Eylül Üniversitesi Anesteziyoloji ve Reanimasyon Anabilim Dalı, İzmir, Türkiye namdaroner@gmail.com Geliş Tarihi: 01.01.2021 – Kabul Tarihi: 21.03.2021 Yazar Katkıları: A) Fikir/Kavram, B) Tasarım, C) Veri Toplama ve/veya İşleme, D) Analiz ve/veya Yorum, E) Literatür Taraması, F) Makale Yazımı, G) Eleştirel İnceleme Adnan Menderes Üniversitesi Sağlık Bilimleri Fakültesi Dergisi 2021: 5(2); 230-238 Journal of Adnan Menderes University Health Sciences Faculty doi: 10.46237amusbfd.852002 Araştırma Makalesi Research Article Adnan Menderes Üniversitesi Sağlık Bilimleri Fakültesi Dergisi 2021: 5(2); 230-238 Journal of Adnan Menderes University Health Sciences Faculty 231 length of stay in mechanical ventilation and intensive care unit, mortality rates, admission Glasgow coma score (GCS), hemodialysis requirements during follow-up, first post-ICU admission Nutritional status and various biochemistry parameters admitted to intensive care unit were evaluated within 48 hours. Results: In our study, 28-day mortality rates were found to be higher in patients with head and chest trauma, those who underwent hemodialysis treatment (p = 0.0016), were intubated, followed by mechanical ventilation (p <0.001), and fed parenterally. Patients with 28-day mortality rates, simplified acute physiology score (SAPS) 2 (p <0.001), length of stay in the ICU (p = 0.009), high mechanical ventilation duration (p <0.001), and those with increased serum creatinine and glucose levels found high. In patients with a high 28-day mortality rate, GCS, RTS, and serum albumin levels were found to be significantly lower. Conclusion: We think that RTS, GCS, and serum albumin levels may be useful markers to estimate the mortality rates of head and chest trauma patients.
INTRODUCTION
According to a World Health Organization publication from 2014, trauma-related deaths worldwide exceed 5,000,000 cases per year and constitute 9% of all deaths (1). In our country, deaths due to trauma rank fifth among the causes of death (2). Approximately 400,000 patients in Turkey are victims of traumatic injury, including accidents, assault, vehicle collisions, and penetrating trauma (3). Trauma is therefore a major cause of death and a public health problem in Turkey.
Approximately half of trauma-related deaths are due to head trauma (4), and the mortality rate of severe head trauma is 35% (5). Additionally, twenty to twenty-five percent of trauma-associated deaths are due to thoracic trauma (6). Management of trauma patients is a highly complex process, usually involving considerable resuscitation efforts, comprehensive imaging, multiple operations, prolonged intensive care unit (ICU) stays, and complex rehabilitation programs. In severe trauma cases, reducing mortality and morbidity by increasing the quality of care has become a target for healthcare professionals. Although keeping vital parameters within reasonable limits is a valuable indicator of effective resuscitation, evaluation of these values alone is not sufficient for the management of critical patients. For this reason, trauma centers use trauma scores, intensive care scores, and various biochemistry parameters that can help guide diagnosis and treatment. Today, factors affecting mortality are being investigated in trauma patients followed in ICUs.
In our study, we aimed to investigate patients' serum glucose, creatinine, and albumin levels, together with their GCS, RTS, and SAPS II values, and the predictive power of these values on mortality.
MATERIALS AND METHODS
The study started after obtaining approval from the Dumlupınar University Faculty of Medicine Ethics Committee (Approval No: 2018-2/9). In this study, the data of 1052 patients followed up in the anesthesia intensive care unit between July 2016 and July 2017 were retrospectively reviewed. The data of pregnant patients were excluded from the study. 124 consecutive patients with a diagnosis of head and thoracic trauma were included in the study. Demographic data of the patients, SAPS II, RTS, GCS, length of stay on mechanical ventilation and in the ICU, hemodialysis requirements, nutritional status within the first 48 hours after admission to the ICU, and various biochemistry values at ICU admission were evaluated. The 28-day mortality rates of the patients were recorded. Patients were divided into two groups: survivors and patients who died within the first 28 days. The predictive power of the biochemistry parameters on mortality, together with the calculated ICU and trauma scores, was investigated.
Statistical Analysis
The data of the study were transferred to the SPSS v20 program. Categorical data are expressed as frequency and percentage, while continuous data are expressed as mean, standard deviation, median, minimum, and maximum. Categorical data were compared with the chi-square test. The conformity of continuous data to the normal distribution was tested with the Shapiro-Wilk test. When comparing means, the t-test was used for normally distributed data, whereas the Mann-Whitney U test was used for data not conforming to the normal distribution. Survival analyses were performed with the Kaplan-Meier method and log-rank test. Variables found significant in the survival analyses were entered into a Cox regression model. p < 0.05 was accepted as the level of significance.
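For readers who want to reproduce this kind of pipeline, a minimal sketch in Python is given below. The file name and all column names (`trauma_icu.csv`, `event`, `glucose`, `dialysis`, `time_days`, `albumin`, `gcs`, `crp`) are hypothetical placeholders, not the study's actual dataset, and the authors used SPSS rather than Python:

```python
import pandas as pd
from scipy import stats
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("trauma_icu.csv")  # hypothetical file, one row per patient
dead, alive = df[df["event"] == 1], df[df["event"] == 0]

# Normality check decides between parametric and non-parametric comparison.
_, p_norm = stats.shapiro(df["glucose"])
if p_norm > 0.05:
    _, p = stats.ttest_ind(dead["glucose"], alive["glucose"])
else:
    _, p = stats.mannwhitneyu(dead["glucose"], alive["glucose"])
print(f"glucose, dead vs alive: p = {p:.4f}")

# Survival comparison between two groups via the log-rank test.
g1, g2 = df[df["dialysis"] == 1], df[df["dialysis"] == 0]
lr = logrank_test(g1["time_days"], g2["time_days"],
                  event_observed_A=g1["event"], event_observed_B=g2["event"])
print(f"log-rank p = {lr.p_value:.4f}")

# Cox regression on variables found significant in univariate analysis.
cph = CoxPHFitter()
cph.fit(df[["time_days", "event", "albumin", "gcs", "crp"]],
        duration_col="time_days", event_col="event")
cph.print_summary()
```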
Figure 1. Overall Survival of Patients
The overall survival time of patients hospitalized in the intensive care unit due to trauma did not differ significantly according to gender (p = 0.869), hemodialysis requirement (p = 0.189), or nutritional status (p = 0.232) (Table 3). In the Cox regression analysis, CRP (p = 0.023), albumin (p = 0.002), and GCS (p = 0.013) were found to be associated with 28-day mortality (Table 4).
DISCUSSION
In intensive care patients, mortality may vary according to age, gender, pre-trauma health status, severity of the injury, and response to treatment. In studies to date, the mortality rate of trauma patients has varied between 15% and 40% (7). In this study, the mortality rate was 15.3%, in line with the literature.
When the demographic data of trauma patients are examined, it is seen that males are in the majority (8). In our study, 75.8% of the patients were male. Similarly, one study of 80,544 trauma cases reported 71% of patients as male and 28% as female (9).
In many studies, the age factor was found to be among the significant factors affecting mortality, but it was not statistically significant in our study. We attribute this to the very similar mean ages of the two groups. The mean age of the patients included in our study was 44.7 years.
Trauma patients are evaluated using several trauma scores. Scoring systems are classified as physiologic, anatomic, or combined anatomic-physiologic (10). The revised trauma score (RTS) is a physiology-based triage score with three variables: respiratory rate, systolic blood pressure, and GCS. In this study, a statistically significant association was found between low RTS and increased mortality. A statistically significant association between low RTS and increased mortality has also been reported elsewhere (11). Similarly, an analysis of 1276 trauma deaths between 1995 and 2000 showed that RTS could be used to predict mortality in trauma patients (12). Because RTS includes the GCS, this scoring system allows better assessment when there is head injury; in the absence of significant head trauma, however, the predictive value of RTS for survival may decrease. In our study, in parallel with the literature, RTS values were significantly lower in non-survivors; the presence of head trauma in most of the patients contributed to this result. The increase in respiratory rate due to pain and respiratory distress in thoracic trauma further supports the power of RTS to predict mortality.
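As context for the score discussed above, the RTS combines coded values (0-4) of GCS, systolic blood pressure, and respiratory rate using the weights published by Champion et al.; the sketch below is an illustrative implementation of that published formula, not code from this study:

```python
def code_gcs(gcs: int) -> int:
    """Map raw GCS (3-15) to its RTS code (0-4)."""
    if gcs >= 13: return 4
    if gcs >= 9:  return 3
    if gcs >= 6:  return 2
    if gcs >= 4:  return 1
    return 0

def code_sbp(sbp: float) -> int:
    """Map systolic blood pressure (mmHg) to its RTS code (0-4)."""
    if sbp > 89: return 4
    if sbp > 75: return 3
    if sbp > 49: return 2
    if sbp > 0:  return 1
    return 0

def code_rr(rr: float) -> int:
    """Map respiratory rate (breaths/min) to its RTS code (0-4)."""
    if 10 <= rr <= 29: return 4
    if rr > 29:        return 3
    if rr >= 6:        return 2
    if rr > 0:         return 1
    return 0

def revised_trauma_score(gcs: int, sbp: float, rr: float) -> float:
    """Weighted RTS (range 0-7.8408); lower values predict higher mortality."""
    return 0.9368 * code_gcs(gcs) + 0.7326 * code_sbp(sbp) + 0.2908 * code_rr(rr)

print(revised_trauma_score(gcs=15, sbp=120, rr=16))  # 7.8408 for normal physiology
```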
SAPS II is based on multiple logistic regression equations that describe abnormalities in multiple physiologic variables during the first 24 hours in the ICU, because many deaths occur soon after admission (13). SAPS II has been shown to have predictive value for ICU death (14). In the present study, the median SAPS II score of those who died within 28 days was significantly higher; SAPS II has been described as having an excellent ability to discriminate between survivors and non-survivors (13). The GCS, used both in calculating the RTS and in SAPS II, is essential in patient follow-up, especially in the presence of head trauma. The GCS, developed by Teasdale and Jennett, is used to evaluate the neurological status of the patient and cerebral dysfunction in multiple trauma associated with head trauma (15). Low GCS values are associated with increased mortality. In our study, a statistically significant association was found between low GCS and increased mortality. GCS has also been reported as a clinical variable with a statistically significant relationship to abnormal CT findings (16).
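SAPS II converts the raw score into a predicted hospital mortality through a published logistic equation (Le Gall et al., 1993). The sketch below implements that conversion for illustration; the coefficients are from the original publication, not from this study:

```python
import math

def saps2_predicted_mortality(saps2_score: float) -> float:
    """Predicted hospital mortality from the SAPS II score
    (logit from Le Gall et al., JAMA 1993)."""
    logit = -7.7631 + 0.0737 * saps2_score + 0.9971 * math.log(saps2_score + 1)
    return 1.0 / (1.0 + math.exp(-logit))

for score in (20, 40, 60):
    print(f"SAPS II = {score}: predicted mortality ≈ {saps2_predicted_mortality(score):.1%}")
```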
In the study, the mean duration of stay in the ICU was 9.98 ± 12.55 days. An average ICU length of stay of 8.6 days has been reported in 143 patients (17). As a patient's ICU stay lengthens, the risk of infection increases, and length of stay and mortality in turn increase in patients who develop infections (18). However, in this study, we found that the duration of intensive care stay of trauma patients did not affect their 28-day mortality. We believe that complications caused by prolonged hospitalization in multiple trauma did not affect 28-day mortality, even when hospitalization was prolonged, thanks to the prevention of early complications through early enteral feeding, implementation of protocols for preventing ventilator-associated pneumonia, early appropriate antibiotherapy, and appropriate follow-up.
To evaluate the nutritional status of the trauma patients, parenteral nutrition was chosen for patients who did not tolerate the enteral feeding route or had contraindications, and we found that the 28-day mortality rate was higher in parenterally fed patients. In support of this, in the latest European Clinical Nutrition and Metabolism guidelines, the expert committee recommends applying nutrition therapy to all patients who are not expected to start full-dose nutrition within three days and, where possible, to all critically ill patients who are hemodynamically stable and have a functioning gastrointestinal system (19). Sufficiently early (< 24 hours) feeding was recommended.
Serum glucose regulation is as important as the nutritional status of patients (20). Randomized prospective data suggest that early hyperglycemia is associated with excess mortality in critically ill patients, and a prospective randomized controlled study of 1548 patients showed that tight glucose control leads to improved outcomes; it was concluded that intensive insulin therapy reduces mortality and morbidity in patients admitted to the surgical intensive care unit. Another study investigated the relationship of early blood glucose elevation to outcome in a trauma ICU population and concluded that early hyperglycemia, defined as glucose ≥ 200 mg/dL, is associated with significantly higher infection and mortality rates in trauma patients, independent of injury characteristics (21). In our study, the mortality rate of the group with blood glucose levels above 200 mg/dL at ICU admission was shown to be higher.
Hypoalbuminemia is a predictor of increased mortality and morbidity in ICU patients (22). Reviewing 16 years of studies on albumin in the literature, Goldwasser and Feldman evaluated the ten studies with the largest numbers of participants (minimum 609, maximum 17,440 patients) and found that higher mortality paralleled lower serum albumin levels; a reduction in serum albumin concentration of 2.5 g/dL has been reported to increase the probability of death by 24-56% (23). A significant association between low albumin levels and mortality was also shown in our study. In addition, mortality rates were significantly higher in patients with high serum creatinine levels. We interpreted the increase in serum creatinine as trauma-induced acute kidney injury (AKI). AKI is a clinical diagnosis guided by standard criteria based on changes in serum creatinine, urine output, or both; its severity is determined by the magnitude of the increase in serum creatinine or the decrease in urine output (24). The presence of AKI is associated with increased morbidity and mortality (25), and our findings were compatible with the literature. Six patients in this study (4.8%) needed hemodialysis. Post-traumatic AKI might be prevented by resuscitating patients aggressively in the early phase and avoiding prolonged untreated shock; nevertheless, more evidence is required to support this observation.
CONCLUSION
In our study, we aimed to evaluate as many parameters as possible, examining the predictive power of the most frequently used biochemical parameters and various scores on mortality in the trauma patients under follow-up.
In summary, serum glucose, albumin, and creatinine levels and GCS, RTS, and SAPS II scores were shown to correlate with mortality rates. In particular, we thought that RTS could be effective in predicting mortality in patients with both head and chest trauma, since it includes both GCS and respiratory rate. However, we think that further studies are needed on the factors predicting mortality in trauma patients.
Ethical Consideration of the Study
The study started after obtaining approval from the Dumlupınar University Faculty of Medicine Ethics Committee (Approval No: 2018-2/9).
"year": 2021,
"sha1": "1794dc04b853243b3658621c90bb09ada21ce09a",
"oa_license": "CCBYNC",
"oa_url": "https://dergipark.org.tr/en/download/article-file/1482860",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1794dc04b853243b3658621c90bb09ada21ce09a",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
Tissue Expression and Actin Binding of a Novel N-Terminal Utrophin Isoform
Utrophin and dystrophin are two large proteins that link the intracellular actin cytoskeleton to the extracellular matrix via the C-terminal-associated protein complex. Here we describe a novel short N-terminal isoform of utrophin and its protein product in various rat tissues (N-utro, 62 kDa, amino acids 1-539, comprising the actin-binding domain plus the first two spectrin repeats). Using different N-terminal recombinant utrophin fragments, we show that actin binding exhibits pronounced negative cooperativity (affinity constants K1 ≈ 5 × 10⁶ M⁻¹ and K2 ≈ 1 × 10⁵ M⁻¹) and is Ca²⁺-insensitive. Expression of the different fragments in COS7 cells and in myotubes indicates that the actin-binding domain alone binds exclusively to actin filaments. The recombinant N-utro analogue binds to actin in vitro and associates with the membranes in cells. The results indicate that N-utro may be responsible for anchoring the cortical actin cytoskeleton to the membranes in muscle and other tissues.
Introduction
Utrophin and dystrophin are large (395 and 427 kDa, respectively) modular proteins that link the cytoskeletal F-actin filaments to the plasmalemma. The C-terminal portion, including the cysteine-rich domain, associates with over a dozen different scaffold and signaling proteins (utrophin-/dystrophin-associated protein complexes, UAPC and DAPC). These complexes are coupled via the transmembranous beta-dystroglycan to alpha-dystroglycan and to the extracellular matrix (ECM) proteins. At their N-terminus, both proteins contain a homologous actin-binding domain composed of a pair of calponin homology motifs (CH1 and CH2 in tandem) that is connected to the long sequence of spectrin-like triple-helical repeats, 22 in utrophin and 24 in dystrophin. The spectrin-repeat sequences are interspersed by hinges (H1-H5), giving these structural molecules flexibility (Figure 1). Thus, these two cytoskeletal proteins directly link the intracellular cytoskeleton to the ECM in muscle and nonmuscle tissues.
In striated muscle, full-length dystrophin (Dp427) localises to the inner side of the plasma membrane (sarcolemma), with its N-terminus binding to nonmuscle beta- and gamma-actin in the costameres, which physically couple to the Z-discs of the force-generating myofibrils [1]. Dp427 is thought to provide mechanical stability for the muscle fibre and the necessary flexibility of its anchoring to the surrounding ECM during contraction and extension; in fact, it may act as a cellular "shock-absorber" [2,3]. Loss of functional Dp427 causes severe muscle wasting in the fatal Duchenne's muscular dystrophy (DMD). The muscle symptoms can be improved, at least in the dystrophic mdx mouse model, by transgenic expression of Dp427 or by expression of engineered shorter versions of dystrophin comprising the essential constituents for binding to actin and the sarcolemma [4][5][6]. In addition, full-length utrophin (Up395) overexpressed in Fiona mice lacking Dp427 localises along the sarcolemma, resulting in complete recovery of normal mechanical functions, and prevents the occurrence of muscular dystrophy [7,8]. The Fiona mouse line overexpresses Up395 under a human alpha-actin promoter in skeletal muscle, but not in the heart, more than 20-fold compared to control mice. Based on the homologous domain structure (Figure 1) and the high amino acid (aa) sequence similarity, utrophin and dystrophin are expected to serve comparable functions. In fact, in fetal animals utrophin prevails and localises to the sarcolemma like dystrophin in the adult. During postnatal maturation of striated muscle, Up395 disappears from the sarcolemma and becomes restricted to the neuromuscular (NMJ) and myotendinous junctions [9]. Thus, utrophin can functionally replace dystrophin. Nevertheless, there are significant differences between the two during spatiotemporal development, in tissue and subcellular localisation, in isoform complement, and, on a molecular level, in actin-binding properties [4,[10][11][12]. DMD (an X-linked recessive disorder) is the most common lethal disease in childhood, affecting 1 in 3500 boys. In addition, about one-third of DMD patients display mental retardation related to alterations in integrated brain circuits [13]. This points to vital functions of these cytoskeletal proteins other than muscle membrane stability.
The DMD gene localises to human chromosome Xp21 and comprises 79 exons and at least 7 internal promoters [14]. The protein Dp427 is mainly expressed in skeletal and cardiac muscle and to a lesser extent in the nervous system. It derives from three independent promoters (M: muscle; B: brain, P: cerebellar Purkinje cells) consisting of spliced unique first exons that regulate specific expression. In adult skeletal muscle, Dp427 is located at the sarcolemma and in the troughs of the postsynaptic membrane. Its N-terminal actin-binding domain contains aa 1-246 ( Figure 1). Four shorter nonmuscle products harboring the cysteine-rich and C-terminal domains, but lacking the N-terminal actin binding domain, are expressed from downstream promoters and have been named according to their molecular weights, Dp260, Dp140, Dp116, and Dp71 [14,15]. Dp71 has been detected in cardiac muscle and most nonmuscle tissues including brain, retina, kidney, liver, and lung [11,12,16,17].
The gene of utrophin, paralogous to DMD, is located on human chromosome 6q24 containing 75 exons and 6 internal promoters [18,19]. Up395 is named utrophin because of its ubiquitous tissue distribution in comparison to Dp427. Expression of Up395 is driven by two independent promoters UtrnA and UtrnB [20]. The UtrnA protein is the main isoform in adult skeletal muscle and appears in the NMJ at the crests of the postsynaptic membrane folds in association with the nicotinic acetylcholine receptors (AchR). The UtrnB isoform is found enriched in vascular endothelium. Both Up395 isoforms are, however, found in brain structures and in many other tissues as well [11,20]. As for the DMD gene, the internal utrophin promoters give rise to several shorter C-terminal isoforms with preservation of the cysteine-rich and the C-terminal domains, which correspond to the similar short isoforms from dystrophin and thus also lack the N-terminal actin-binding domain [14,19,21]. Up140 corresponds to DP140, Up113 (also called G-utrophin) to Dp116, and Up71 to Dp71.
While no short N-terminal dystrophin isoform is known, we have described the cloning of a transcript from the utrophin locus in rat C6 glioma cells which codes for a short N-terminal utrophin isoform (N-utro). N-Utro aa 1-539 comprises the actin-binding domain (Ch1 and CH2) plus the first two spectrin-like repeats (Figure 1) [22].
By immunoblotting with monoclonal antibodies (mABs) against aa 1-261 (N-terminal actin-binding domain) of utrophin, a 62 kDa fragment was earlier detected in rat C6 glioma cells [23]. This finding suggested the existence of a truncated N-terminal form of utrophin that was confined to the glioma cells. Indeed, its apparent molecular mass in sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) precisely matches that of N-utro. Here we confirm the expression of mRNA for N-utro in all rat tissues examined including skeletal and cardiac muscle, brain, kidney, and liver. The expression of the N-utro protein could be verified in cardiac muscle and kidney.
Recombinant utrophin fragments aa 2-594 (corresponding to N-utro), aa 2-261 (actin-binding domain), aa 262-543 (first two spectrin repeats), aa 1754-2091 (spectrin repeats R14-R16), and the dystrophin fragment aa 1-246 (actin-binding domain) were heterologously expressed in E. coli and in eukaryotic cells (Figure 1, black domains). These fragments were used for the production of polyclonal antibodies (pABs) and for testing actin-binding function. Transfection of COS7 cells and myoblasts allowed us to follow intracellular localisation by immunostaining. As this novel N-terminal fragment (N-utro) adds to the panoply of the short utrophin isoform family, it is important to characterise its genetics and functional potential.
Recombinant Protein Expression.
Three fragments from rat utrophin (UT11, UT12, and UT31) and two from human muscular dystrophin (DYS11 and DYS12) were cloned into pQE vectors (Qiagen) for expression in Escherichia coli M15[pREP4] (Figure 1). This provides an MRGSH6 tag at the N-terminus of the proteins. In order to maintain the correct reading frame, one to two additional amino acids (aa) appeared between the tag and the in-frame proteins (see below). For DYS12, the DNA coding for aa 1-246 was cloned into the blunted BamHI site of pQE-32 after PCR amplification from a human dystrophin minigene with the following sense and antisense primers: 5′-ATGCTTTGGTGGGAAGAAGTA-3′ and 5′-TATTCAATGCTCACTTGTTGAGGC-3′, respectively. The dystrophin minigene in pUC18 [44] was kindly provided by Dr. S. J. Winder. The other four pQE expression plasmids contain DNA fragments obtained by restriction digestion. For the DYS11 plasmid, which codes for aa 1-617, the minigene was digested with NcoI, blunted, and digested with NarI, and the resulting NarI-NcoI fragment was ligated into the NarI-HincII site of the DYS12 plasmid; the UT11 plasmid, coding for aa 2-594, is a blunted EaeI-EaeI fragment of clone α-213 ligated into the blunted BamHI site of pQE-30; for production of the UT12 plasmid, coding for aa 2-261, clone α-213 was digested with BsaHI, blunted, and digested again with StuI, and the resulting StuI-BsaHI fragment was then ligated into the StuI-HincII site of the UT11 plasmid; for the UT31 plasmid, coding for aa 1754-2091, clone α-215 was first digested with BamHI to produce a 2.7 kb fragment that was subcloned into a pBluescript II SK vector (Stratagene Ltd), from which a BamHI-HincII fragment was ligated into the BamHI-HincII site of pBluescript II SK, and from this a BamHI-KpnI fragment was ligated into the BamHI-KpnI site of pQE-31.
All ligation junctions were sequenced to check the correct reading frame. Due to restriction cloning, a variable number of vector-derived aa were attached to the C-terminus of the proteins. The following final expression constructs were obtained (numbers in brackets refer to the rat utrophin or human dystrophin primary structure): UT11 (aa 2-594), UT12 (aa 2-261), UT31 (aa 1754-2091), DYS11 (aa 1-617), and DYS12 (aa 1-246). All five described plasmids could be expressed in Escherichia coli by induction with isopropyl-β-D-1-thiogalactopyranoside (IPTG) and purified by affinity chromatography on Ni-nitrilotriacetic acid agarose (Qiagen). UT11, UT12, and DYS12 were purified under native conditions by elution with 200 mM imidazole (example given in Figure 2). UT31 and DYS11 could only be extracted and purified under denaturing conditions in 8 M urea, with elution from the affinity column at pH 4.5. All protein preparations were tested by SDS-PAGE. Protein concentrations were determined by the Bradford test [24] with BSA as standard and by UV absorption at E280 with extinction coefficients of 87,580 M⁻¹cm⁻¹ for UT11, 39,620 for UT12, 52,040 for DYS12, and 106,700 for DYS11, as derived from GCG software.
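As a small worked example of the A280 concentration determination mentioned above, the Beer-Lambert relation c = A280/(ε·l) can be applied with the extinction coefficients listed; the absorbance reading and path length below are hypothetical values for illustration:

```python
# Beer-Lambert: concentration (M) = A280 / (epsilon * path_length)
EPSILON = {"UT11": 87_580, "UT12": 39_620, "DYS12": 52_040, "DYS11": 106_700}  # M^-1 cm^-1

def concentration_uM(a280: float, fragment: str, path_cm: float = 1.0) -> float:
    """Molar concentration in micromolar from UV absorbance at 280 nm."""
    return a280 / (EPSILON[fragment] * path_cm) * 1e6

# Hypothetical reading: A280 = 0.35 for a UT11 preparation in a 1 cm cuvette.
print(f"{concentration_uM(0.35, 'UT11'):.1f} uM")  # ~4.0 uM
```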
RT-PCR.
1 μg poly(A)+ RNA was mixed with 1 μg pd(N)6 or with 0.86 μg oligo(dT)12-18 primers in water and incubated for 5 min at 70 °C. The reverse transcription was done according to the manufacturer's manual; in brief, the RNA-primer complex was mixed with 200 U M-MLV reverse transcriptase (Promega, USA), 500 μM each dNTP, 50 mM Tris-Cl pH 8.3, 75 mM KCl, 3 mM MgCl₂, and 10 mM DTT in a 25 μL reaction volume. First-strand cDNA was synthesised at 37 °C for 2 h; the reaction was stopped with 45 μL 77 mM EDTA/0.23 M NaOH and heated to 95 °C for 5 min. After mixing with 18 μL 1 M Tris-Cl pH 8.0, the first-strand cDNA was extracted with equal volumes of phenol, phenol/chloroform, and chloroform, precipitated with ethanol/NH₄Ac, and dissolved in 50 μL TE.
Western Blots.
Rat tissues from liver, brain, kidney, heart, skeletal muscle, aorta, and uterus were collected, immediately frozen in liquid nitrogen, and crushed to powder using a mortar. 100 μL tissue lysis buffer per 10 mg tissue was added (62.5 mM Tris-Cl pH 6.8, 5% sucrose, 5 mM EDTA, 2% SDS, and protease inhibitor (Complete Mini, Roche)), and the lysate was homogenised three times for 20 sec with a Polytron (Kinematica) and centrifuged at 20,000 g for 5 min at 4 °C. The supernatant was collected and used for analysis.
Lysates were boiled in 50 mM Tris-Cl pH 6.8, 2% SDS, 5% glycerol, 10 mM DTT and run on 4-15% gradient SDS-PAGE [27]. Blotting of the gel was performed in 7 mM Tris, 87.5 mM glycine, pH 8.3 (without methanol) using a Mini Trans-Blot Cell (BioRad). After staining with Ponceau-S, the membranes were cut into appropriate lanes for incubation with different antibodies and antisera (see legend of Figure 3). After incubation with the primary antibodies and a horseradish-peroxidase-(HRP-)labelled secondary antibody, detection was performed by SuperSignal chemiluminescence (Pierce). The different membrane lanes were precisely rejoined and exposed to Fuji medical X-ray film (Fuji Photo Film). Controls were done by omission of the first antibody, by using preimmune serum (not shown), and by competition with the appropriate antigen, 10 μg/mL (Figure 3). Molecular weight markers spanning the range from 14.4 kDa lysozyme up to 200 kDa myosin were used (BioRad, broad-range markers).

Table 1: Primers used for PCR (for localisation of primers along the cDNA, see Figure 2(a)).
Antibodies and Antisera.
Antibodies used were monoclonal Abs (mAb) NCL-DYS1 (Novocastra Laboratories), anti-actin monoclonal IgM (Amersham Life Science), goat anti-rabbit IgG-HRP (Pierce), and goat anti-mouse IgM-HRP (Pierce). New Zealand white rabbits and guinea pigs were immunised with purified recombinant UT11 and UT31 together with Freund's adjuvant [22]. Prior to immunisation, the 26 N-terminal aa of recombinant UT11 and UT31 were sequenced for confirmation. The recombinant proteins were run on 10% SDS-PAGE and eluted for sequencing after electroblotting onto polyvinylidene difluoride membranes.

Figure 3: Immunoblots of rat tissues for full-length utrophin and N-utrophin (4-10% SDS-PAGE). (a) Adult tissues with protein loads of ∼45 μg (1x), ∼90 μg (2x), and ∼90 μg for C. All lanes stem from one gel blotted onto a nitrocellulose membrane. After staining with Ponceau-S, the membrane was cut for separate immunostaining (1x and 2x together plus lane C separate). Lanes 1x and 2x were incubated with anti-UT11, which recognises full-length (∼395 kDa) and N-utrophin (∼65 kDa). C lanes present competition by preincubation of the serum with recombinant UT11 antigen (10 μL/mL serum) before loading onto the blot. In addition, actin (∼44 kDa) was visualised in brain tissue by mixing anti-actin antibody with anti-UT11. Finally, the differently incubated membrane lanes were precisely rejoined for exposure on X-ray films. Full-length utrophin is seen in all tissues and N-utrophin in kidney and heart. Competition with UT11 antigen removes full-length and N-utrophin. The second, lower band underneath full-length utrophin may represent a degradation product which is, however, only partially removed by antigen competition. (b) The same approach was employed with the gel comprising H, M, and C, except that anti-actin antibody was added to all lanes for loading control and as internal molecular size marker. Protein load was ∼90 μg protein of heart (H) and muscle (M) from young (8 days postnatal) and adult tissues. N-Utrophin can only be seen as a faint band in young heart tissue (arrow). The band of full-length utrophin is greatly reduced in adult muscle (double arrow) and abolished by competition in all samples labelled with C. For comparison, C6 rat glioma cells stained with anti-UT11, displaying full-length and N-utrophin, as well as adult muscle stained with anti-DYS12 (a) and NCL-DYS1 (b) derived from different gels, are included.
Circular Dichroism.
Five mM dithiothreitol was added to the purified recombinant proteins before dialysis into 10 mM Tris-Cl, pH 7.4, plus either 0.5 mM CaCl₂ or 1 mM EGTA plus 1.5 mM CaCl₂, for 40 hours with three buffer changes at 4 °C. Samples were analysed by circular dichroism at 25 °C in a Jasco J-715 spectropolarimeter with a path length of 1 mm. Samples from two different preparations were analysed at a concentration range of 1.5-3.1 μM with scanning from 260 to 190 nm. Data were processed according to [29], and the alpha-helix content was derived by infinite analysis.
Primary cultures of mammalian skeletal muscle cells were initiated from neonatal myogenic cells obtained by trypsinisation of muscle pieces from hind limbs of 1- to 3-day-old neonatal rats. For three days following plating, cells were maintained in growth medium (300 μM Ca²⁺), consisting of HAM F12 (Invitrogen SARL, Cergy Pontoise/France) with 10% heat-inactivated horse serum (Invitrogen SARL), 10% fetal calf serum (Invitrogen SARL), and 1% antibiotics. Myoblasts underwent myogenesis in differentiation medium (1.8 mM Ca²⁺) containing DMEM (Invitrogen SARL) supplemented with 5% heat-inactivated horse serum. After 48 h of culture, this control medium (DMEM + serum) was used to promote the formation of myotubes, which occurs within 15 to 18 h. The fusion-promoting conditions were provided by the presence of a higher calcium concentration (1.8 mM) and horse serum. This medium exchange was used as time zero for the differentiation.
2.10. Transfections. COS7 cells were seeded at 2 × 10⁵ cells per dish (4 cm diameter) and grown overnight. The next day, they were used for transfections. 3 μg plasmid DNA in 75 μL DMEM was mixed with 6 μL Superfect Transfection Reagent (Qiagen), vortexed, and incubated for 7 min at 37 °C. After washing the cells once with PBS, 0.5 mL DMEM mixed with the DNA-Superfect complex was added and the cells were incubated for 3 h at 37 °C/5% CO₂. The cells were washed twice with PBS, 2 mL complete DMEM per dish was added, and the cells were cultivated for 24-48 h. After 48 h of culture, myotube formation was induced, which occurs within 15-18 h. Two days later, the cells were used for immunocytochemistry.
Proliferating myoblasts were transfected with the plasmid cDNA using the Effectene Reagent kit (Qiagen, Courtaboeuf, France). Cells were cultured for 36 hours on glass coverslips (50 × 10⁴ cells) in proliferating medium. Cells were rinsed twice in fresh culture medium, and transfection of 1 μg of plasmid cDNA per 35 mm plastic dish was performed in the presence of 8 μL enhancer for compacting the cDNA and 10 μL of the Effectene cationic lipid. Following a 16-hour incubation, the transfection mixture was replaced with fresh complete proliferating medium.
2.11. Immunocytochemistry. COS7 cells were washed 3 times for 2 min with PBS, fixed in 4% paraformaldehyde, 0.15 M sodium phosphate buffer pH 7.4 for 15 min, washed 3 times for 2 min with PBS, and permeabilised with 0.1% Triton X-100 in 10% normal goat serum (Sera-Tech) in PBS for 10 min. Primary antibodies were applied in PBS, 10% normal goat serum for 1 h at RT. Cells were washed 3 times for 2 min with PBS, and then the secondary antibodies were added in PBS, 10% normal goat serum for 1 h at RT. Cells were washed 3 times for 2 min with PBS, covered with glycerol gelatine (Merck), and viewed with a Carl Zeiss Axioplan2 microscope. Antibodies used were anti-FLAG-M2 mouse monoclonal antibody (Stratagene Ltd.), rabbit anti-UT11, rabbit anti-UT31, goat anti-mouse IgM conjugated to Oregon Green (Molecular Probes Inc.), and goat anti-rabbit conjugated to Cy3 (Jackson Immunoresearch Lab.). F-actin was visualised with rhodamine-phalloidin (Molecular Probes Inc.).
Myoblasts and myotubes were fixed with 4% paraformaldehyde in TBS (20 mM Tris-HCl, pH 7.5, 150 mM NaCl, 2 mM EGTA, 2 mM MgCl₂) for 20 min at room temperature, washed three times with TBS, and incubated with 0.5% Triton X-100/TBS for 10 min to improve permeability to the reagents. After 10 min of exposure to a blocking solution (TBS containing 1% bovine serum albumin; Sigma), fixed cells were incubated for 1 h with an anti-Flag-M2 monoclonal mouse antibody (1:1000). Samples were then exposed for 1 h in the dark to a 1:200 diluted FITC-conjugated goat anti-mouse antibody (Jackson Immunoresearch, West Grove, Pa, USA) together with TRITC-conjugated phalloidin (Sigma) for direct staining of F-actin microfilaments. Samples were mounted using Vectashield mounting medium (Vector, Burlingame, Calif, USA). The immunolabelled samples were examined by confocal laser scanning microscopy (CLSM) using a BioRad MRC 1024 ES (BioRad, Hemel Hempstead, UK) equipped with an argon-krypton gas laser. The TRITC fluorochrome was excited with the 568 nm yellow line, and the emission of the dye was collected via a photomultiplier through a 585 nm long-pass filter. The FITC fluorochrome was excited with the 488 nm blue line, and the emission of the dye was collected via a photomultiplier through a 522 nm band-pass filter. Data were acquired using an inverted microscope (Olympus IX70, Tokyo, Japan) through a ×60 oil immersion lens and processed with the Laser Sharp software, version 3.0 (BioRad). All images were acquired at equal excitation intensities (10% of the laser power) with a variable confocal aperture, a gain of 1500, and a black level of −3.
2.12. Statistics. Data evaluation was done using nonlinear regression analyses with GraphPad Prism version 2 (GraphPad Software). Values are mean ± standard error of the mean. Statistical analysis was performed using ANOVA and unpaired Student's t-test. Significance was accepted at P < 0.05.

mRNA of N-Terminal Utrophin Isoform (N-Utro, aa 1-539) in Different Tissues. We previously described the message and protein of the short isoform N-utro in C6 rat glioma cells [22]. Here, we identify its presence in different rat tissues. 1st strand cDNA prepared from tissue poly(A)+ RNA served as template for specific probing for the occurrence of N-utro mRNA by PCR. The schematic cDNA structures for full-length utrophin (Up395) and N-utro in the region of interest around nucleotide 1803 are given in Figure 2(a). The two sequences are identical down to nt 1802, where the deviation from Up395 begins in N-utro with GTA, immediately followed by the stop codon TGA. The protein sequence of N-utro is thus identical to that of Up395 except for the last residue, which in N-utro is Val instead of Cys in Up395. N-Utro comprises the two calponin homology domains, CH1 and CH2, followed by the first two spectrin-like repeats, and ends with amino acid (aa) residue 539 (Figure 1). The GT motif at the start of the sequence diversion could represent an unused splice donor site, followed by intron material that is completely different from the sequence in Up395 [22]. This allowed the construction of forward and backward primers specific for either the N-utro or the Up395 sequence in order to identify the respective molecular species by PCR (Figure 2(a)). Putative exon-intron boundaries (dashed lines) are given in analogy to their positions in the dystrophin gene.
The 1st strand cDNA was obtained from rat kidney poly(A)+ RNA, primed with either oligo(dT) or pd(N)6. As oligo(dT) priming starts at the 3′ end and might sometimes stop within the sequence, we also employed the pd(N)6 random hexadeoxynucleotide for priming, which starts at random sites along the sequence [30]. On agarose gels, all primer pairs yielded the same distinct bands with both methods in RT-PCR (Figure 2(b)). The AB product is a 110 bp fragment encoded by exon-13, AD comprises exon-13 plus exon-14, and CD only exon-14. All three fragments are shared by Up395 and N-utro. In contrast, the products AF, AG, CF, and CG, comprising exon-13 and/or exon-14 plus different lengths of intron-14, are specific for N-utro. EF and EG derive from the N-utro-specific intron-14. To rule out possible transcription artefacts and to confirm correctly spliced N-utro RNA, genomic rat liver DNA was directly subjected to PCR with the following primer pairs: AF, AG, and EG (Figure 2(c)). Both the AF and AG products now comprise the 1.2 kb intron-13 (see Figure 2(a)), increasing their size with a correspondingly slower electrophoretic migration in the agarose gel (from 0.3 kb to 1.5 kb for AF and from 0.5 kb to 1.7 kb for AG). The size of EG remains unchanged, since it derives entirely from intron-14. Thus, splicing of N-utro relative to genomic DNA does not alter the size of the intron-14-derived fragment.
For detection of the N-utro splice variant, poly(A)+ RNA was isolated from different tissues and pd(N)6-primed for production of 1st strand cDNAs. These were subjected to RT-PCR with primers yielding the products EG (276 bp), specific for N-utro, and HJ (104 bp), specific for full-length Up395 (Figure 2(d)). Both species are seen in all tissues examined: brain, heart, kidney, liver, and skeletal muscle. The message for N-utro (EG) appeared weaker in all tissues than that for Up395 (HJ). Different primer pairs do not, however, allow comparative quantification, since amplification efficiency may vary between primer pairs. On the other hand, the N-utro message (primer pair EG) was clearly fainter in adult skeletal muscle than in muscle from 8-day-old rats or in adult heart. The same holds for the full-length-utrophin Up395-specific primer pair HJ. Quantitative comparison within one primer pair from the same gel under identical conditions is valid.
Utrophin and N-Utro Protein in Different Tissues.
Western blots on 4-15% gradient SDS-PAGE were performed with all tissues in which the mRNAs for Up395 and N-utro had been analysed (Figure 3). The polyclonal antibodies (pABs) raised against the recombinant utrophin fragments and DYS12 proved specific and did not cross-react. Probing with anti-UT11 clearly revealed Up395 in all tissues (Figures 3(a) and 3(b)), including C6 rat glioma cells, which were added for comparison (Figure 3(b)). As expected, Up395 was drastically reduced in adult (double arrowhead) skeletal muscle when compared to muscle from 8-day-old rats (P8). The protein bands migrating at ∼62 kDa, indicated by arrows in heart and kidney, may represent the N-utro protein, as they coincide in position with the N-utro band in C6 cells. Furthermore, the disappearance of the bands of Up395 and N-utro upon competition with an excess of UT11 antigen (columns headed by C) supports the suggestion that the band at ∼62 kDa indeed represents the N-utro protein. Together with the fact that the message for the N-utro isoform is well expressed in these tissues (kidney and heart in Figure 2(d)), the corresponding protein we observed seems to represent N-utro. The higher molecular weight bands between Up395 and N-utro in heart and kidney are only partially outcompeted by UT11 antigen and may thus represent utrophin degradation products comprising the N-terminal portion, or nonspecific cross-reactions by the primary antiserum or the secondary antibodies. For positioning of full-length dystrophin (Dp427) in relation to Up395, adult muscle samples were immunostained with anti-DYS12 (a, in Figure 3(b)) and with the commercial monoclonal NCL-DYS1 antibody directed against aa 1181-1388 in spectrin repeats 8 and 9 of the rod (b, in Figure 3(b)). Actin, stained with a monoclonal IgM antibody in brain (Figure 3(a)), muscle, and heart (Figure 3(b)), was used as loading control and as intrinsic molecular weight marker.
In general, the levels of mRNAs need not necessarily correspond to the amount of protein expression. Nevertheless, it was reported that in human NCI-60 cancer cells, 65% of the genes showed statistically significant transcript-protein correlation [31]. We thus estimated the relative content of Up395 by semiquantitative immunoblot densitometry in the various tissues from 2- to 8-day-old young and 6- to 8-week-old adult rats (Table 2). Staining intensities (±SEM) were all expressed in relation to that of adult liver tissue, which was taken as 100. For calibration, liver and/or adult heart tissue was included in all electrophoretic runs. The results indicate that young and adult liver and brain tissue comprise similar amounts of Up395 protein, while young and adult kidney and heart as well as young skeletal muscle display significantly higher values. In adult muscle, Up395 is down to 10% of that found in young muscle, as expected. The high Up395 content in adult aortic and uterine tissue is given for comparison. The content of N-utro was generally too low for quantitative assessment (Figure 3). (Table 2 footnotes: (*) significantly lower (P < 0.002) than young and adult heart and adult kidney; (**) significantly lower (P < 0.0001) than any other tissue with n of 5 or higher.) The relatively high Up395 content in adult heart and kidney and in young muscle coincides with the brightest transcript bands for Up395 in the agarose gels (HJ product in Figure 2(d)), pointing to a correlation between message and protein expression. An absolute protein content for Up395 of 0.0006% of total protein was determined in adult mouse skeletal muscle [2,32]. This is around 30 times less than dystrophin in adult muscle (∼0.02%) and would allow translating the relative values from Table 2 into approximate protein contents in the different tissues.
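The relative densitometry described above is a simple normalization; the sketch below illustrates it with made-up band intensities (the numbers are placeholders, not the study's measurements):

```python
import statistics

# Hypothetical band intensities (arbitrary units) from replicate blots.
intensities = {"liver": [1.00, 1.10, 0.95],
               "heart": [3.1, 2.8, 3.4],
               "kidney": [2.6, 2.9, 2.4]}

liver_mean = statistics.mean(intensities["liver"])
for tissue, values in intensities.items():
    rel = [100 * v / liver_mean for v in values]   # express relative to adult liver = 100
    mean = statistics.mean(rel)
    sem = statistics.stdev(rel) / len(rel) ** 0.5  # standard error of the mean
    print(f"{tissue}: {mean:.0f} ± {sem:.0f} (liver = 100)")
```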
Actin Binding of N-Terminal Fragments of Utrophin and Dystrophin. Two types of recombinant N-terminal fragments were prepared from utrophin and dystrophin for in vitro actin-binding studies (Figure 1). UT12 (aa 2-261 = 31.6 kDa) and DYS12 (aa 1-246 = 29.9 kDa) comprise the actin-binding domain (CH1 and CH2) alone. Second, UT11 (aa 2-594 = 71.1 kDa) contains in addition the first hinge region followed by two spectrin-like repeats plus 68 aa running into spectrin repeat-3. This fragment is taken as an analogue to the N-terminal utrophin isoform (N-utro) earlier isolated from C6 rat glioma cells, which comprises the two spectrin-like repeats plus 13 aa of repeat-3. As mentioned above, its primary sequence is identical to that of rat utrophin except for the last residue, Val instead of Cys. All three recombinant fragments, bearing a His-tag (MRGSH6GS-) at their N-terminus for affinity purification on a Ni-nitrilotriacetic acid (Ni-NTA) agarose column, could be eluted with 200 mM imidazole under native conditions. The affinity purification from expression in E. coli is shown by SDS-PAGE (Figure 4). The electrophoretic mobility of the purified protein fragments is also revealed by immunoreaction with a mAB against the His-tag. Despite its calculated molecular mass of 71.1 kDa, UT11 persistently migrated at a somewhat lower position (with an apparent molecular mass of ∼62 kDa) in SDS-PAGE. For the cosedimentation assays, SDS-PAGE was performed by loading equal volumes of supernatant and resuspended pellet. Densitometric evaluation was executed after rigorously controlled staining and destaining conditions, yielding a linear relationship from 0.2 to 10 μg of protein under the assumption that all proteins stained with equal intensity. Actin concentration was always kept constant and thus served as internal loading standard, while the potential ligand was varied. BSA (∼67 kDa) and tropomyosin (subunit chain ∼33 kDa) were run in parallel with all series as negative and positive controls of the binding, as well as for control sedimentation of the ligands in the absence of actin. The electrophoretic runs of an example of UT12 binding to actin, together with BSA as a control, are given in Figure 5. While BSA at all concentrations occurs unbound in the supernatant (Figure 5(b)), the bound as well as the unbound portion of UT12 increases with ascending ligand concentration (Figure 5(a)). Traces of probably non-polymerised actin were occasionally seen in the supernatants, never, however, exceeding 2% of total actin.
Binding data and Scatchard plots of all experiments with UT11, UT12, and DYS12 are given in Figure 6. All three protein fragments exhibit saturation binding, which is independent of Ca²⁺ for UT11 and UT12, while DYS12 binding was Ca²⁺-sensitive (inhibition by EGTA). UT11 and UT12 both display a concave curve in the Scatchard plot, indicating a high affinity and a second, lower affinity (Figures 6(b) and 6(d)). The binding parameters derived from Figure 6 are summarised in Table 3. The affinity constant K corresponds to the reciprocal value of the dissociation constant K_D. The higher affinity constant K1 is for both UT fragments around 5-6 × 10⁶ M⁻¹. The second, lower K2 is about 70 times lower for UT11 and about 20 times lower for UT12. Concave Scatchard curves imply either the existence of two types of binding sites on filamentous actin (F-actin) or ligand-induced negative cooperativity. The two parts of the Scatchard curves for UT11 and UT12 are sufficiently distinct to allow separate evaluation. One molecule of UT11 binds with high affinity per 25 actin monomers, while UT12 binds with high affinity per 7 actin monomers (Table 3). This high stoichiometric relation is greatly reduced at full saturation binding involving the two affinities together: one molecule of UT11 then binds per 3.1 actin monomers, and UT12 binds per ∼1.3 actin monomers. The lower binding stoichiometry for UT12 conforms to that reported for the recombinant utrophin fragment (aa 1-261), which corresponds to UT12 [33]. Winder and coworkers [34] have reported an affinity of ∼5 × 10⁴ M⁻¹ for the binding of the N-terminal utrophin fragment (aa 1-261) to muscle F-actin with a stoichiometry of almost 1:2. This affinity is, however, ∼100 times lower than that reported here for K1 and still 5 times lower than that of K2.

Table 3: Actin binding of recombinant protein fragments of utrophin and dystrophin. Parameters are derived from the binding data given in Figure 6.
In contrast to the utrophin fragments, DYS12 (aa 1-246 = actin-binding domain) binding to F-actin in the absence of EGTA displays a sigmoidal saturation curve with a lower slope at the beginning that increases with rising ligand concentration before saturation sets in (Figure 6(e)). Such a sigmoidal binding curve is characteristic of positive cooperativity and yields a typical convex Scatchard plot (Figure 6(f)) with a Hill coefficient of 1.52. The combined binding constant for DYS12 is 2-3 × 10⁵ M⁻¹, and one DYS12 molecule binds per 2 actin monomers. Reported stoichiometries for recombinant N-terminal protein fragments from utrophin (aa 1-261) and dystrophin (aa 1-246) are 1:1 and 2:1, respectively [10,33], which agrees with our results given here. Affinities around 8 × 10⁴ M⁻¹ have been published for several in vitro actin-binding studies with fragments from utrophin and dystrophin [33]. Our K1 values for UT11 and UT12 are definitely higher by more than an order of magnitude (Table 3). On the other hand, the binding affinity of our DYS12, with a His-tag at its N-terminus, is close to the 7.3 × 10⁴ M⁻¹ reported [35] for a corresponding N-terminal dystrophin peptide (aa 1-246) with the His-tag at its C-terminus, and also close to the 5.3 × 10⁴ M⁻¹ published for an untagged N-terminal peptide (aa 1-246) [36]. It may therefore be concluded that an attached His-tag does not grossly affect actin binding.
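To make the curve shapes discussed above concrete, the sketch below fits a Hill model to hypothetical cosedimentation data and computes the Scatchard transform (bound/free versus bound); a Hill coefficient h > 1 gives the convex Scatchard plot described for DYS12, while h < 1 gives a concave one. All data values here are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(free, bmax, k, h):
    """Hill binding model: bound ligand as a function of free ligand."""
    return bmax * free**h / (k**h + free**h)

# Hypothetical cosedimentation data: free ligand (uM) and bound ligand (uM).
free = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
bound = np.array([0.1, 0.45, 1.2, 2.9, 5.0, 6.3, 6.9])

(bmax, k, h), _ = curve_fit(hill, free, bound, p0=[7.0, 2.0, 1.0])
print(f"Bmax = {bmax:.2f} uM, K_0.5 = {k:.2f} uM, Hill coefficient h = {h:.2f}")

# Scatchard transform: bound/free against bound reveals cooperativity
# (rising then falling, i.e. convex, for positive cooperativity).
for b, y in zip(bound, bound / free):
    print(f"bound = {b:5.2f}  bound/free = {y:5.2f}")
```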
The apparent Ca²⁺ sensitivity of DYS12 binding to actin (inhibition by EGTA) concealed a peculiarity we were at first not aware of. In order to evaluate the free Ca²⁺ ion concentration critical for the actin binding of DYS12, cosedimentation assays were performed in a Ca²⁺-EGTA buffer system with 5 mM EGTA and varying amounts of Ca²⁺. Surprisingly, at all free Ca²⁺ ion concentrations up to 1 mM, the binding of DYS12 never exceeded 30% of the maximal level obtained in the absence of EGTA (data not shown). Therefore, in further series of experiments, the EGTA concentration was varied with 0.5 mM CaCl₂ always in excess over EGTA. The results in Figure 6(g) indicate that the actin binding decreases significantly at EGTA concentrations higher than 0.5 mM.
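Setting the free Ca²⁺ concentration in such a Ca²⁺-EGTA buffer amounts to solving a 1:1 binding equilibrium. The sketch below uses the quadratic solution of the mass-balance equations; the apparent dissociation constant (around 10⁻⁷ M near neutral pH) is an assumed textbook-order value, since the true apparent Kd depends strongly on pH and ionic strength:

```python
import math

def free_calcium(ca_total: float, egta_total: float, kd_app: float = 1e-7) -> float:
    """Free [Ca2+] (M) for a 1:1 Ca-EGTA equilibrium.

    Mass balance gives: Ca_free^2 + (EGTA_t - Ca_t + Kd) * Ca_free - Kd * Ca_t = 0.
    """
    b = egta_total - ca_total + kd_app
    c = -kd_app * ca_total
    return (-b + math.sqrt(b * b - 4 * c)) / 2.0

# Example: 5 mM EGTA with varying total Ca2+ (values chosen for illustration).
for ca_mM in (1.0, 3.0, 4.9):
    ca_free = free_calcium(ca_mM * 1e-3, 5e-3)
    print(f"Ca_total = {ca_mM} mM, EGTA = 5 mM -> free Ca2+ ≈ {ca_free:.2e} M")
```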
No recovery of the actin binding was observed after removal of EGTA by dialysis in the presence of 0.5 mM CaCl₂. Actin sedimentation was not affected by the presence of EGTA at any concentration used. EGTA up to 5 mM never had an effect on the actin binding of either UT11 or UT12. For a rough estimation of possible protein structural changes, circular dichroism (CD) measurements were performed on UT12 and DYS12 in the absence of EGTA but with 0.5 mM Ca²⁺, and in the presence of 1.0 mM EGTA plus 1.5 mM Ca²⁺ (spectra not shown). Both fragments exhibited a double minimum under both conditions, typical for proteins with a high alpha-helical content. The results revealed a decrease of alpha-helix content by 16% for UT12 and by 18% for DYS12 in the presence of EGTA with CaCl₂ in excess (Table 4). The reduction of alpha-helix was compensated for by an increase in turns. The decrease in alpha-helix is of similar magnitude in the two proteins and does not explain the selective impairment by EGTA of the actin-binding function of DYS12.
Winder and Kendrick-Jones [37] reported that actin binding of recombinant UTR261 (aa 1-261) was inhibited by calmodulin in the presence of Ca²⁺, but not in its absence. We reproduced this result with the corresponding UT12 (aa 2-261), without and with additional EGTA (Figure 7(b)). Surprisingly, calmodulin plus Ca²⁺, and also calmodulin plus 2 mM Ca²⁺ plus 1 mM EGTA, both of which inhibited binding of UT12, did not affect actin binding of UT11 bearing the two spectrin repeats (Figure 7(a)).
Immunolocalisation of UT11, UT12, and UT21 in COS7 Cells and Myotubes. For evaluation of intracellular localisation, the actin-binding domain alone (EUT12, aa 1-261), the actin-binding domain plus the first two spectrin repeats (EUT11, aa 1-543), and the two first spectrin repeats R1 and R2 alone (EUT21, aa 262-543) were inserted into eukaryotic plasmids for transfection into COS7 cells (Figure 8) as well as into myoblasts (Figure 9). For unambiguous recognition, all three recombinant fragments were fused to the "FLAG" peptide (-DYKDDDDK) at their C-terminus, which can be detected by the anti-FLAG M2 mAB. Actin was visualised in double staining with rhodamine-phalloidin. Transfection efficiency was directly assessed by fluorescence microscopy in a number of experiments in which a GFP-containing second plasmid was included. The transfected cells displayed unaltered shape and growth compared to nontransfected cells in phase-contrast analysis.
COS7 cells are derived from monkey kidney and immortalised by an origin-defective mutant of SV40 [38]. These cells are often used for transfection with recombinant plasmids, and during growth they readily adhere to glass and plastic surfaces. Spread on the substratum, the cells display a fibroblast-like appearance (Figures 8(a), 8(c), and 8(e) at the right). The cytoskeletal actin (probably beta- and gamma-actin) appears in stress-fibre structures and in a densely organised cortical cytoskeleton along the cell surface membrane. In the left vertical row (a-f), anti-FLAG staining reveals the recombinant utrophin proteins. EUT12 (actin-binding domain) follows the same pattern by staining all actin structures outside the cell nucleus (Figures 8(c) and 8(d)). EUT21 (first two spectrin repeats) localises in the cell nucleus and does not associate with actin (Figures 8(e) and 8(f)). Anti-UT31 (against the rod) faintly stains endogenous full-length utrophin (Up395) in the entire cell (d, at the right). EUT11 (actin-binding plus two spectrin domains) is found in and around the cell nucleus but also throughout the cell body and faintly along the membrane (Figures 8(a) and 8(b)). This distribution of the FLAG-labelled EUT11 is underlined by anti-UT11 (b, at the right), which intensely stains EUT11 plus the endogenous utrophin. Taken together, only the isolated actin-binding domain of utrophin (EUT12) clearly associates with stress fibres and submembranous cortical actin structures. The relatively diffuse staining of Up395 with anti-UT11 and anti-UT31 throughout the cells suggests that additional portions of the intact molecule, other than the actin-binding and the first two spectrin domains, must be involved in specific target sorting.
Transfection with the cDNA plasmids encoding the utrophin fragments EUT11, EUT12, and EUT21 fused to the FLAG tag was performed on myoblasts from neonatal rat hind-leg muscle in primary cell cultures. After 48 h of culture, myotube formation was induced, and two days later the fixed cells were immunostained for the utrophin fragments with FITC-conjugated antibody and with TRITC-conjugated phalloidin for F-actin microfilaments. Staining patterns were assessed by confocal laser scanning microscopy (Figure 9). The anti-FLAG-stained utrophin fragments are displayed in green in the left vertical row, while F-actin is stained in red in the right vertical row. In all cases, only one transfected green cell appears (Figures 9(b)-9(e)); evidently, the transfected myoblasts often fuse into myotubes together with nontransfected cells. In the spindle-shaped myoblasts (Figures 9(a) and 9(f)), staining for EUT11 and EUT21 fills the entire cell including the nucleus. The myotubes (Figures 9(b)-9(e)) display a rich filamentous actin network, probably mostly the nonmuscle beta- and gamma-isoforms of the cortical cytoskeleton. Sarcomere structures are not yet visible, except for a beginning in the nontransfected myotube in red at the top of Figure 9(b). EUT12 (actin-binding domain alone) staining precisely follows the actin structures throughout the myotubes and becomes especially dense at the ends where the cells adhere to the substratum (Figures 9(c) and 9(d)). EUT11 staining does not follow the actin filaments but remains diffuse and punctate, sparing the nuclei; its staining pattern suggests that EUT11 may attach to the myotube surface membrane. EUT21 seems almost entirely confined to a row of nuclei in Figure 9(e).
The general staining patterns of the three recombinant fragments from the N-terminal utrophin region in myotubes and in COS7 cells are similar in kind. The isolated actinbinding domain (EUT12) clearly associates with F-actin structures in both cell types, while the other two protein fragments more variably appear in the cytoplasm, along the membranes or in the cell nuclei.
mRNA and N-Utro Protein in Rat Tissues. The occurrence of message and protein of a novel N-terminal short utrophin isoform (N-utro) in different rat tissues is reported here for the first time. In addition, its intracellular sorting as well as its in vitro actin-binding properties were examined. The message for full-length utrophin (Up395) is highest in cardiac muscle, in skeletal muscle of early postnatal animals, and in kidney. It is apparently lower in adult skeletal muscle, brain, and liver (product HJ in Figure 2(d)). Although a quantitative correlation between protein and message can often not be demonstrated, in this case the Up395 content (expressed relative to that in adult liver) was significantly higher in those tissues where the message was also higher (Table 2). This correlation between message and protein is remarkable in view of the notorious difficulties encountered with quantitative immunoblotting involving the transfer of large proteins such as Up395 to nitrocellulose membranes. Correspondingly, the N-utro protein could only be visualised by immunoblots in the tissues with the highest message levels, kidney and cardiac muscle (Figure 3). In the other tissues, its content was too low for detection.
Actin Binding.
Since N-utro is the only N-terminal short isoform known so far to possess the actin-binding domain, extensive binding studies were done with the recombinant analogue UT11 (aa 2-594, actin-binding domain plus two spectrin repeats) in comparison to UT12 (aa 2-261) and DYS12 (aa 1-246), the latter two comprising the actin-binding domain alone (Figures 6 and 7 plus Table 3). Several of the findings presented here shed light on fundamental differences in actin-binding function between these N-terminal fragments of utrophin and dystrophin.
First, the high- and low-binding affinities for UT11 and UT12 greatly affect the stoichiometric relation of ligand to actin. With the high affinity, UT11 binds to every 25th actin and UT12 to every 7th actin. The two additional spectrin repeats in UT11 differentiate the two utrophin fragments from one another; these two motifs must be responsible for the more extended spacing of this ligand as compared to UT12. With saturation binding, the second, lower affinity reduces the spacing of the ligands along the F-actin filament to ∼3 actin monomers per UT11 and close to one actin monomer per UT12. In our experiments, there were no additional proteins present, such as tropomyosin or troponin components, that could impart a defined spacing periodicity. Thus, the high-affinity binding results suggest that UT11 with two spectrin repeats affects the actin filament binding properties over a distance of 25 actin monomers. But on increasing ligand saturation, additional UT11 molecules seem able to associate with actin at lower affinity in between those already bound with high affinity. The overall stoichiometry is thus reduced to three actin monomers per UT11. A similar line of reasoning applies to the shorter UT12 without spectrin repeats. With high affinity, the actin-binding domain alone affects the accessibility of 7 actins in the filament, while on saturation, including low-affinity binding, the stoichiometry drops to ∼1:1. Taken together, our results suggest, under the assumption of indistinguishable binding sites of the individual actin monomers in the filament, that the distinct lower affinity is induced by the binding of the ligands UT11 and UT12 with high affinity. Furthermore, in view of the different spacing induced by UT11 or UT12 binding with high affinity, it seems most unlikely that a second class of defined binding sites with lower intrinsic affinity exists along the actin filament. Several reports on N-terminal utrophin and dystrophin fragments mention similar actin-binding affinities in the range of ∼10⁵ M⁻¹ and similar stoichiometries at binding saturation as given here [33][34][35][39]. However, these published binding affinities may routinely represent the lower values at full saturation, while the higher values at low ligand concentrations of the utrophin fragments may have been overlooked. Actin-binding studies have mostly been performed with skeletal muscle alpha-actin. Yet the cytoskeletal Up395 and Dp427 primarily interact in vivo with cytosolic beta- and gamma-actin, which are ubiquitously expressed. Even in striated muscle, the cortical gamma cytoskeletal actin in the costameres between the sarcolemma and the Z-disks presents the interaction partner for Up395 and Dp427 [40]. Nevertheless, the different actin isoforms share over 93% sequence identity and can replace each other to a large extent [40]. Reassuringly, it was reported that the binding characteristics of utrophin and dystrophin and their fragments to nonmuscle actin are almost identical to those with skeletal muscle actin [8,[33][34][35]. The binding affinities are in some cases slightly higher for cytoplasmic than for skeletal muscle actin.
Second, in stark contrast to the negative cooperativity in the binding of the utrophin N-terminal fragments, DYS12 binds to actin filaments with moderately positive cooperativity (Hill coefficient of 1.5) and a stoichiometry at saturation of one ligand per two actin monomers. Positive cooperativity with a Hill coefficient of 3.5 was also reported for actin binding of the dystrophin fragment (aa 1-246) that corresponds to UT12 [6,36]. Tropomyosin was reported to affect neither affinity nor stoichiometry in the binding of dystrophin in vitro [39,41]. Curiously enough, Up395 and Dp427 both bind to actin by lateral association along the actin filament but do not compete with one another in their binding [2]. Dp427 comprises 5 basic spectrin repeats within the stretch of R11 to R17 in the middle of the rod. This spectrin repeat stretch firmly associates with F-actin and reinforces the binding of the N-terminal domain to actin. The actin interaction of the spectrin repeat region is salt dependent, pointing to its electrostatic nature. The two actin-binding regions in dystrophin are separated by ∼1200 aa. Ervasti has presented a model of dystrophin as a molecular shock absorber during muscle contraction: on stretch, the electrostatic interaction of the positively charged spectrin repeats would slide along the negative surface of the actin filament and thus dampen elastic recoil [2]. This is not the case with Up395. Up395 lacks the basic nature of the corresponding spectrin repeats in the middle of the rod, and, consequently, this stretch does not interact with actin [42]. Instead, the 10 spectrin repeats immediately following the N-terminal actin-binding domain firmly interact with actin and stabilise the binding. This interaction is not salt dependent and thus may be of hydrophobic nature. The N-terminal domain and the following spectrin repeats function as a single contiguous unit [43]. This is compatible with the notion that UT11 with the first two spectrin repeats occupies more actin monomers than UT12. Consequently, it may be speculated that intact Up395, because of its contiguous binding region from the N-terminus through to spectrin repeat R10, does not function as a molecular shock absorber but rather as a stabiliser of cortical actin filaments in costameres and in the postsynaptic membranes of NMJs [2,4].
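As a rough numerical illustration of how the Hill coefficient shapes these binding curves, the short Python sketch below evaluates the fractional saturation θ = [L]^n / (Kd^n + [L]^n) for the Hill coefficients cited above (1.5 for DYS12 and 3.5 for the dystrophin aa 1-246 fragment) and for a coefficient below one, of the kind expected for negatively cooperative binding. The Kd and ligand concentrations are arbitrary illustrative values, not fitted parameters from this study; the ∼10⁵ M⁻¹ affinities mentioned in the text would correspond to a Kd of roughly 10 μM.

import numpy as np

def hill_saturation(ligand, kd, n):
    # Fractional saturation of actin sites: theta = L^n / (Kd^n + L^n)
    return ligand ** n / (kd ** n + ligand ** n)

kd_uM = 10.0                          # illustrative Kd (~1e5 M^-1 affinity)
ligand_uM = np.logspace(-1, 2, 7)     # 0.1 to 100 uM

for label, n in [("negative cooperativity (n = 0.7, utrophin-like)", 0.7),
                 ("DYS12 (n = 1.5)", 1.5),
                 ("dystrophin aa 1-246 (n = 3.5)", 3.5)]:
    theta = hill_saturation(ligand_uM, kd_uM, n)
    print(label + ": " + ", ".join(f"{t:.2f}" for t in theta))
# A higher n gives a steeper, switch-like rise around Kd, whereas n < 1
# gives a shallow curve in which early binding hinders later binding.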
Finally, we assessed potential regulation by Ca²⁺ and calmodulin of fragment binding to actin. The actin binding of UT11 and UT12 proved to be independent of Ca²⁺ and was not affected by EGTA. However, the interaction of DYS12 with actin is virtually abolished in the presence of more than 0.5 mM EGTA, even in the simultaneous presence of 0.5 mM CaCl₂ in excess. It was not possible to restore the binding function of DYS12 by gradually removing EGTA by dialysis. Molecular parameters derived from circular dichroism measurements indicated a reduction of alpha-helix content by around 17% for both fragments in the presence of 5 mM EGTA plus 6 mM CaCl₂; this does not explain the selective abolition of DYS12 actin binding by EGTA. The inhibition of the actin binding of UTR261 (or UT12) by calmodulin in the presence of Ca²⁺ was suggested to be due to its competitive binding to the CH1 domain [37]. Immediately upstream of actin-binding sequence 2 (ABS2), near the C-terminus of CH1, lies a highly hydrophobic stretch of 15 aa that may accommodate calmodulin complexed with Ca²⁺. It has been speculated that this represents an in vivo mode of regulation of the interaction of the utrophin N-terminus with actin. Our results indicate, however, that the two spectrin-like repeats in UT11 are sufficient to prevent such a calmodulin/Ca²⁺ mode of regulation. In vivo, with Up395 running along the actin filament, firmly bound from its N-terminal actin-binding domain through to spectrin repeat R10, it is unlikely that this part of the molecule responds to subtle Ca²⁺-calmodulin regulation. Furthermore, these findings are of particular interest in relation to the novel N-utro isoform.
After spreading on the substratum, the COS7 cells display a fibroblast-like appearance, as revealed by staining of the actin cytoskeleton with rhodamine-phalloidin (Figure 8). Differentiation and myotube formation were induced in the myoblasts by adding 1.8 mM CaCl₂ and horse serum (Figure 9). As soon as the spindle-shaped, diffusely staining myoblasts fuse into myotubes, an intensely stained cytoskeleton develops, with cytoplasmic actin filaments filling the entire tubes. Only in one nontransfected myotube can an early expression of sarcomeric striation be seen (Figure 9(b)). In both cell types, the transfected EUT12 neatly tracks the actin cytoskeleton and accumulates at the edges where the myotubes attach to the substratum. In the myotubes, the actin fibres could serve as a scaffold for the nascent myofibrils, as we have described earlier for the remodeling of rat cardiomyocytes in long-term culture [44]. Recombinant UTR261-GST (glutathione S-transferase) fusion protein microinjected into chick embryo fibroblasts was shown to label stress fibres and focal contacts [34]. The diffuse staining for EUT11 comes as a surprise. Its distribution extends throughout the cells. In the myotube, it leaves the nuclei unstained and probably lines the membranes. Intact Up395 recognised by anti-UT31 against the rod domain (spectrin repeats R14-R16) presents a discrete, diffuse staining throughout the COS7 cells (Figures 8(d) and 8(f)). This indicates that Up395 probably also lines the cell membrane attached to the cortical actin cytoskeleton, but it does not mark any stress fibre-like structures. Yet another feature holds for EUT21, which distinctly marks the nuclei, probably entering them.
Taken together, the staining pattern of EUT12 on actin filaments and of EUT11 on membranes may be interpreted as follows. The actin-binding domain alone (UT12 and EUT12) binds to actin filaments with high affinity in a Ca²⁺-independent manner. The actin-binding domain plus the first two spectrin repeats (UT11 and EUT11) associates with membranes, preventing the N-terminus from binding to actin stress fibre-like structures, though the N-terminus may still bind strongly to the cortical actin network lining the membranes. In other words, the two spectrin repeats, associated with membranes, prevent the N-terminus from going astray by following the actin filaments throughout the cell body. UT11 and EUT11 correspond to the N-utro isoform, whose genomic derivation and protein we have described here. Thus, N-utro (aa 1-539) may function as an ultrashort linker between cortical actin and the membranes. These properties may gain functional significance where N-utro is present in sufficiently high concentration. In molecular terms, N-utro is ∼6 times smaller than full-length Up395 and could represent a significant molecular fraction. Moreover, its distribution in different tissues could be concentrated at specific subcellular structures such as neural synapses, or in epithelial and endothelial cell systems involved in barrier, secretory, and resorptive functions, as in kidney nephrons, the vasculature, or the blood-brain barrier. In all these cases, scaffold and signaling platforms require highly specialised subcellular localisation for interaction with myriads of proteins and other components [11,12,16,17,45].
Conclusions
We describe here a novel type of utrophin isoform that derives from its N-terminus (N-utro). The various C-terminal isoforms of utrophin and dystrophin correspond to each other, except for Dp260, which has no analogue in utrophin. N-utro also presents an exception, as no analogous N-terminal isoform of dystrophin is known so far. N-utro has no relation to the utrophin/dystrophin-associated protein complex. Consequently, the function of N-utro seems to differ from that of the full-length or C-terminal isoforms. Our immunocytochemical results indicate that N-utro could be responsible for anchoring the cortical actin cytoskeleton to the membranes. Possible association of N-utro with additional proteins other than actin needs further exploration. | 2018-04-03T01:49:54.878Z | 2011-11-14T00:00:00.000 | {
"year": 2011,
"sha1": "6e27b219d4caff588cea2f36088b9bc52e32109d",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2011/904547.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "dbd992a5d44bcd757acdb296cf72d5026d55cfaa",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
53720210 | pes2o/s2orc | v3-fos-license | Naturally occurring compounds as pancreatic cancer therapeutics
Naturally occurring small-molecule compounds have long been in the spotlight of pancreatic cancer research as potential therapeutics to prevent cancer progression and sensitize chemoresistant tumors. The hope is that terminal pancreatic cancer patients receiving aggressive chemotherapy can benefit from an increase in treatment efficacy, without added toxicity, through the use of natural compounds. While preclinical studies of a number of natural compounds, such as resveratrol, curcumin, rapalogs and cannabinoids, show promising results, little has translated into clinical practice, though a number of other compounds hold clinical potential. Nevertheless, recent advances in compound formulation may increase the clinical utility of these compounds.
INTRODUCTION
Despite being the 12th most common cause of cancer diagnosis in the United States, pancreatic cancer is the 2nd most common cause of cancer death with a 5-year survival of 8.2% [1]. The high mortality rate in pancreatic cancer patients is attributed to the aggressive nature of the disease and a lack of effective treatment options [2]. While surgery in combination with chemotherapy is the most effective treatment and offers the highest chances of survival, only a minority of patients (15-20%) qualify [3], with the majority of patients receiving combination chemotherapy (gemcitabine, 5-FU (fluorouracil), Abraxane and platinum drugs) or chemoradiation [2]. There is little consensus regarding specific drugs or the sequence of treatment options [2,4] and clinical responses are low due to the high levels of chemoresistance [5].
Pancreatic cancers are characterized by a number of complex genomic alterations that differentiate pancreatic adenocarcinomas from other malignancies. Pancreatic cancer develops through a specific series of mutational events (KRAS > CDKN2A > TP53/SMAD4) that develop gradually and independently [6]. Recent studies suggested a higher impact of allelic losses and chromothripsis than previously anticipated [7,8]. To date, these mutations are believed not to represent suitable targets for therapeutic intervention in pancreatic cancer patients.
Because pancreatic cancer is typically a disease of older patients, treatments are limited by the patient's overall health and continuous aggressive treatments are often not an option in this patient population. Declining health in combination with the aggressive nature of pancreatic tumors and high levels of drug resistance limit clinical options for successful pancreatic cancer treatment and result in rapid disease progression with high mortality shortly after presentation [9].
Natural products and synthetic small-molecule compounds derived from natural chemical structures have long been in the focus of the pancreatic cancer field due to reports suggesting anti-cancer efficacy in a number of different malignancies and a low toxicity profile [10,11]. Additionally, these compounds are often well received, some even being generally regarded as safe (GRAS), and readily taken by most patients, though they often tend to be plagued by low bioavailability. Nevertheless, several natural compounds are currently being explored for their potential in treating patients with pancreatic cancer [11][12][13][14][15][16][17][18][19][20].
TAXANES
Taxanes are microtubule-stabilizing drugs that disrupt the cell cycle and are effective treatments against a range of cancers, including breast, ovarian, prostate, urothelial, and lung cancer [21][22][23][24][25]. The most commonly used taxanes are paclitaxel, discovered in the 1970s and derived from the western yew tree, and docetaxel, discovered in 1981 and derived via esterification of 10-deacetylbaccatin II, which can be found in the European yew tree [26]. Paclitaxel and docetaxel are hydrophobic compounds characterized by a taxane ring core, esterification at the C-13 position with a complex ester group, and an unusual fourth ring at the C-4,5 position, with docetaxel differing from paclitaxel by only two moieties. These slight chemical differences result in different effects on the cell cycle: while paclitaxel inhibits cell-cycle progression at the G2-M phase checkpoint, the treatment effects of docetaxel are most prominent in S phase. Taxanes promote the assembly of microtubules and prevent their depolymerization, thus interfering with a number of normal cellular functions that depend on changes in the microtubule network. Similar to other natural compounds, taxanes have been reported to display anti-tumor effects that are not directly related to microtubule stabilization but result from enhanced phosphorylation of Bcl-2, release of tumor necrosis factor-α (TNF-α) and a decrease in the expression of TNF receptors (Figure 1) [27].
Taxanes have high activity in a wide spectrum of solid tumors (e.g. ovarian, breast, lung, head and neck, gastro-esophageal, bladder, testis, endometrium neoplasms) and are active as single agents or in combination chemotherapy. However, their clinical use is accompanied by significant side effects (neutropenia, mucositis and neuropathy).
In order to increase the tolerability of taxanes and reduce resistance, efforts have concentrated on new taxane formulations (e.g. albumin, nanoparticles, emulsions, liposomes), new taxane analogues and prodrugs. Compounds such as Abraxane and docosahexaenoic acid (DHA)-paclitaxel are examples of new taxanes that have shown higher activity than paclitaxel. Both compounds display significant activity in taxane-resistant and unresponsive cancers while also exhibiting a safer toxicological profile than first-generation products.
Abraxane (nab-Paclitaxel), an albumin-paclitaxel formulation, in combination with gemcitabine was FDA-approved as a first-line treatment for pancreatic cancer based on results obtained from the MPACT phase III trial [12]. The results showed higher overall response rates (23% compared to 9%) and longer median progression-free survival (5.5 months compared to 3.7 months) in patients treated with the combination of nab-Paclitaxel and gemcitabine when compared to gemcitabine alone [28]. This was supported by a trial performed by Goldstein et al. using the combination in a large cohort of pancreatic cancer patients (n = 861). Patients in that study receiving the combination of gemcitabine and nab-Paclitaxel displayed increased survival (8.7 months) when compared to gemcitabine alone (6.6 months) [29]. Abraxane in combination with gemcitabine is currently a routine first-line treatment for patients with pancreatic cancer [12].
RESVERATROL
Resveratrol is a non-flavonoid polyphenol, phytoestrogen and natural stilbene found in red wine, blueberries, cranberries and peanuts. It is known for its anti-inflammatory and antioxidant properties and has been consumed by a large part of the population in over-the-counter dietary supplements with few reports of safety issues. Studies performed in recent years have also documented resveratrol as a potential anti-cancer therapeutic [30]. Resveratrol disrupts all stages of cancer development by preventing tumor initiation (antioxidant and antimutagen effects), reducing tumor promotion (anti-inflammatory effects as well as cyclooxygenase and hydroperoxidase inhibition), inhibiting tumor growth and reducing metastatic potential (Figure 1) [30].
Resveratrol has been shown to impact a wide variety of signaling pathways (Figure 1), most of which are dependent on the microenvironmental context, such as the insulin-like growth factor system [31], Wnt signaling [32], Notch-1 signaling [33], STAT3 [33], the Akt/mTOR pathway [34] and Sirt1/AMPK [35]. An in vitro study conducted by Zou et al. found that resveratrol downregulated the expression of β-catenin, essential in the canonical Wnt signaling pathway [32]. Zhang et al. identified the Notch-1 signaling pathway as a resveratrol target in cultured vascular smooth muscle cells, with evidence of declining total and cytoplasmic levels [33]. Another study identified the STAT-1 pathway, in addition to Notch-1 and Wnt signaling, with all three signaling pathways inhibited by resveratrol in cervical cancer cells [33]. Yet another study, using T-cell leukemia cells, found that resveratrol induced apoptosis by inhibiting Akt/mTOR pathways and simultaneously upregulating p38-MAPK [34]. Still other research indicates the Sirt1/AMPK pathway as a resveratrol target [35]. Due to the many different pathways impacted and the wide range of potential interactions involved in resveratrol's therapeutic properties, derived from ~10,000 publications on the molecule, understanding of the precise therapeutically relevant mechanisms at work is limited [30].
Several studies have shown resveratrol to be an effective anti-cancer agent in models of pancreatic cancer [13,36,37] and that this effect may be mediated through leukotriene B₄ inhibition and activation of FOXO transcription factors [38].
In a study examining the efficacy of resveratrol in combination with gemcitabine, the combination was found to inhibit tumor growth, suggesting that resveratrol can improve chemotherapy outcomes without adding to chemotoxicity [39]. Indeed, toxicity reports of resveratrol suggest that the compound is tolerated at up to 1 g or even 5 g administered daily in humans [40]. Cell culture investigations suggest efficacy at 10-50 μM [41], and anticancer properties have been observed at concentrations as low as 5 μM [42]. A consideration in any investigation of resveratrol bioavailability is whether to measure resveratrol alone or resveratrol and its metabolites, which generally measure much higher in circulating plasma. It is possible that the benefits derived from resveratrol are as much a result of its metabolites as of the compound itself in vivo [42].
Resveratrol is rapidly absorbed but demonstrates a low bioavailability profile with high inter-individual diversity, meaning that people metabolize resveratrol differently, leading to notable differences in bioavailability between individuals. The greatest bioavailability numbers indicate a Cmax of approximately 4 μM when standard dosing (5 g) is used [40]. Though bioavailability can be increased by repeat dosing, a half-life of 2-5 h remains a problem for routine clinical use and rational combinations with standard-of-care agents [40]. With such a short half-life, even with repeat dosing the availability of resveratrol falls more quickly over time than would be optimal for clinical use. Jupiter Orphan Therapeutics is a company working on a clinically useful formulation of resveratrol; they observe a >8-fold increase in peak plasma resveratrol concentration in rats with their formulated drug compared to an equivalent unformulated resveratrol dose (unpublished data; personal communication), creating a situation in which smaller, yet therapeutically relevant, doses can be achieved. However, the clinical trials showing increased bioavailability in humans remain to be performed. This company has an open IND for phase I trials in healthy volunteers, so more bioavailability data may be available in the near future.
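A back-of-the-envelope calculation makes clear why a 2-5 h half-life limits what repeat dosing can achieve. The Python sketch below computes the standard steady-state accumulation ratio R = 1 / (1 − 2^(−τ/t½)) for first-order elimination; the half-lives come from the text above, while the dosing intervals are illustrative assumptions, not a recommended regimen.

def accumulation_ratio(tau_h, half_life_h):
    # Steady-state peak accumulation for first-order elimination:
    # R = 1 / (1 - 2^(-tau / t_half))
    return 1.0 / (1.0 - 2.0 ** (-tau_h / half_life_h))

for half_life_h in (2.0, 5.0):        # resveratrol half-life range from the text
    for tau_h in (6.0, 12.0, 24.0):   # assumed dosing intervals (illustrative)
        r = accumulation_ratio(tau_h, half_life_h)
        print(f"t1/2 = {half_life_h} h, dose every {tau_h} h: R = {r:.2f}")
# Even 4-times-daily dosing with a 5 h half-life yields R of about 1.77,
# i.e. less than a doubling of peak levels; repeat dosing cannot make up
# for rapid clearance, which is why reformulation efforts focus on raising Cmax.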
CURCUMIN
Li et al. showed that curcumin treatment downregulates NF-κB binding and IκB kinase activity in pancreatic cancer cell lines. This shift was associated with a time-dependent decrease in cancer cell proliferation and increased apoptosis [50], which was further supported by Zhao et al., who reported that this effect is associated with an upregulation of FOXO1 expression [48]. Additionally, Ning et al. identified curcumin as a potential therapeutic for use against pancreatic cancer stem cells [47].
Yoshida et al. showed that curcumin sensitizes pancreatic cancer cells to gemcitabine in a study using gemcitabine-resistant pancreatic cancer cells, and that the combination inhibits the growth of gemcitabine-resistant pancreatic cancer xenografts [51].
Few clinical trials using curcumin (Table 1), either alone or in addition to other drugs, have been conducted to date. A small trial in 21 patients who were not responding to gemcitabine alone administered gemcitabine paired with 8 g of daily oral curcumin and showed that curcumin was well tolerated and increased mean survival (161 days, with 19% of patients surviving after one year) when compared to continuation of gemcitabine alone, which averaged 10 weeks of survival in patients who were not responding well [52]. Studies using FOLFIRINOX in the treatment of pancreatic cancer indicate that while FOLFIRINOX displays better tumor control, the gemcitabine-curcumin combination is better tolerated [12]. However, Dhillon et al. showed poor bioavailability in a cohort of 25 patients, with only two patients displaying clinically relevant biological activity following the daily 8 g oral curcumin administration [53,54]. The development of Theracurmin, a highly bioavailable form of curcumin shown to produce a 40-fold increase in maximal blood concentration in rats and a 27-fold increase in humans, has increased bioavailability to clinically relevant levels [55,56] (Table 1). A phase I clinical study investigating the safety of Theracurmin in cancer patients reported adverse effects associated with disease progression and not Theracurmin treatment. Though the results could not confirm a corresponding decrease in NF-κB activity or cytokine levels, this study documented the safety of Theracurmin [57]. A later clinical trial reported a number of adverse effects, with several patients reporting abdominal fullness and pain and showing signs of dilated colons, indicating that high bioavailability of curcumin may increase its toxicity profile. Despite a median estimated survival time of 4.4 months in the 14 clinical trial patients, three survived for more than twelve months following treatment [55,58].
RAPALOGS
The KRAS proto-oncogene is mutated in 90% of pancreatic cancers, leading to a constitutively active pathway resulting in rapid proliferation and increased survival [3]. KRAS mutant tumors display aberrant activation of a number of downstream signaling pathways, including phosphatidylinositol 3-kinase (PI3K) and AKT, linking KRAS mutation to activation of mammalian target of rapamycin (mTOR) [59,60]. The mTOR pathway is a key player in many biological processes including cell growth, regulation of actin cytoskeleton, transcription, translation, cell survival and proliferation ( Figure 1) [59,61], and inhibition results in reduced protein synthesis and cell growth [61]. A number of mTOR inhibitors have been clinically investigated [14,59]. Preclinical studies in pancreatic cancer cell lines displayed diverse effects of mTOR inhibitors on cell cycle progression, autophagy, reduced inflammation and inhibition of epithelial-to-mesenchymal transition [62][63][64][65]. However, rapid development of treatment resistance was observed in response to treatment with rapamycin through AKT phosphorylation and activation of a negative feedback loop [59]. Similar results have been observed in vivo, where mTOR inhibition resulted in reduced tumor growth and delayed progression in murine models of pancreatic cancer [60,62,66].
Clinically, rapalog monotherapy has not shown any treatment efficacy in pancreatic cancer patients, although the treatment was well tolerated [14,67]. A study evaluating the combination of capecitabine and everolimus demonstrated a survival benefit (12.4 months) when compared to capecitabine alone (5.9 months) [68]. However, a lack of patient stratification based on biomarkers established in preclinical studies, such as a loss of or low PTEN expression and hyperphosphorylation of AKT, limits the utility of these trials for the evaluation of treatment efficacy [69,70].
CANNABINOIDS
Cannabis originated in Central Asia but is now grown worldwide. The cannabis plant produces a resin containing psychoactive terpenophenolic compounds called cannabinoids, with the highest concentration found in the female flowers of the plant. The FDA has not approved the use of cannabis as a treatment for any medical condition, and clinical trials evaluating the benefit for patients with cancer are limited. Commercially available cannabinoids, such as dronabinol and nabilone, are approved drugs for the symptomatic treatment of cancer-related side effects. Although cannabinoids have been shown to reduce proliferation and induce apoptosis [15] in a number of tumors, including pancreatic ductal adenocarcinoma, they are mainly used as supportive therapy to reduce pain, improve sleep and improve the nutritional state of pancreatic cancer patients (Figure 1) [71].
Cannabinoids have been shown to reduce chemotherapy-induced neuropathy in animal models exposed to paclitaxel, vincristine, or cisplatin [72]. Cannabinoids reduce tumor-associated and treatment-associated pain symptoms through supraspinal, spinal, and peripheral modes of action, acting on both ascending and descending pain pathways [72,73]. The CB1 receptor is found both in the central nervous system (CNS) and in peripheral nerve terminals, with high receptor concentrations in brain regions regulating nociceptive processing [72,73]. CB2 receptors affect mast cell receptors and keratinocytes to reduce the release of inflammatory signals and increase endogenous opioid release [72,73].
OTHER COMPOUNDS
Taxanes, resveratrol and curcumin are the furthest developed examples of natural compounds for the treatment of pancreatic cancer. Inositol hexaphosphate (IP6) is a polyphosphorylated carbohydrate found in high-fiber foods. It has been investigated as a potential anti-cancer therapy in several different cancer types, including melanoma [73], colon cancer [72] and bladder cancer [74]. Preliminary studies have shown IP6 to be effective in decreasing pancreatic cancer cell proliferation and increasing apoptosis in vitro (Figure 1) [16] and described a potential therapeutic synergy between IP6 and catechin, a natural compound found in green tea [74]. In vivo studies on the potential benefit of IP6 in pancreatic cancer have yet to be conducted.
(-)-Epigallocatechin-3-gallate (EGCG) is the most potent catechin found in green tea. In vitro studies have shown that EGCG inhibits cell cycle progression and induces apoptosis in pancreatic cancer cells, specifically in combination with the chemotherapeutic bleomycin (Figure 1) [75]. Additionally, EGCG inhibits tumor growth, angiogenesis, and metastasis in pancreatic cancer xenografts [76].
Leiodermatolide is a polyketide macrolide found in a deep-sea sponge. It is known for its antimitotic properties and is thought to act through a novel mechanism compared to compounds with similar effects, though the particulars of this mechanism have not yet been identified. Currently used compounds such as vinca alkaloids and taxanes induce cell cycle arrest by affecting the microtubules required for spindle formation and chromosome segregation. Preliminary data suggest that leiodermatolide is a potentially potent inhibitor of pancreatic cancer [18].
Quercetin, a flavonoid polyphenol closely related to resveratrol that is found in many fruits, vegetables and grains, has also shown promising results in vitro and in vivo, though it has not been studied as extensively as resveratrol. Studies have shown that quercetin sensitizes cancer cells to tumor necrosis factor-related apoptosis-inducing ligand (TRAIL)-induced apoptosis, causes apoptosis and reduces tumor proliferation in vivo (Figure 1) [20,77].
CLINICAL IMPACT
Natural compounds play a major role as antiproliferative agents in pancreatic cancer therapy. While taxanes have successfully transitioned into clinical use and are now part of the clinical routine in pancreatic cancer treatment, this class of compounds remains one of the few examples to achieve this transition. Despite promising preclinical results using a number of natural compounds, little has translated into the clinical routine of pancreatic cancer treatment, although a large number of clinical trials have been performed on various compounds. This can be attributed to unspecified mechanisms of action, low bioavailability and difficulty ensuring patients' compliance with the dosing regimen. This is further emphasized by the examples of curcumin and Theracurmin (Table 1), where clinical trials performed at different centers describe vastly different study outcomes. Similar results can be observed for the majority of clinical trials involving natural compounds. While different dosing regimens and daily doses are partially responsible for these differences, the large numbers of target genes (Figure 1) and yet-to-be-elucidated mechanisms of action further complicate the development of consistent clinical trial protocols. Nutraceuticals, specifically resveratrol and curcumin, are highly accessible to study participants as over-the-counter dietary supplements. Patients can consume large amounts of these compounds outside of the prescribed dosing regimen, which is particularly problematic for the analysis of randomized clinical trials aiming to evaluate treatment effects and toxicity in combination with standard-of-care.
For natural compounds to become clinically relevant to pancreatic cancer treatment, these pitfalls need to be addressed. Taxanes are the only natural compounds currently approved for clinical use in pancreatic cancer, though resveratrol and curcumin may be suggested as supplemental supportive care or taken by patients of their own accord.
Continuous advances in medicinal chemistry and drug formulation will enable the improvement of natural compound-based anti-cancer drugs and facilitate the transition of these compounds into the clinic.
CONCLUSIONS
The wide variety of mechanisms of action associated with natural compounds is problematic in terms of isolating and confirming specific cellular targets and their impact on tumor cell survival. Though recent studies have identified some mechanisms of action, we are far from understanding the full spectrum of effects that natural therapeutics have on normal and cancer cells. With recent drug development efforts aiming to increase the bioavailability of natural compounds, such as resveratrol and curcumin, the clinical use of these compounds becomes feasible, allowing the development of rational combinations with established chemotherapeutic agents.
The combination of natural products and standard of care chemotherapy has the potential to increase quality of life and lifespan in pancreatic cancer patients, even though a number of hurdles need to be overcome for routine clinical use.
ACKNOWLEDGMENTS
SPB receives support from the National Institutes of Health (R01NS092671 and R01MH110441). SPB and IL are supported by the University of Miami Sylvester Comprehensive Cancer Center Molecular Therapeutics Shared Resource (MTSR) and the Jay Weiss Institute for Health Equity.
CONFLICTS OF INTEREST
SPB is a founder and shareholder of Jupiter Orphan Therapeutics, a University of Miami spinout company developing resveratrol for mucopolysaccharidosis I and Friedreich's Ataxia, but not for cancer. Otherwise the authors have no conflicts of interest to report. | 2018-12-02T16:19:45.290Z | 2018-10-23T00:00:00.000 | {
"year": 2018,
"sha1": "110d72f1f503440c35940f3eeeba3001c0caabca",
"oa_license": "CCBY",
"oa_url": "https://www.oncotarget.com/article/26234/pdf/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "110d72f1f503440c35940f3eeeba3001c0caabca",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245645643 | pes2o/s2orc | v3-fos-license | Social Support and Personal Resilience of Government Employees
The study primarily aims to determine which domain of social support significantly influences the personal resilience of government employees. The study utilized a descriptive correlational design and employed a survey method to attain the research objectives. The respondents were 210 government employees selected through a stratified random sampling technique. The researchers made use of pilot-tested and enhanced adapted questionnaires. The statistical tools used were the mean, Pearson r, and regression. As reflected in the results, social support and personal resilience of government employees obtained different mean scores, but both belong to high descriptive levels. Further, it was found that there is a significant relationship between social support and the personal resilience of government employees. Furthermore, social support was found to significantly influence personal resilience, with three out of four domains contributing to this influence. Lastly, emotional support emerged as the best predictor in this study.
Keywords: public administration, social support, personal resilience, emotional support, Philippines
Introduction
Individual resilience involves behaviors, thoughts, and actions that promote personal well-being and mental health. People can develop the ability to withstand, adapt to, and recover from stress and adversity and maintain or return to a state of mental health well-being by using effective coping strategies. People develop resilience by learning better skills and strategies for managing stress and better ways of thinking about life's challenges.
To be resilient, one must tap personal strengths and the support of family, friends, neighbors, and/or faith communities (SAMHSA, 2020). The COVID-19 pandemic has had a serious impact on individuals throughout the world (Brooks et al., 2020; Xiao et al., 2020), but its impact is greater on adolescents than on adults because they are more vulnerable to the negative effects of stress (Chassin et al., 2003). Further, its associated social and economic stressors undermine adolescents' development and well-being (Bartlett & Vivrette, 2020), including physical, psychosocial, cognitive, and mental health, and family relationships (Cluver et al., 2020). Since people respond differently to stress and adversity like that from the COVID-19 pandemic, resilience can help them get through and overcome hardship (Center on Developing Child, 2020). Studies suggest that exposure to and experience with challenges or adversity are important for developing resilience processes and growing youths' capacity and skills for handling stressful experiences (Masten, 2015), and developing resilience in adolescents will help them meet the challenges and responsibilities of adulthood (Werner, 1995) and contribute to nation building. Theoretical perspectives and empirical research suggest that social support has, to some degree, to do with resilience. Social support acts as a shield against the negative effects of stressful life events and, thus, against negative effects on mental health (Colarossi & Eccles, 2003; Jackson, 1992). Hence, the level and degree of social support affects the level of resilience as a protective factor in individuals (Cicchetti & Toth, 1998). On the other hand, spirituality promotes healthy development in adolescents, enhances the ability to cope, and leads to positive outcomes in mental health and psychological well-being (Kim & Esquivel, 2011). Considering the above context, the researcher decided to propose a study with social support as the independent variable and personal resilience during a pandemic as the dependent variable. No study has looked at the effects of a pandemic on adolescent health, and little research has been done on the characteristics of vulnerable groups and the factors that promote resilience (Rome, Dinardo, & Issac, 2020; Tso et al., 2020). Further, most current measures of resilience have a limited focus, only addressing individual characteristics (Connor & Davidson, 2003; Jew, Green, & Kroger, 1999; Wagnild & Young, 1993); thus it is vital to examine resilience more broadly. Hence, this study will respond to the prevailing issue of limited studies on adolescents' personal resilience, especially during a pandemic. Also, this will be the first study on adolescents' personal resilience during a pandemic.
Research Objectives
This study aims to determine the domains of social support that significantly influence the personal resilience of government employees. Specifically, it dealt with the following objectives: to assess the level of social support in terms of emotional support, instrumental support, need for support and support seeking; to ascertain the level of personal resilience in terms of novelty seeking, emotional regulation and positive future orientation; to determine the significant relationship between social support and personal resilience of government employees; and, lastly, to determine the domain of social support that significantly influences the personal resilience of government employees.
Hypothesis
The following hypotheses were developed based on the above objectives: there is no significant relationship between social support and personal resilience, and social support does not significantly influence personal resilience.
Literature Review

Social Support
The social support theory proposes two major models, the main effect model and the buffering effect model, to explain the association between social support and well-being (Armstrong, Bernie-Lefcovitch & Ungar, 2005). The first, known as the main effect model of social support, is defined through social integration and has a generally positive context and beneficial effect on well-being regardless of whether or not there is an actual stressful experience (Dumont & Provost, 1999). Secondly, the buffering model hypothesizes that social support protects individuals against the negative effects of stressful events (Helsen, Vollebergh & Meeus, 2000; Rowlinson & Felner, 1988). Based on the theory that not all social support is the same, Colarossi and Eccles (2003) examined the effects of parent, teacher, and peer social support on the mental health of 217 adolescents. Support received from friends and teachers significantly and positively affected self-esteem. An important part of adolescence is identity formation and, in many ways, non-family sources are important for self-concept and a sense of worthiness outside parental support. Of all supports examined by Colarossi and Eccles, parental support was found to have the largest effect on levels of depression. It is suggested that this is due to the longer-term nature of the parent-child relationship and its effect on depressive symptoms (Colarossi & Eccles, 2003). It has been found that parental support may have a cumulative effect over time because of the relatively stable and long-standing parent-child relationship, which has notable effects on levels of depression (Garnefski & Diekstra, 1996). An individual's perception of support affects mental health outcomes by increasing beliefs of acceptance, self-worth and connectedness to others (Colarossi & Eccles, 2003). In one study of social support, 297 adolescents were classified into 3 groups: well-adjusted, resilient, and vulnerable, based on crossing scores of depressive symptoms and frequency of daily hassles (Dumont & Provost, 1999). It was evident from the results of this study that resilient adolescents were better able to solve problems than those in the other groups. However, an important finding from this research was that social support did not significantly differentiate the groups of adolescents. The authors acknowledged that this was a very surprising result considering that the literature (Colarossi & Eccles, 2003; Horton & Wallander, 2001) has placed so much emphasis on the buffering effects of social support on mental health.
Emotional Support is the first indicator of social support. Social support is often further broken into different types, for instance instrumental support and emotional support, as people often have preferences for different types of aid depending on the circumstances (Reblin & Uchino, 2008). Sarafino (1998) points out that emotional support is the feeling of affection, friendship, care, attention, love and confidence that others demonstrate to the individual, and his or her sense of comfort and belonging. Cutrona & Russell (1990) defined emotional support as the need for help and security in stressful times, resulting in an individual's sense of being cared for by others. The concept of emotional support includes a wide range of behaviors such as empathy, confrontation, compassionate participation, caring, encouragement of others, love that appears as caring and attention, a feeling of being valued, and dependable bonds of friendship (Gregory, et al., 1996; Campbell & Wright, 2002). Gregory, Sarason, and Sarason (1996) note that potential emotional support providers include family, friends, co-workers, colleagues, and experts such as counselors and clergy. The emotional characteristics of the personal life of the human being include communication, attention, moral guidance and trust, thus providing an opportunity to vent emotions (Cohen, 2004; Cohen & Wills, 1985). Receiving emotional support helps individuals cope with the problems, anxiety, disappointments and pain in their lives; if these are left unchecked or untreated, they will have serious negative effects on the physical, psychological and emotional health of the individual (Burleson, 1990). Adolescents, in particular, require emotional support to ensure a good level of psychological growth, good human interaction and close personal relationships, such as with friends, family, or emotional partners (Burleson, 2003). The importance of emotional support emanates from the fact that individuals who receive more emotional support, or realize that emotional support is available, are happier, healthier, and better able to cope with the problems and troubles of life (Cutrona & Russell, 1990; Pierce, Sarason & Sarason, 1990). Instrumental support is the second indicator of social support. Instrumental support, the degree to which an individual receives assistance in the completion of daily life tasks, is an important but often neglected component of social support (Malecki & Demaray, 2003). Instrumental social support refers to overt behaviors that directly facilitate adolescents' involvement (Heaney & Israel, 2002). Instrumental support can be conceptualized as the provision of resources, which can entail financial support, material resources, or support in performing tasks like child care or household chores (Cohen & Wills, 1985). Instrumental, nonfinancial support can come in the form of child care, respite care, transportation, home modifications, training, crisis intervention, faith-based services, and assistance with the transition to adult group homes (Johnson & Kastner, 2005). Instrumental support includes feelings of warmth and closeness with parents (Russek and Schwartz, 1997) and parental academic involvement (Westerlund et al., 2013). Additionally, support from parents provided to adult children has been shown to be greater for children who are considered by their parents to have more problems (Fingerman, Miller, Birditt, & Zarit, 2009).
Personal Resilience
Research into resilience encompasses many areas, including individuals' abilities to recover to normal functioning during different stages of development after adversity (Alvord & Grados, 2005). This was demonstrated through a study of Romanian children who had experienced severe deprivation during infancy and were later assessed to show significant improvements both physically and cognitively after being adopted into nurturing homes (Rutter & The English and Romanian Adoptees (ERA) Study Team, 1998). The study examined a sample of 111 Romanian orphans who came to the United Kingdom (U.K.) for adoption before the age of 2 years. The extent of developmental deficit was assessed at the time of entry to the U.K., and most children were severely developmentally impaired. Further physical and cognitive assessments were carried out on the children at 4 years of age to examine the developmental catch-up. For those children adopted before 6 months of age, catch-up in both physical growth and cognitive levels was almost complete. The developmental catch-up was also very impressive, though not complete, for children adopted after 6 months of age (Rutter et al., 1998). Masten (2001) claims such a recovery-to-normal trajectory of development is evidence of resilience. Masten (2001) suggested that resilience stems from the healthy operation of basic human adaptational systems, although Miller (2002) acknowledges that consensus has not been reached in defining or describing what is meant by the term resilience. The outcomes or consequences of resilience that have been recognized are effective coping, mastery, and positive adaptation (Earvolino-Ramirez, 2007). Another phenomenon of resilience is the ability of some individuals to actively create experiences that foster competence (Armstrong, et al., 2005). In his study of students with disabilities, Miller (2002) aimed to identify several elements of resilience. A predominant difference between the resilient and non-resilient students was the ability of resilient students to identify their experiences of success and, more importantly, to take the deliberate steps that were necessary to attain success (Miller, 2002). Resilience is therefore linked to self-efficacy in that both require the process of becoming aware of one's strengths (Lightsey, 2006). From a developmental perspective, a common theme in theoretical frameworks for adolescent resilience is the consideration of the individual's developmental level and functioning, the multiple levels of influence on developmental pathways, and the connection between the risk and protective factors and the individual's adjustment (Armstrong, Bernie-Lefcovitch & Ungar, 2005). Resilience in adolescence occurs through normal adaptive processes, including cognitive development, behaviour regulation and interactions with the environment (Masten, 2001). Novelty Seeking is the first indicator of resilience. It measures an emotional drive to activate behavior out of curiosity, to explore and to enjoy what is new and complex (Eley, 2006). Novelty Seeking influences choice preference and enhances exploration during decision making (Wittmann, et al., 2008). Some personality traits, including novelty seeking, are good predictors of vulnerability to stress-related mood disorders (Duclot & Kabbaj, 2003). Several clinical reports indicate that personality traits, including novelty seeking, can be used to predict further vulnerability to mood disorders (Josefsson et al., 2011; Black et al., 2012; Wu et al., 2012).
Recent evidence suggests that early exposure to mild stress promotes the development of novelty seeking behavior (Parker, et al., 2007). In many circumstances, humans and other animals are naturally inquisitive and have a characteristic tendency to explore novel and unfamiliar stimuli and environments (Wittmann, et al., 2008). Cloninger proposed Novelty Seeking (NS) as a personality trait that refers to the tendency to be intensely exhilarated or excited in response to novel stimuli (Cloninger, 1991). Human neuroimaging studies have reported that NS is associated with the activation elicited by emotional stimuli in the medial frontal gyrus (Bermpohl et al.), and with other personality traits such as harm avoidance (Naghavi, 2009). Emotional regulation is the second indicator of resilience. It has been defined as "all the extrinsic and intrinsic processes responsible for monitoring, evaluating and modifying emotional reactions, especially their intensive and temporal features, to accomplish one's goals" (Thompson, 1994, pp. 27-28). All strategies used to reduce, increase, or maintain positive or negative emotions are referred to as emotion regulation. Furthermore, being able to regulate emotions is associated with high levels of resilience. Artuch-Garde et al. (2017) showed in their cross-sectional research that the ability to self-regulate behavior is associated with high levels of resilience in high-school students. Emotion research has demonstrated the importance of emotion regulation in adaptation, cognition, well-being, attention, and social interaction. Positive future orientation is the third indicator of resilience. Wyman et al. (1993) found that children with high future expectations had less anxiety/depression, more self-reported competence, higher reading achievement scores and were rated by teachers as more engaged and better adjusted socio-emotionally. Students with higher levels of ambition and optimism who expressed a desire to obtain employment that enabled them to get ahead earned more money as adults than teenagers with less ambition and optimism (Ashby & Schoon, 2010). In a study examining psychosocial resilience in rural adolescents, teens who had more positive expectations for their future were less negatively impacted when adverse events occurred, and they displayed more active perseverance than those who expected worse outcomes (Tusaie, Puskar & Sereika, 2007).
The study employed a quantitative, non-experimental research design using the descriptive correlational technique, which is the most commonly employed approach for determining whether the independent and dependent variables have a significant relationship using statistical data. Non-experimental quantitative research was utilized to determine the nature of a situation as it existed at the time of the study. Likewise, non-experimental research is not generally directed toward hypothesis testing. Thus, this was an appropriate research design for describing the relationship between the social support and personal resilience of government employees. The study surveyed 210 government employees in Region XI. The researcher utilized simple random sampling in selecting the respondents. A simple random sample is a subset of a population in which each member has an equal chance of being selected; it is intended to represent the group in an unbiased manner (Hayes, 2019). The survey data are assumed to follow a quantitative probability distribution, so the relevant detail is captured by the means and the regression coefficients. The inclusion criteria covered government employees able to read and write, to comprehend the consent form, survey instrument and instructions, and who voluntarily submitted to the survey; additionally, those who were willing to give consent and to participate were included in the study. The exclusion criteria covered those who were not willing to participate. Lastly, the withdrawal criteria covered any violation by the researcher of the respondents' privacy or of the confidentiality of their identity, which needed to be protected. The respondents were free to decide not to engage, to refuse to take part, or to terminate involvement at any time without any penalty or loss of any benefit to which they were otherwise entitled. The study also took into account the explanation of the existence and probability of potential distress or negative effects, including cognitive risks, if any, what had been done to mitigate such hazards, and the measures to be taken where appropriate. The following statistical measures were used in the computation of data and in testing the hypotheses at the 0.05 level of significance: the mean was used to determine the level of social support and personal resilience of government employees; Pearson r was utilized to establish the significant relationship between social support and personal resilience of government employees; and linear regression was used to determine the significant influence of social support on the personal resilience of government employees.
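For readers unfamiliar with the statistical tools named above, the short Python sketch below shows how the mean, Pearson r, and a multiple linear regression of the four social support domains on personal resilience can be computed. The data are synthetic stand-ins generated for illustration; none of the numbers, variable weights, or outputs come from the actual survey.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 210  # sample size matching the study

# Synthetic 1-5 Likert-style domain scores (stand-ins, not the real data):
# columns are emotional, instrumental, need for support, support seeking.
X = np.clip(rng.normal(4.0, 0.5, size=(n, 4)), 1, 5)
resilience = np.clip(
    0.4 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2] + 0.1 * X[:, 3]
    + rng.normal(0.0, 0.3, n) + 0.8, 1, 5)

print("mean resilience:", round(float(resilience.mean()), 2))

# Pearson r between overall social support and resilience.
overall_support = X.mean(axis=1)
r, p = stats.pearsonr(overall_support, resilience)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")

# Multiple regression: which domain best predicts resilience?
A = np.column_stack([np.ones(n), X])           # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, resilience, rcond=None)
for name, b in zip(["intercept", "emotional", "instrumental", "need", "seeking"], coef):
    print(f"{name}: b = {b:.3f}")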
Results and Discussion
Presented in Table 1 is the level of social support of government employees. The overall mean score was 3.99, verbally described as high. The result shows that social support was oftentimes manifested. The standard deviation was less than 1.00, which signified the consistency of responses among the respondents. Scrutinizing the individual indicators of the level of social support of government employees, the results were as follows: instrumental support obtained the highest mean score of 4.20, labelled as very high; next was emotional support, with a mean of 4.05 and a descriptive level of high; support seeking had a mean of 3.90, with a descriptive equivalent of high; and the lowest mean score belonged to need for support, with 3.81 and a descriptive level of high.
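The descriptive labels attached to these means appear to follow the usual equal-width cut-offs for a five-point scale (for example, 4.20-5.00 for "very high" and 3.40-4.19 for "high"); the exact ranges are an assumption, since the paper's scoring rubric is not reproduced here. A minimal Python sketch of such a mapping:

def descriptive_level(mean_score):
    # Map a 5-point-scale mean to a descriptive label using assumed
    # equal-width cut-offs (0.80 per band); the study's rubric may differ.
    bands = [(4.20, "Very High"), (3.40, "High"), (2.60, "Moderate"),
             (1.80, "Low")]
    for lower, label in bands:
        if mean_score >= lower:
            return label
    return "Very Low"

for indicator, m in [("instrumental support", 4.20), ("emotional support", 4.05),
                     ("support seeking", 3.90), ("need for support", 3.81)]:
    print(f"{indicator}: mean {m:.2f} -> {descriptive_level(m)}")
# Reproduces the labels reported for Table 1: 4.20 falls in the very-high
# band, while the remaining indicators fall in the high band.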
The high-level result for the social support of government employees parallels the findings of Colarossi & Eccles (2003) that social support affects mental health outcomes by increasing beliefs of acceptance, self-worth and connectedness to others. These findings also support the study of Ozbay et al. (2007), who said that social support seems to moderate genetic and environmental vulnerabilities for mental illness, possibly through effects on other psychological factors, such as fostering effective coping strategies. Reflected in Table 2 is the level of personal resilience of government employees. The overall mean score was 4.08, labelled as high. The high-level result means that personal resilience is oftentimes manifested. Data revealed that the indicator with the highest mean was positive future orientation, with a mean score of 4.57 and a descriptive equivalent of very high, followed by emotional regulation, with a mean score of 4.02 and a descriptive equivalent of high; the lowest mean score belonged to novelty seeking, with 3.67 and a descriptive equivalent of high.
The high level of personal resilience is in consonance with the findings of Masten (2001), who suggested that resilience stems from the healthy operation of basic human adaptational systems. The result of this study also parallels the findings of Gartland (2011), which claim that resilient people are more likely to be optimistic, have a positive sense of the future, and hold future attainment objectives than those who are not resilient. Shown in Table 3 is the significant relationship between the social support and personal resilience of government employees. Social support, when correlated with personal resilience, yielded an overall r-value of .656 with a p-value less than 0.05. Therefore, the two variables are significantly related to each other, and the null hypothesis of no significant relationship between social support and personal resilience of government employees was rejected. Further, the indicators of social support correlated with the indicators of personal resilience yielded the following results: emotional support correlated with novelty seeking, emotional regulation, and future orientation got an overall r-value of .567 at a p-value less than 0.05; instrumental support got an overall r-value of .530; need for support got an overall r-value of .539; and support seeking got an overall r-value of .513, each with a p-value less than 0.05. Therefore, all indicators of social support, when correlated with all indicators of personal resilience, are significant. Moreover, the indicators of personal resilience correlated with the indicators of social support showed the following results: novelty seeking correlated with emotional support, instrumental support, need for support, and support seeking obtained an overall r-value of .573; emotional regulation obtained an overall r-value of .504; and positive future orientation obtained an overall r-value of .520, each with a p-value less than 0.05.

Table 3. Correlations between the indicators of social support (rows) and of personal resilience (columns); entries are r-values with p-values in parentheses (** correlation significant, p < 0.01):

                        Novelty Seeking   Emotional Regulation   Positive Future Orientation   Overall
Emotional Support       ...               .495** (.000)          .460** (.000)                 .567** (.000)
Instrumental Support    .391** (.000)     .416** (.000)          .477** (.000)                 .530** (.000)
Need for Support        .549** (.000)     .384** (.000)          .383** (.000)                 .539** (.000)
Support Seeking         .517** (.000)     .358** (.000)          .378** (.000)                 .513** (.000)
Overall                 .573** (.000)     .504** (.000)          .520** (.000)                 .656** (.000)

The findings of the study revealed that there is a significant relationship between the social support and personal resilience of government employees. The findings support the idea of Cicchetti & Toth (1998) that the level and degree of social support affects the level of resilience as a protective factor of individuals. On the other hand, the study is in consonance with that of Marques and Berry (2021), which offers an analytical resilience framework for examining and improving work-life balance, supplemented with three mini-cases, followed by a work-life balance and resilience analysis of the cases demonstrating the strength and benefits of the resilience framework. Displayed in Table 4 is the significance of the influence of social support on the personal resilience of government employees.
As shown in the table, social support significantly influenced the personal resilience of government employees, with emotional support emerging as the best predictor among the four domains of social support.
Conclusion and Recommendation
From the findings of the study, conclusions are drawn and presented in this section. The findings of this study confirm the significant influence of social support on the personal resilience of government employees. Moreover, the results showed that social support and personal resilience among government employees are high, meaning they are oftentimes manifested. The findings likewise indicate that there is a significant relationship between social support and personal resilience. Furthermore, among the four domains of social support, emotional support emerged as the best predictor of personal resilience. This conclusion can be associated with the results of Colarossi & Eccles (2003), who found that social support acts as a shield against the negative effects of stressful life events and thus against negative effects on mental health. Hence, the level and degree of social support affects the level of resilience as a protective factor in individuals (Cicchetti & Toth, 1998). On the other hand, Marques and Berry (2021) offer an analytical resilience framework for examining and improving work-life balance, supplemented with three mini-cases, followed by a work-life balance and resilience analysis of the cases demonstrating the strength and benefits of the resilience framework. Based on the foregoing findings and conclusions, a number of recommendations are offered. Since the level of social support is high, it is recommended that interventions focus on enhancing the social support of government employees to make it very high. The human resource department may design a program targeting the following items, which obtained the lowest mean in each domain of the social support instrument: 1) whenever I am not feeling well, other people show me that they are fond of me; 2) when I am worried, there is someone who helps me; 3) I get along best without any outside help; and 4) if I do not know how to handle a situation, I ask others what they would do. Likewise, because the level of personal resilience is high, the same recommendation is expressed: the following items, which got the lowest mean in the personal resilience questionnaire, are recommended as the focus of in-house training: 1) I find it bothersome to start new activities; 2) I think I have perseverance; and 3) I have a clear goal for the future. Given the significant relationship and influence of social support on personal resilience, it is recommended that policymakers, particularly officials from the national government, review and revisit their existing policies to determine whether they address the demands and challenges in times of pandemic, mainly focusing on how to heighten employees' social support and thereby also improve their personal resilience. Finally, future studies examining other variables that can possibly influence personal resilience, which will be of utmost importance to the research community, should be taken into consideration. | 2022-01-03T16:12:10.105Z | 2021-12-24T00:00:00.000 | {
"year": 2021,
"sha1": "0c69e1eff7aa66f79ef54006080b5168c620c9ce",
"oa_license": "CCBY",
"oa_url": "https://ijsrm.in/index.php/ijsrm/article/download/3616/2469",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a4c53bb4587832b2d0a09dd0525b578e327935f7",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
14663602 | pes2o/s2orc | v3-fos-license | Investigation of the association of Apgar score with maternal socio-economic and biological factors: an analysis of German perinatal statistics
Purpose To examine the relationship of 5-min Apgar score with maternal socio-economic and biological factors. Methods We analyzed data from 465,964 singleton pregnancies (37–41 weeks’ gestation) from the German perinatal statistics of 1998–2000. Using a logistic regression model we analyzed the incidence of low (0–6) 5-min Apgar scores in relation to these maternal factors: body mass index (BMI), age, previous live births, country of origin, occupation, single mother status, working during pregnancy, and smoking. Results A low Apgar score was more common in overweight [adjusted odds ratio (OR) 1.24; 95% confidence interval (CI) 1.10–1.40; P < 0.001] and obese [OR 1.92 (95% CI 1.67–2.20); P < 0.001] compared to normal weight women. A low Apgar score was also more common for women aged >35 years compared to those aged 20–35 years [OR 1.35 (95% CI 1.16–1.58); P < 0.001]. Furthermore, odds of a low Apgar score were higher for women with no previous live births compared to those with one or more previous live births [OR 1.52 (95% CI 1.37–1.70); P < 0.001]. Socio-economic factors did not convincingly influence Apgar scores. Conclusions There was an influence of the biological maternal factors age, BMI, and parity on the 5-min Apgar score. There was no convincing effect of socio-economic factors on Apgar score in our study population. Possible reasons for this are discussed.
Introduction
The Apgar score [1] is commonly used to evaluate neonatal well-being immediately after birth. A total score of seven or more out of ten is considered an indication of a normal neonatal condition and a score of three or less is taken as a reason for especial concern. For its original use of predicting outcome in the neonatal period, the Apgar score is as useful today as it was when Virginia Apgar first described it. Taken individually, the 5-min Apgar score has been shown to be a better predictor of neonatal outcome than umbilical-artery blood pH [2], although it may be useful to combine these two predictors.
Even though the Apgar score was not originally intended to predict long-term health outcomes, it does nonetheless inform about prognosis beyond the neonatal period. It has been known for a while that the 5-min Apgar score is a good predictor of survival in infancy [3]. Likewise, it has been known for some time that a low Apgar score (0-3) for a prolonged period of time is predictive of subsequent disability [4]. A recent meta-analysis confirms that the outcome of neonates with a score of zero at 10 min is almost universally poor [5]. Interestingly, recent work showed that even transiently low Apgar scores are associated with lower IQ in later life [6].
Because the Apgar score is such an important indicator of subsequent outcome, knowing parameters associated with a low Apgar score is of clinical and epidemiological interest. Predicting low Apgar scores may allow the appropriate planning of neonatal care. Previous work showed that socio-economic as well as biological factors of the mother can be predictors of Apgar scores. For example, low social class, poor educational level and adverse social circumstances have been associated with lower Apgar scores in previous studies [7][8][9]. Of course adverse events during pregnancy and birth such as maternal infection can also result in a low Apgar score.
Our database constructed from data collected for German perinatal statistics provides a suitable means to investigate the influence of maternal biological and socio-economic factors on Apgar scores. The collection of perinatal statistics is mandatory in Germany. Data are compiled on a variety of maternal biological and socio-economic parameters throughout pregnancy. Data are also collected regarding the condition and well-being of the neonate following delivery, including Apgar scores. In this paper we aimed to analyze the relationship between 5-min Apgar scores and maternal biological and socio-economic parameters.
Materials and methods
We analyzed data of 465,964 singleton pregnancies and births from the perinatal statistics of eight German federal states: Bavaria, Brandenburg, Hamburg, Lower Saxony, Mecklenburg-Western Pomerania, Saxony, Saxony-Anhalt, and Thuringia. Data were collected between 1998 and 2000 and kindly passed on to us for analysis. We analyzed 5-min Apgar scores and their relation to a number of maternal biological and socio-economic factors: body mass index (BMI), age, previous live births, country of origin, occupation, whether the mother was a single parent or not, whether the mother worked during pregnancy or not, and number of cigarettes smoked per day (if any). To exclude length of gestation as a confounding factor, we restricted our analysis to cases with a duration of pregnancy of 37-41 completed weeks. This left 465,964 out of a total of 508,926 cases of singleton pregnancies in our database.
Nominal data are expressed as percent values. For bivariate analyses, the χ² test was used. Multivariable logistic regression was used to assess the association between Apgar scores and maternal biological as well as socio-economic factors. Odds ratios (OR) with regard to having a neonate with a 5-min Apgar score of 0-6 were calculated and adjusted for the following parameters:
• BMI: underweight (BMI < 18.5), overweight (BMI 25-29.99), or obese (BMI ≥ 30) women with reference to those of normal weight (BMI 18.5-24.99);
• Age: women aged <20 or >35 years with reference to those aged 20-35 years;
• Previous live births: women with no previous live births with reference to those with previous live births;
• Smoking status: smokers consuming 10 or fewer cigarettes a day and smokers consuming 11 or more cigarettes a day with reference to non-smokers;
• Maternal country of origin: women with a country of origin other than Germany with reference to women born in Germany (see Fig. 1);
• Maternal occupation: women in one of the first six occupational categories used in German perinatal statistics (Fig. 2) with reference to those in the most qualified occupational category (high-level public employee, very highly skilled employee, etc.);
• Single mother status: women who described themselves as single mothers with reference to those who did not;
• Working during pregnancy: women who worked during pregnancy with reference to those who did not work.
OR were calculated with 95% confidence intervals (CI). A value of P < 0.05 was considered statistically significant. All statistical analyses were performed with SPSS software, version 15.0.

Results

Table 1 illustrates that the distribution of 5-min Apgar scores is significantly influenced by maternal BMI, age, the presence or absence of previous live births, smoking status and number of cigarettes smoked per day, maternal occupation, maternal country of origin, being a single mother or not, and working during pregnancy or not. This means that these factors need to be considered in our logistic regression analysis (Table 2).
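As an illustration of the adjusted odds ratio computation described in Materials and methods, the following is a minimal Python sketch; the study itself used SPSS 15.0, and the sample size, effect sizes, and variable names below are hypothetical placeholders for a reduced three-covariate model.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5000  # hypothetical sample, far smaller than the 465,964 analyzed cases

# Binary indicators: obese, age > 35 years, no previous live births
X = rng.integers(0, 2, size=(n, 3)).astype(float)
lin = -3.5 + 0.65 * X[:, 0] + 0.30 * X[:, 1] + 0.42 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)  # low Apgar (0-6)

res = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
ors, cis = np.exp(res.params), np.exp(res.conf_int())  # ORs and 95% CIs
for name, o, (lo, hi) in zip(["const", "obese", "age>35", "nulliparous"], ors, cis):
    print(f"{name:>11}: OR = {o:.2f} (95% CI {lo:.2f}-{hi:.2f})")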
Socio-economic factors
Our regression analysis revealed that maternal occupation was not significantly associated with 5-min Apgar scores (Table 2; Fig. 1). Regarding maternal country of origin, only the comparison of the group of "other" countries (i.e., countries not otherwise classified in German perinatal statistics) with Germany yielded a statistically significant result [OR 1.73 (95% CI 1.05-2.86)], see Table 2 and Fig. 1. Working during pregnancy or not and whether the mother was a single parent or not had no significant impact on the odds of having low Apgar scores. Similarly, for cigarette smoking our full model regression analysis did not reveal a significant association (Table 2). However, neonates of heavy smokers had significantly worse Apgar scores when a simpler regression model was used for analysis (not taking maternal country of origin and occupation into account, data not shown).
Biological factors
BMI was significantly associated with Apgar score. Both overweight [OR 1.24 (95% CI 1.10-1.40); P < 0.001] and obese women [OR 1.92 (95% CI 1.67-2.20); P < 0.001] had significantly higher odds of low Apgar scores (0-6) compared to normal weight women, as seen in Table 2. Figure 3 shows the distribution of Apgar scores according to BMI. It is evident that lower Apgar scores are more common in the overweight and obese compared with the normal weight category. Maternal age and parity had significant influences on Apgar scores. Older women (above 35 years) had higher odds of low Apgar scores compared to women aged 20-35 years [OR 1.35 (95% CI 1.16-1.58); P < 0.001]. Absence of previous live births was also associated with higher odds of low Apgar scores [OR 1.52 (95% CI 1.37-1.70); P < 0.001].
Discussion
Overall, we found no convincing associations between 5-min Apgar score and a variety of socio-economic factors of the mothers, including country of origin, occupation, and smoking.

Fig. 1 Five-minute Apgar score according to country of origin of the mother. The terms for the geographical regions were translated as closely as possible from the German data collection form. "Central and Northern Europe" includes Austria, Switzerland, France, Belgium, the Netherlands, Luxembourg, Great Britain, Denmark, Sweden, Norway, and Finland. "Eastern Europe" includes the countries east of Germany.

In our full regression model only the comparison of the heterogeneous group of "other" maternal countries of origin with Germany as a country of origin yielded a result that was just statistically significant. However, we did find a number of significant associations between biological parameters such as age, BMI, and parity and 5-min Apgar scores.
Some limitations to our study need discussion. Our analysis relied on self-reporting of socio-economic parameters by the pregnant women. Self-reporting of socio-economic status and lifestyle habits (smoking) will not always be accurate. We also relied on the pre-existing classification of socio-economic parameters such as country of origin and occupation into rather broad categories. Some categories, for example, the German term "Mittlerer Osten" (Middle Asia) that was used in the classification system for maternal country of origin, are not precisely defined and thus may be open to interpretation. Likewise, the occupational categories are broad and accurate classification might be difficult in individual cases.

Note to Table 2: OR were calculated with regard to the odds of having a neonate with a 5-min Apgar score of 0-6. References were: Apgar score of 7-10, BMI of 18.5-24.99, age of 20-35 years, ≥1 previous live births, non-smoker, classified in the most qualified occupational category (high-level public employee, very highly skilled employee, etc.), mother born in Germany, not being a single mother, and not working during pregnancy. * OR adjusted for the following parameters: BMI, age, previous live births, smoking status, occupation of the mother, country of origin of the mother, single mother status, and working during pregnancy. a P < 0.001.

Fig. 3 Five-minute Apgar score according to maternal BMI.

We are not alone in finding that associations of Apgar score with socio-economic parameters or lifestyle factors may be difficult to prove. For example, a recent study of a large cohort of over 50,000 children reported that the association between maternal smoking and Apgar score was eliminated after adjustment for confounders [10]. However, while we did not, with one exception, find significant associations of socio-economic factors with Apgar score in our full regression model, others did find such associations. A Finnish study of 2,912 women demonstrated a link between belonging to a lower social class, as evidenced by occupation and years of education, and having more infants with poor Apgar scores [7]. Another Finnish study confirmed the association between adverse social circumstances and low Apgar scores: children taken into custody and placed in foster care had lower Apgar scores compared with population-based controls [8]. A Swedish study of 183,637 males born between 1973 and 1976 reported that mothers working in non-manual and self-employed occupations were less likely to have an infant with a low Apgar score than manual workers [9]. That study also found that the risk of a low Apgar score decreased with increasing maternal level of education.
The influence of socio-economic factors on Apgar scores seems to depend on the population studied and on precisely what socio-economic parameter is investigated. For example, a study from Spain found that perinatal complications, including an Apgar score of six or below, were not more frequent in the newborns of immigrant mothers compared to Spanish mothers [11]. An Australian study, however, reported that only 76.7% of babies born to indigenous Australian mothers fell into a "healthy baby" category, as characterized by being a live birth, a singleton, born after 37-41 completed weeks' gestation, having a birthweight of 2,500-4,499 g, and a 5-min Apgar score of at least 7 [12]. For non-indigenous mothers the rate of "healthy babies" was 85.0%. A study from Washington State compared newborns of Somali immigrant women with those of black and white US-born women. Neonates born to Somali women were at increased risk of lower 5-min Apgar scores [13]. Given this dependence on study population, it is perhaps not surprising that we could not demonstrate a convincing influence of socio-economic factors on Apgar scores even though we analyzed a large dataset. Our inability to find significant correlations may have to do with the above-mentioned limitations inherent in our data or it may be that socio-economic differences and their impact on perinatal outcomes are less in Germany than in some other populations mentioned above. Another possible explanation is that our regression analysis, by taking confounding factors into account, eliminated some apparent influences of socio-economic factors. Confounding and lifestyle factors may also provide an at least partial explanation of the results of the other studies. For example, in the study of indigenous Australian mothers more than half of these were smokers [12]. Alcohol consumption, although not mentioned in that study, may also have played a role.
Our result that working during pregnancy did not significantly increase the odds of having a newborn with a low Apgar score is encouraging. Employment during pregnancy does not seem to adversely affect neonatal outcome. This is in agreement with other work. Marbury and colleagues compared pregnancy outcomes of 7,155 women who worked between 1 and 9 months of pregnancy with outcomes of 4,018 women who were not employed. They found no differences in a range of parameters of neonatal health, including Apgar score [14].
A comparison of women aged 20-30 years with women over 40 from Taiwan (n = 400) found that the incidence of 5-min Apgar scores below 7 was significantly higher in the older group [15]. Similarly, we demonstrated increased odds of low Apgar scores for older mothers. A retrospective cohort study of nearly 3.9 million pregnancies and births from the USA found that infants born to teenage mothers of 17 or younger had a higher risk of low 5-min Apgar scores [16]. In contrast, we did not find a significant difference between mothers aged below 20 and those aged 20-35.
We also demonstrated an influence of maternal BMI on Apgar scores. This stands in contrast to some previous work. A population-based study of 60,167 deliveries from Wales found that a 5-min Apgar score below 7 was not significantly more common in the obese compared to normal weight women [17]. Likewise, a Danish study of 8,092 women found no differences between normal weight, overweight, and obese women with regard to Apgar score [18].
In conclusion, we demonstrated an effect of the biological parameters age, BMI, and parity on the 5-min Apgar score. We could not convincingly demonstrate an effect of socio-economic factors on Apgar score in our study population.
Conflict of interest statement
We declare that we have no conflicts of interest.
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. | 2017-08-02T19:41:49.410Z | 2009-08-28T00:00:00.000 | {
"year": 2009,
"sha1": "eabc13264b91cfd44654b711a941ddc808996412",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00404-009-1217-7.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "eabc13264b91cfd44654b711a941ddc808996412",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237697178 | pes2o/s2orc | v3-fos-license | Pulsed Ultraviolet Light Treatment of Chicken Parts
With increasing production and consumption of chicken, it is appropriate to investigate the functionality and effectiveness of microbial reduction interventions and the qualitative effects they have on food. The effectiveness of pulsed ultraviolet (PUV) light applied to chicken on a moving conveyor was evaluated for inactivation of Escherichia coli on the surface of raw boneless/skinless (B/S) chicken breasts, B/S chicken thighs, and bone-in/skin-on chicken thighs. The conveyor height (distance from the flashlamp) and speed were set to deliver total energy fluences of 5, 10, 20, and 30 J/cm2 to the surface of the products. The product type by energy fluence interaction was significant (P = 0.015) for microbial reduction of E. coli. Exposure to PUV light at 5 and 30 J/cm2 resulted in Log10 reductions of 0.29 and 1.04 for B/S breasts, 0.34 and 0.94 for B/S thighs, and 0.10 and 0.62 for bone-in/skin-on thighs, respectively. Lipid oxidation and changes in color of chicken samples were evaluated after 30 J/cm2 of PUV light treatment. Lipid oxidation was measured at 0, 24, 48, and 120 h after the treatment. PUV light treatment did not produce significant (P > 0.05) changes in lipid oxidation values for each product type. International Commission on Illumination L*, a*, and b* parameters were used to report lightness and color of samples before and after treatment for B/S breasts and thighs and bone-in/skin-on thighs. Color parameters were not significantly (P > 0.05) affected by PUV light treatments. In conclusion, this study indicates that PUV light applied to the surface of raw chicken parts on a moving conveyor is an effective surface antimicrobial treatment while inducing minimal change in quality of the product over a 5-d storage period under aerobic conditions.
Introduction
Raw chicken provides all of the necessary conditions needed to harbor and support the growth of spoilage and pathogenic microorganisms during refrigerated transportation and storage. The most prevalent foodborne pathogens associated with raw chicken include Salmonella and Campylobacter (Haughton et al., 2011; United States Department of Agriculture [USDA], 2012; McLeod et al., 2018). A report by the Foodborne Disease Active Surveillance Network indicated that the numbers of foodborne illness outbreaks caused by Salmonella and Campylobacter reported in the United States in 2012 were 535 and 23, respectively (Centers for Disease Control and Prevention [CDC], 2017b). Between 1988 and 1992, the CDC reported 40 foodborne illness outbreaks associated with chicken, which accounted for 1.65% of all foodborne illness outbreaks in the United States (CDC, 1996). Between 2009 and 2015, the CDC reported a total of 123 chicken-associated foodborne illness outbreaks, which accounted for 9.60% of all outbreaks in the United States (CDC, 2017a; Dewey-Mattia et al., 2018). The apparent rise in chicken-associated outbreaks emphasizes the need to identify effective interventions to reduce the presence of the pathogens on chicken.
Current intervention steps used during poultry processing for the reduction of foodborne pathogens include the application of antimicrobial solutions in the form of diluted hypochlorite or organic acid (citric acid, propionic acid, peroxyacetic acid, or lactic acid) rinses (Bolder, 1997;Demirci and Ngadi, 2012). A review by Demirci and Ngadi (2012) reported that hypochlorite solutions reduced Salmonella and Campylobacter by 0.1 to 2.4 and 0.2 to 3.0 Log 10 colony-forming units (CFU)/cm 2 , respectively, when applied to chicken parts. Benefits of organic acids include their low cost and consumer acceptance. Killinger et al. (2010) reported greater than 2.0 Log 10 CFU/mL reduction of aerobic plate counts and coliform levels on carcasses after treatment with 2% lactic acid in a 3-min rinse. Regardless of their antimicrobial efficacy, higher concentrations of organic acid solutions can cause surface discoloration and other quality defects (Demirci and Ngadi, 2012).
Pulsed ultraviolet (PUV) light has been investigated as another alternative microbial reduction intervention. PUV light quickly achieves germicidal effects similar to those of conventional ultraviolet (UV) light applied for an extended time. In the UV light spectrum, wavelengths between 100 and 280 nm produce a germicidal response by altering DNA structure and damaging cellular membranes (Elmnasser et al., 2007;Koutchma, 2009). However, low energy output of conventional UV limits its use for food processing (Demirci and Ngadi, 2012).
PUV light uses a xenon flashlamp to produce a spectrum of 100 to 1,100 nm, which includes conventional UV light (100 to 400 nm). PUV light is emitted in short bursts of very high energy intensity (Dunn et al., 1997;Krishnamurthy et al., 2010;Demirci and Krishnamurthy, 2011). PUV light systems can be adjusted for the number and duration of pulses, but the current literature references 3 pulses per second with each pulse lasting 360 μs as the most common application (Demirci and Ngadi, 2012).
Previous research using a lab benchtop unit demonstrated the antimicrobial effects of PUV light on the surface of raw chicken (Keklik et al., 2009;Cassar et al., 2019). Keklik et al. (2009) studied the effect of PUV light for the reduction of Salmonella serovar Typhimurium on the surface of boneless/skinless (B/S) chicken breast. They reported Log 10 reductions of Salmonella (CFU/cm 2 ) ranging from 1.2 to 2.4 after a 5-s treatment at 13 cm and a 60-s treatment at 5 cm, respectively. Cassar et al. (2019) applied PUV light to inoculated lean and skin surfaces of chicken thighs for 5 and 45 s and reported 1.21 and 1.99 Log 10 reductions for Escherichia coli, 1.26 and 1.97 Log 10 reductions for Campylobacter, and 1.23 and 2.12 Log 10 reductions for Salmonella, respectively.
For PUV light applications to be effective in commercial settings, the technology needs to be validated on a pilot system that more closely represents commercial production. In the current study, the effectiveness of PUV light for microbial reduction and its effects on quality of chicken cuts have been investigated using PUV light applied to products on a moving conveyor, representative of those used in commercial settings.
Materials and Methods
Microorganism

E. coli K12 was selected as a nonpathogenic surrogate microorganism to replace Salmonella and Campylobacter. Previous research indicates that E. coli K12 behaves similarly to Salmonella and Campylobacter in different food systems (Keklik et al., 2009; Cassar et al., 2019). Cultures were acquired from the E. coli Reference Center at Pennsylvania State University (University Park, PA). An antibiotic-resistant strain of E. coli K12 was used in order to allow antibiotic suppression of the natural microflora. Nalidixic acid (Acros Organics, Geel, Belgium) and streptomycin sulfate (Thermo Fisher Scientific, Fair Lawn, NJ) were used to prepare nalidixic acid- and streptomycin sulfate-resistant (NSR) E. coli K12 as described by Catalano and Knabel (1994). Stock culture was stored at −80°C in 20% glycerol and 80% tryptic soy broth (TSB; BD, Franklin Lakes, NJ). Working culture of E. coli K12 NSR was maintained at 4°C in TSB supplemented with 0.6% yeast extract and 100 mg/L each of nalidixic acid and streptomycin sulfate (TSBYE-NS) and subcultured every 14 d.
Inoculum preparation
E. coli K12 NSR inoculum was prepared as described in Cassar et al. (2019): working culture was transferred into 1,000 mL of TSBYE-NS and incubated at 37°C for 24 h. After incubation, cultures were centrifuged (30 min at 3,300 × g and 10°C), the supernatant was removed, and 500 mL of sterile 0.1% peptone water (BD) was used to resuspend the cells. The suspension was recentrifuged under the same conditions, and the pellet was resuspended in sterile buffered peptone water (Oxoid, Hampshire, UK) at a 1× working concentration, yielding a cell population of approximately 8.0 Log10 CFU/mL. Chicken parts were kept frozen (ca. −17°C) until use and transferred to a refrigerator (ca. 4°C) to thaw 48 h prior to each trial. Samples were removed from the refrigerator and brought to room temperature (ca. 18°C) 2 h before each trial so as not to cold shock microbes during inoculation.
Chicken parts were inoculated with E. coli K12 NSR by means of total submersion (15 parts per 1,000 mL of inoculum). Chicken parts were held under submersion for 30 min at room temperature (ca. 18°C) to promote attachment, achieve even distribution, and obtain 5.0-6.0 Log 10 CFU/cm 2 of E. coli K12 NSR on the surface (Firstenberg-Eden, 1981).
Pulsed ultraviolet light conveyor system
A food product conveyor (350-cm-long and 38-cm-wide stainless steel mesh belt) was equipped with 2 PUV flashlamps mounted in series above the long axis of the conveyor (Model RC-802, XENON Corporation, Wilmington, MA). The assembly included two 40.64-cm (16 in) linear "C" type xenon flashlamps, used to generate PUV light (Figure 1). The flashlamps were positioned above the conveyor with the long axis of each lamp aligned parallel to the long axis of the conveyor to deliver the greatest possible amount of PUV fluence in a short period of time. Each lamp produced 3 polychromatic (100 to 1,100 nm) flashes per second with a flash duration of 360 μs each.
PUV light treatment
Inoculated chicken parts were individually subjected to PUV light treatment using the PUV light conveyor system as described. The parts were placed on the conveyor, and conveyor speed was adjusted to obtain the desired energy fluence. Total energy delivered to the surface of the chicken parts was controlled by adjusting the speed of the conveyor (meters per second) at a fixed proximity of 10 cm below the quartz windows of the PUV light units. Conveyor speeds were adjusted to 0.131, 0.065, 0.032, and 0.022 m/s to obtain fluences of 5, 10, 20, or 30 J/cm2, respectively. Chicken parts (n = 6) were treated in 2 passes with 180° top-to-bottom inversion of the chicken parts between passes to achieve complete PUV light exposure to all surfaces.
Microbial analysis
After treatment, 25 cm2 were removed from each treated surface (top/bottom) of each chicken part using a scalpel, yielding a total of 50 cm2. Surface samples were weighed to ensure that approximately 50 g was collected from each part. The surface samples were then transferred to a filtered stomacher bag (Classic 400, Seward Limited, Worthing, UK) with 100 mL of sterile buffered peptone water (Oxoid). Samples were stomached (Model 400, Seward Limited) for 1 min at 260 rpm. Solutions filtered out of the samples were serially diluted in buffered peptone water and spirally plated on TSAYE-NS plates using an autoplater (Autoplate 4000, Spiral Biotech, San Diego, CA). Cultured TSAYE-NS plates were incubated at 37°C for 24 h prior to enumeration using an autocounter (Q-Count version 2.1, Spiral Biotech). Microbial reductions (Log10 CFU/cm2) were determined via comparison of treated and untreated (control) samples, all of which passed under the conveyor.
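The reduction arithmetic in this paragraph can be made explicit with a short sketch; the plate counts, dilution factors, and plated volume below are invented illustrative numbers, not settings of the autoplater used in the study.

import numpy as np

def log10_cfu_per_cm2(colonies, dilution_factor, plated_ml,
                      rinse_ml=100.0, area_cm2=50.0):
    # Scale a plate count back up to the whole 100 mL rinse and
    # normalize by the 50 cm^2 of excised surface
    cfu_total = colonies * dilution_factor * (rinse_ml / plated_ml)
    return np.log10(cfu_total / area_cm2)

control = log10_cfu_per_cm2(colonies=230, dilution_factor=1e3, plated_ml=0.05)
treated = log10_cfu_per_cm2(colonies=180, dilution_factor=1e2, plated_ml=0.05)
print(f"Log10 reduction = {control - treated:.2f} CFU/cm^2")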
Energy and temperature measurements
Total energy (joules per square centimeter) delivered to the samples was determined using a Nova Laser Power/Energy Monitor (P/N 1J06013, Ophir Optronics Limited, Jerusalem, Israel) with a 46-mm aperture pyroelectric metallic absorber (P/N 1Z02860, Ophir Optronics Limited) to record energy at stationary 5-cm increments along the length of the conveyor belt. Energy recordings were averaged over 10 pulses and then calculated according to exposure duration to assess energy (joules per square centimeter) delivered to the sample. After plotting the total energy delivered at 5-cm increments along the length of the conveyor, total energy was calculated. To achieve total energy values of 5, 10, 20, and 30 J/cm 2 , conveyor speeds were set at 0.131, 0.065, 0.032, and 0.022 m/s, respectively.
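One plausible way to back the belt speeds out of the measured profile is sketched below. It assumes a per-pulse fluence profile E(x) sampled every 5 cm along the belt (the Gaussian shape and amplitude are invented, tuned only so the outputs land near the reported speeds) and that a point moving at speed v under lamps flashing at 3 pulses per second accumulates a total fluence of (pulse rate / v) x the integral of E(x); this relation is a reading of the procedure, not the paper's stated formula.

import numpy as np

# Invented per-pulse fluence profile (J/cm^2 per pulse) along the belt
x_cm = np.arange(0.0, 205.0, 5.0)
profile = 0.31 * np.exp(-((x_cm - 100.0) / 40.0) ** 2)

pulse_rate = 3.0  # flashes per second
# Trapezoidal integral of the profile (J/cm^2 * cm per pulse)
integral = np.sum(0.5 * (profile[1:] + profile[:-1]) * np.diff(x_cm))

def belt_speed_cm_s(target_fluence):
    # total fluence = pulse_rate * integral / speed  =>  solve for speed
    return pulse_rate * integral / target_fluence

for target in (5.0, 10.0, 20.0, 30.0):
    print(f"{target:>4.0f} J/cm^2 -> {belt_speed_cm_s(target) / 100:.3f} m/s")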
Chicken parts surface temperatures were determined using a type K thermocouple (Omegaette HH306, Omega Engineering Incorporated, Norwalk, CT) with a 15-cm-long and 1-mm-diameter probe. The temperature probe was placed approximately 1 mm under the surface of the chicken thigh sample within no more than 3 s following treatment. The probe measurements were derived from the 1 × 1 mm sensing tip of the probe.
Lipid oxidation
Whole chicken parts (n = 3) were treated with 30 J/cm2 of PUV light to observe effects on lipid oxidation, if they existed. Samples (10 g) were collected from each part and blended with 50 mL of deionized water, 5 mL of ethylenediaminetetraacetic acid, and 5 mL of n-propyl gallate for 2 min. The blended solution was transferred to a Kjeldahl flask with 2.5 mL of hydrochloric acid in 47.5 mL of deionized water. The solution was brought to a boil in order to collect approximately 50 mL of distillate, 5 mL of which was mixed with 5 mL of thiobarbituric acid reagent and left in a boiling water bath for 35 min. After 10 min of cooling, samples were transferred to cuvettes, and absorbance was measured using a spectrometer at 538 nm. Lipid oxidation was assessed by measuring thiobarbituric acid reactive substances (TBARS), as described by Tarladgis et al. (1960). Using this analysis, lipid oxidation was reported as the amount of malonaldehyde per 10 g of meat, as calculated from a standard curve prepared as described by Tarladgis et al. (1960) and Texas Tech University (2018). Chicken part sample TBARS values were measured immediately following PUV light treatment and again after 24, 48, and 120 h of refrigeration (ca. 4°C) in manually sealed Ziploc plastic bags.
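The standard-curve step can be made concrete with a short sketch; the curve points below are invented stand-ins for a curve prepared as in Tarladgis et al. (1960).

import numpy as np

# Invented standard curve: absorbance at 538 nm vs. known malonaldehyde
std_mda_ug = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # ug per 10 g equivalent
std_abs = np.array([0.002, 0.108, 0.214, 0.325, 0.437])

# Fit MDA as a linear function of absorbance, then invert sample readings
slope, intercept = np.polyfit(std_abs, std_mda_ug, 1)

def tbars_ug_per_10g(absorbance):
    return slope * absorbance + intercept

for a538 in (0.18, 0.33):
    print(f"A538 = {a538:.2f} -> {tbars_ug_per_10g(a538):.2f} ug MDA / 10 g meat")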
CIELAB color measurement
Whole chicken parts (n = 3) were treated with 30 J/cm2 of PUV light to observe effects on surface color. Surface color of B/S breasts, B/S thighs, and skin-on thighs was assessed using a Minolta Chroma Meter colorimeter with an 8-mm-diameter head, diffuse illumination, and an observer angle of 0° (Model CR 300, Minolta Incorporated, Ramsey, NJ) to measure the International Commission on Illumination (CIE) L*, a*, and b* color parameters, where L* represents lightness of the sample and a* and b* are chromaticity coordinates; −a* and +a* indicate green and red color, respectively, and −b* and +b* indicate blue and yellow color, respectively. When evaluating the color of the chicken part samples, 3 random locations per part were scanned to provide average L*, a*, and b* values for each chicken part. Color measurements were completed following the guidelines recommended by the American Meat Science Association (2012).
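A minimal sketch of the color comparison follows (three scan locations averaged per part, then the paired t test mentioned under Statistical analysis); all readings are invented placeholders.

import numpy as np
from scipy import stats

# Invented L* readings: 3 parts x 3 scan locations, before and after 30 J/cm2
before = np.array([[55.1, 54.6, 55.8],
                   [58.2, 57.9, 58.5],
                   [53.4, 53.9, 53.1]])
after = np.array([[55.4, 54.9, 55.2],
                  [58.0, 58.4, 58.1],
                  [53.7, 53.2, 53.6]])

L_before = before.mean(axis=1)  # average the 3 scans for each part
L_after = after.mean(axis=1)
t, p = stats.ttest_rel(L_before, L_after)
print(f"mean dL* = {(L_after - L_before).mean():+.2f}, paired t-test p = {p:.3f}")
# a* and b* would be handled the same way, with one paired test per parameter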
Statistical analysis
SAS software (version 9.4, SAS Institute Inc., Cary, NC) was used to carry out statistical analysis. A completely randomized design was used to evaluate microbial reduction on the surface of 6 independently evaluated chicken parts after treatment by PUV light. A 2-way analysis of variance was used to establish differences by the main effects, chicken part and PUV fluence, and their interaction. Microbial reduction was established by comparing untreated control samples to treated samples and calculating microbial reduction prior to statistical analysis. When analyzing lipid oxidation (n = 3), a repeated measures general linear model was used to evaluate chicken part type and storage time as repeated measures and their interaction after treatment by 30 J/cm2 of PUV light. Color values (n = 3) were analyzed using a paired t test to evaluate changes before and after treatment for each product and each CIE L*, a*, and b* color parameter. Tukey's multiple comparison test was used to separate means when the F-test was significant, P ≤ 0.05. The standard error of the mean was provided in tables, when necessary, to represent the deviation of the means within treatments (Steel and Torrie, 1960).
Results and Discussion

Microbial reductions
The Log10 reduction of E. coli K12 NSR on the surface of B/S chicken thighs and breasts and bone-in/skin-on chicken thighs after treatment by the PUV light on a moving conveyor was assessed at energy fluence values of 5, 10, 20, and 30 J/cm2 (Table 1). The product type by energy interaction for microbial reduction on chicken samples was significant (P = 0.015) for microbial reduction of E. coli. Microbial reduction increased with exposure to greater total fluence and the absence of skin on the product surface. With the exception of B/S breast after 5 J/cm2 of PUV light exposure, both B/S breast and thighs had significantly (P < 0.05) greater microbial reduction compared with bone-in/skin-on thighs at all other respective energy levels of exposure. As expected from a previous study (Cassar et al., 2019), microbial reduction on the surface of chicken parts after exposure to PUV light was significantly greater (P < 0.05) with increasing energy (joules per square centimeter) delivered. Nevertheless, microbial reductions throughout this study were generally less than 1.0 Log10. Microbial destruction by PUV light applied on the moving conveyor appears to be reduced when compared with previous work using a benchtop PUV light unit. The benchtop PUV light units, described in the literature, treated samples of chicken in a fixed position using duration of exposure and proximity to the PUV flashlamp to adjust for total energy exposure. Keklik et al. (2010) investigated the effect of PUV light for the reduction of Salmonella serovar Typhimurium on the surface of B/S chicken breast using a benchtop PUV light unit. They reported Log10 reductions of Salmonella (CFU/cm2) ranging from 1.2 to 2.4 after a 5-s treatment at 13 cm (5.6 J/cm2) and a 60-s treatment at 5 cm (67.0 J/cm2), respectively. Using a similar benchtop PUV light unit, McLeod et al. (2018) subjected B/S chicken breast fillets inoculated with spoilage and pathogenic bacteria to PUV light with fluences ranging from 1.25 to 18 J/cm2, leading to reductions ranging from 0.9 to 3.0 Log10 (CFU/cm2) of Salmonella enterica serovar Enteritidis, Listeria monocytogenes, Staphylococcus aureus, E. coli, Pseudomonas spp., Brochothrix thermosphacta, and Carnobacterium divergens.
Although no study has yet been designed to directly compare the performance of benchtop versus conveyormounted PUV systems, previous work from this laboratory provides pertinent insight. Using a benchtop PUV system delivering 20.5 J/cm 2 to lean surface of chicken thighs, Cassar et al. (2019) observed a microbial reduction of 1.70 Log 10 for E. coli K12 NSR. In the current study, using PUV lights mounted above a moving conveyor, an energy fluence of 20 J/cm 2 produced a much smaller microbial reduction of 0.74 Log 10 for E. coli K12 NSR on the lean surface of chicken thigh with nearly identical energy fluence. Additionally, differences in microbial reduction may be associated with PUV light shadowing due to the irregular shapes and sizes of whole chicken parts. Because PUV light is only effective when delivered directly to the microbes, shadowing would be expected to protect microorganisms from germicidal exposure. These specific observations and numerous others warrant continued investigation to better understand this discrepancy.
Temperature and energy measurement
After warming at room temperature (ca. 18°C) for 2 h, the initial surface temperature of the raw chicken thigh samples was ca. 17.8°C. The surface temperature was measured immediately following each PUV light treatment condition at 5, 10, 20, and 30 J/cm 2 . Rise in temperature was not significantly different (P > 0.05) between skin and lean surface of raw chicken parts but did significantly increase with greater total PUV light fluence (P < 0.0001). The final surface temperature for chicken parts after exposure to 5, 10, 20, and 30 J/cm 2 was 19.1°C, 20.8°C, 22.9°C, and 26.9°C, respectively. The final surface temperatures were the result of 1.4°C, 3.0°C, 5.0°C, and 9.0°C increases at 5, 10, 20, and 30 J/cm 2 , respectively (Table 2).
Other researchers have reported similar findings, with numerically greater temperature rise in a benchtop PUV unit compared with that reported in the current study. Keklik et al. (2010) reported 3.9°C, 6.7°C, 11.5°C, and 14.1°C rises at 2.9, 8.7, 17.4, and 26.1 J/cm2, and Cassar et al. (2019) reported 2.8°C, 4.5°C, 6.2°C, and 10.0°C rises at 3.38, 6.9, 10.2, and 20.8 J/cm2, respectively, for chicken parts treated with PUV light in a benchtop system. PUV light studies using a benchtop unit reported temperature rise approximately twice that of the conveyor system. The apparent difference in temperature rise between the benchtop and conveyor-type PUV systems may be due to the specific designs of the 2 units. The treatment chambers of the benchtop and conveyor units create convection and reflection effects that trap the heat energy associated with the PUV light spectrum. For the benchtop system, the chamber is completely enclosed, whereas the conveyor system has a larger total volume below the lamps due to the width of the conveyor belt and is open to the outside on each end. These design differences should be investigated to determine whether they contribute to variation in temperature or microbial destruction.
Lipid oxidation
Lipid oxidation was assessed for B/S breasts, B/S thighs, and bone-in/skin-on thighs after treatment with 30 J/cm2 of PUV light. Figure 2 depicts chicken product type TBARS values as a function of PUV light treatment at 4 time points: 0, 24, 48, and 120 h. The plot suggests that there is no significant difference (P > 0.05) between PUV light-treated and untreated chicken parts for each respective product type over time. Regardless of product type or treatment, refrigerated storage over time significantly contributed (P < 0.05) to increased TBARS values for all chicken products evaluated. Ultimately, PUV light treatment of 30 J/cm2 applied to chicken parts in this study did not lead to a significant increase (P > 0.05) in lipid oxidation as measured by TBARS, developing, on average, 3.33 and 3.02 μg malonaldehyde per 10 g of meat immediately after PUV exposure and 6.24 and 5.95 after 120 h of refrigerated storage for control and treated samples, respectively.
A similar study by Keklik et al. (2010) reported the effects of PUV light treatment on lipid oxidation of unpackaged chicken breast. Reported values were 5.87 and 12.43 μg of malonaldehyde per 10 g of meat after a 5-s treatment at 13 cm and a 60-s treatment at 5 cm, respectively. Untreated controls were reported to have 5.42 μg of malonaldehyde per 10 g of meat.
In another study, Paskeviciute et al. (2011) treated the surface of chicken breast with high-powered pulsed light (200 to 1,100 nm with pulse duration of 112 μs) and reported 2.04 and 10.19 μg of malonaldehyde per 10 g of meat after exposure of 0 and 2.7 J/cm 2 of PUV light, respectively. In an additional study by Keklik et al. (2009), PUV light-treated unpackaged chicken frankfurters were evaluated for lipid oxidation. After a 5-s treatment at 13 cm and a 60-s treatment at 5 cm, values of 5.60 and 7.65 μg of malonaldehyde per 10 g of meat were reported, respectively, and 5.03 μg of malonaldehyde per 10 g of meat was reported for untreated frankfurters. The values in the current study are consistently lower than previously reported research, which could be attributed to differences between the PUV light benchtop units and treatment using the conveyor system. The greater temperatures reported in the benchtop systems may contribute to initiation of lipid oxidation that is not observed in treatment by the conveyor system; this idea needs to be further evaluated. Additionally, the difference in fat content of the evaluated products may explain the differences observed between chicken skinwhich has the greatest concentration of lipids-and other products.
CIELAB color measurement
The color parameters L*, a*, and b* were assessed for B/S breasts, B/S thighs, and bone-in/skin-on thighs immediately after treatment of 30 J/cm2 of PUV light. L*, a*, and b* values were reported before and after treatment with PUV light (data not shown). Statistically, L*, a*, and b* values of the products did not significantly (P > 0.05) change after treatment with 30 J/cm2 of PUV light. In a similar study, Keklik et al. (2010) reported the fluctuations in L*, a*, and b* values of B/S chicken breast after treatment with PUV light. Reported ΔL*, Δa*, and Δb* values after a 5-s treatment at 13 cm (2.7 J/cm2) were +0.59, −0.77, and +0.70, respectively (P > 0.05). After a 60-s treatment at 5 cm (60.2 J/cm2), significant changes in L*, a*, and b* values were reported as +23.43, +3.46, and +7.70, respectively (P < 0.05). The energy values in the current study did not exceed 30 J/cm2 and did not result in any changes to surface lightness and color.
Conclusions
To the best of our knowledge, this study is the first to report the effects of PUV light applied to chicken parts on a moving conveyor. The results of this study demonstrate that PUV light treatment is effective at modestly reducing E. coli K12 NSR on the surface of chicken thighs, breast, and skin. The research indicates that the highest exposure of PUV light evaluated results in the greatest microbial reduction. Results for lipid oxidation and color analysis in this study indicate that PUV light, applied at 30 J/cm 2 , does not have significant effects on these product quality attributes of fresh chicken parts. Furthermore, greater energy fluences resulted in greater temperature rise on the surface of the products. The increase in temperature is generally undesirable for a raw product but may lead to increased microbial reduction. Continued investigation is needed to refine the application of PUV light for microbial reduction in order to satisfy the needs of commercial poultry processors. | 2021-09-27T20:55:39.704Z | 2021-07-23T00:00:00.000 | {
"year": 2021,
"sha1": "ff07b55a2912cab6a68aa5f447d90add6c66cbdf",
"oa_license": "CCBY",
"oa_url": "https://www.iastatedigitalpress.com/mmb/article/12256/galley/12621/download/",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c850ba39eea65f87caf3fbb0c5259a83545b24d7",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
119207637 | pes2o/s2orc | v3-fos-license | On Wilson loops for two touching circles with opposite orientation
We study the Wilson loops for contours formed by a consecutive passage of two touching circles with a common tangent, but opposite orientation. The calculations are performed in lowest nontrivial order for ${\cal N}=4$ SYM at weak and strong coupling and for QCD at weak coupling. After subtracting the standard linear divergence proportional to the length, as well the recently analysed spike divergence, we get for the renormalised Wilson loops $\mbox{log}~W_{\mbox{\scriptsize ren}}=0$. The result holds for circles with different radii and arbitrary angle between the discs spanned by them.
Introduction
Ultraviolet divergences of Wilson loops for smooth contours, as well as for those with cusps and intersecting points, have been studied in much detail from the early eighties to the present time. Especially the cusp anomalous dimension has drawn a lot of attention since it is also related to various other physical situations, see e.g. [1] and references therein. It diverges in the limit of vanishing opening angle. However, the removal of the regularisation does not commute with that limit, and only recently we have started the investigation of renormalisation in the presence of zero opening angle cusps, i.e. spikes [2].
A spike turned out to be responsible for a divergence proportional to the inverse square root of the product of the dimensionful cutoff and the jump in the curvature. The analysis was performed in lowest order at weak coupling, both for N = 4 SYM and QCD, and at strong coupling via holography in the supersymmetric case. In addition, the spike generates in the SUSY case, at least at weak coupling, an additional logarithmic divergence, which could be related to the breaking of zig-zag symmetry [3], [4].
Although the lowest order setting in [2] was very simple, the safe extraction of terms beyond the leading divergence required some technical effort. In the present paper we go one step further and want to evaluate also the finite terms, which after subtraction of the divergences define the renormalised Wilson loops. This we will do for a special contour. It is formed out of two touching circles with a common tangent in the following way. After starting at the common point one traverses the first circle and then continues along the second circle in just the opposite direction. The discs related to the circles are allowed to form an angle β.
The paper is organised as follows. The next section is devoted to lowest order at weak coupling for the locally supersymmetric Wilson loop. Then section 3 contains the holographic analysis at strong 't Hooft coupling. In section 4 we comment on the situation without supersymmetry by subtracting the scalar contributions from the result in section 2. After the concluding section follow two appendices containing the technical details of the asymptotic estimates of the necessary integrals.
2 Lowest order at weak coupling in N = 4 SYM

In N = 4 SYM the Euclidean local supersymmetric Wilson loop for a closed contour parameterised by x^µ(τ) is given by [5,6], [4]

W = (1/N) ⟨ tr P exp ∮ dτ ( i A_µ(x) ẋ^µ + |ẋ| Φ_I(x) θ^I ) ⟩ .   (1)

For simplicity we consider only the case of fixed θ^I ∈ S^5. Our contour of interest has been characterised in the introduction. Let the two circles, with radii R_1 and R_2, touch at the starting point of the parameterisation; the contour to be used in (1) is then given by a passage of the first circle, followed by a passage of the second circle in the opposite orientation. The situation for a fixed larger circle and smaller partner circles at various values of the angle β is illustrated in figure 1.

In the perturbative expansion of this Wilson loop, the integrals I_1 and I_2 correspond to the contributions where both endpoints of the propagators are on the same circle, while in I_12 the propagators connect the two circles (ǫ denotes a dimensionful parameter for UV regularisation). In I_1 and I_2 one trivial integration can be performed. In I_12, performing the ϕ_2-integration leaves a one-dimensional integral involving two functions f(R_1, R_2, β, ϕ) and g(R_1, R_2, β, ǫ, ϕ). The indefinite integral over f(R_1, R_2, β, ϕ) can be written in closed form in terms of an arctan function.
It is zero at both ends of the integration interval of the definite integral needed in (9). However, one has to be careful, since for cos β > R_2/(2R_1) the argument of the arctan function passes infinity within the integration interval ϕ ∈ [0, π]. This leads to an additional contribution in (12), proportional to the step function Θ(cos β − R_2/(2R_1)). For the integral over g(R_1, R_2, β, ǫ, ϕ) we change the integration variable as in (13), where we introduced as an abbreviation the distance between the centers of the two circles,

R_12 = ( R_1² + R_2² − 2 R_1 R_2 cos β )^{1/2} .   (15)

It is also related to the difference of the curvature vectors of the two circles at the touching point, via R_12 = R_1 R_2 |k_1 − k_2|. Then we arrive with (9), (11) and (12) at the representation (17) of I_12.
In the above equation use has been made of further definitions, among them the function h appearing in the numerator and the upper integration limit B_ǫ. We are interested in the finite piece of I_12 at ǫ → 0. Therefore, we have to keep control also over the O(√ǫ) contribution to the integral in (17). Now for each fixed x the numerator in the integrand of (17) is h = 1 + O(ǫ). But, unfortunately, this estimate does not hold uniformly in the whole integration range (0, B_ǫ). Hence the necessary analysis requires some detailed care and is put into appendix A. Inserting its result (70) for the integral into (17), we obtain the final expression for I_12. As one should have expected, the discontinuities at cos β = R_2/(2R_1), i.e. R_12 = R_1, showing up in both (12) and (70), cancel in the final result for I_12.
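Since the explicit parameterisation of the contour is not reproduced above, a numerical illustration of the 1/√ǫ spike behaviour of I_12 can still be given under an assumed geometry: two circles touching at the origin with common tangent along the x-axis, discs tilted by β, and the second circle run through with opposite orientation. Everything in the Python sketch below (the parameterisation, the chosen radii and angle, and the omission of overall coupling factors) is an assumption of this sketch, not the paper's own formulas.

import numpy as np

R1, R2, beta = 1.0, 0.6, 0.7  # assumed radii and disc angle

def circle1(phi):
    x = np.stack([R1 * np.sin(phi), R1 * (1 - np.cos(phi)),
                  np.zeros_like(phi)], axis=-1)
    t = np.stack([R1 * np.cos(phi), R1 * np.sin(phi),
                  np.zeros_like(phi)], axis=-1)
    return x, t

def circle2(phi):
    # same touching point and tangent, disc tilted by beta;
    # the minus sign in t encodes the opposite orientation
    x = np.stack([R2 * np.sin(phi),
                  R2 * (1 - np.cos(phi)) * np.cos(beta),
                  R2 * (1 - np.cos(phi)) * np.sin(beta)], axis=-1)
    t = -np.stack([R2 * np.cos(phi),
                   R2 * np.sin(phi) * np.cos(beta),
                   R2 * np.sin(phi) * np.sin(beta)], axis=-1)
    return x, t

def I12(eps, n=1500):
    # mixed contribution (x1'.x2' + |x1'||x2'|) / ((x1-x2)^2 + eps^2),
    # midpoint rule on both circles, overall couplings omitted
    phi = (np.arange(n) + 0.5) * 2 * np.pi / n
    x1, t1 = circle1(phi)
    x2, t2 = circle2(phi)
    d2 = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)
    num = (t1[:, None, :] * t2[None, :, :]).sum(-1) \
        + np.linalg.norm(t1, axis=-1)[:, None] * np.linalg.norm(t2, axis=-1)[None, :]
    return (num / (d2 + eps ** 2)).sum() * (2 * np.pi / n) ** 2

for eps in (0.04, 0.01, 0.0025):
    val = I12(eps)
    print(f"eps={eps:<7} I12={val:10.3f}  sqrt(eps)*I12={val * np.sqrt(eps):.3f}")
# the last column should approach a constant as eps -> 0,
# exhibiting the 1/sqrt(eps) spike divergence of the touching point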
Holographic evaluation at strong coupling
To generate the two circles as the image of two straight lines after an inversion on the unit sphere, we have to choose for these lines two parallels to the common tangent of the circles, as in (24). The distance between them is R_12/(2 R_1 R_2), with R_12 from (15). As a result one gets the circles in the form (27).

The minimal surface in AdS approaching the two straight lines (24) on the boundary is given by (28), in Poincaré coordinates x_1, x_2, x_3, z, with z = 0 as boundary. The function r(σ) is defined via the integral in (29), with r_0 fixed by (30). The AdS isometry (31) acts on the boundary (z = 0) as inversion on the unit sphere, mapping the straight lines (24) and the circles (27) to each other. Therefore, the minimal surface in AdS approaching the two circles (27) is given by the image of (28) under the map (31), i.e. by (32). The regularised area A_ǫ, needed for the holographic evaluation of our Wilson loop, is then just the area of that part of (32) for which z > ǫ; its boundary is parameterised by σ and τ.

Based on the isometric character of the map (31), we prefer, as in [2], to calculate A_ǫ on the preimage (28). There the induced metric is independent of τ, and areas are given by

A = ∫ ( r_0² / r⁴(σ) ) dτ dσ .

To change the integration variable from σ to r one has to keep in mind that their relation is not one to one. Let σ(r) ≥ 0 be given by the integral in (29). Then the area splits into contributions from two integration regions B_ǫ^±, defined by the condition z > ǫ on the two branches of σ(r). Performing the trivial τ-integration (see [2]), we arrive at one-dimensional integrals over r, whose lower boundaries r_ǫ^± are defined as solutions of z(r_ǫ^±) = ǫ. The evaluation of these integrals for ǫ → 0, up to divergent and O(ǫ⁰) terms, is performed in appendix B. After applying some Γ-function arithmetic to the result (84), we get the small-ǫ expansion (40) of A_ǫ. The leading divergent term is due to the standard 1/ǫ divergence proportional to the length of the boundary contour. The next-leading 1/√ǫ divergence, whose coefficient M depends on β only via R_12(R_1, R_2, β) (for β = 0 this agrees with the formulas in [2], of course), is just twice the spike divergence analysed in [2]. After subtracting these divergences the remainder tends to zero for ǫ → 0, hence

A_ren = 0 .
Then via the holographic Wilson loop formula [6] we get, at large N and strong 't Hooft coupling λ = g²N,

$$\log W_{\rm ren} = 0\,.$$
(Note that in (40) the quantity M depends on β via R_12(R_1, R_2, β); for β = 0 this agrees with the formulas in [2], of course.) Before closing this section we have to mention a certain subtlety. There is still another potentially competing surface, the disconnected one (disconnected up to the touching point on the boundary of AdS), built out of the surfaces for the two single circles. First of all it is discriminated by the fact that the regularised
contour generated by cutting at z = ǫ is not connected. Furthermore, its regularised area is [4,7]

$$A_{\rm disconn} \;=\; \frac{2\pi(R_1+R_2)}{\epsilon} \;-\; 4\pi\,. \qquad (43)$$

For applications to the holographic evaluation of Wilson loops the common leading 1/ǫ-divergence is cancelled by a boundary term induced by a necessary Legendre transformation [4]. For small ǫ the disconnected surface is once more discriminated, since (43), both as it stands as well as after subtraction of the leading term, is larger than (40). However, it would win if the values of the finite pieces had to decide.
To my knowledge, so far this alternative has not played any role in papers studying the Gross-Ooguri phase transition [8], since there the competing areas had the same divergent parts. Only in a recent paper [9] on the cross anomalous dimensions was a comparison of areas with differing divergent terms relevant, and there the decisions were based also on the full regularised areas.
4 Comment on the ordinary Wilson loop
The ordinary (not supersymmetric) Wilson loop is given by (1) without the coupling of the contour to the scalars. According to the recipe for its holographic evaluation, as formulated in [10,11], at leading order in strong coupling it coincides with the supersymmetric Wilson loop as studied in the previous section.
To handle the leading order at weak coupling, we have to subtract the scalar contributions from those in section 2. The result is then valid both for the ordinary Wilson loop in N = 4 SYM and in QCD.
There are the two trivial terms with both points of the propagator on the same circle, given in (44). For the term connecting the two circles we get, after performing the φ_2-integration (to keep formulas short, we write down only the case β = 0), an expression which, after the change of integration variable indicated in (13), can be analysed analogously to appendix A, yielding (48). Note that both (44) and (48), beyond the divergent terms, contain no finite term remaining in the limit ǫ → 0. The QCD Wilson loop becomes (49). Then, after subtraction of the standard 1/ǫ divergence proportional to the length and of the QCD spike divergence [2], our final result for the renormalised Wilson loop is

$$\log W^{\rm QCD}_{\rm ren} \;=\; 0 + O(g^4)\,. \qquad (50)$$
5 Conclusions
In N = 4 SYM we obtained, for the locally supersymmetric as well as for the ordinary Wilson loop, in lowest nontrivial order both at weak and strong coupling,

$$\log W_{\rm ren} = 0\,. \qquad (51)$$

This result holds also at weak coupling for QCD. Furthermore, it is independent of the angle between the discs spanned by the circles. Because no logarithmic divergences showed up, it is free of any renormalisation group ambiguity.
Of course the main open question is whether this result is an accident of the lowest orders or whether it extends to all orders. In further work on higher orders one has to take into account also the mixing with the correlation function of the two Wilson loops for the single circles.
Using modifications of AdS proposed for holographic QCD, see e.g. [13] and references therein, it should be straightforward to get the strong coupling result for QCD.
In speculating about physical properties which could be related to our result, two candidates come to mind: zig-zag symmetry [3] and conformal invariance. Zig-zag symmetry means that a part of a contour which is backtracked contributes only a factor 1. Classically it is realised for ordinary Wilson loops, i.e. gauge parallel transporters, but it is violated for the locally supersymmetric loop due to the coupling to the scalars, which is not sensitive to the orientation. It is expected to hold in all orders of perturbation theory for ordinary Wilson loops, and there are arguments that for the locally supersymmetric loops it should be restored in the strong coupling limit [4].
Under this assumption, (51) holds as an all-order result in QCD for R_1 = R_2 and β = 0, i.e. the exact backtracking case. For R_1 > R_2 there is only local backtracking, and the Wilson loops for the single circles become different due to the scale dependence of the renormalised coupling constant.
On the other hand, in N = 4 SYM conformal symmetry is unbroken. The Wilson loop for a single circle is independent of its radius and known as an all-order result [14,15].
A last comment concerns the relation of our result to the symmetry breaking under conformal transformations which map one point of the contour to infinity. The seminal discussion of this issue in ref. [15] applies to cases where the respective point is on a smooth piece of the contour. In our case this point is just the singular point at the tip of the spikes, i.e. it is not of the type considered in [15], and one should not imperatively expect that their universal anomaly factor also governs the relation between the touching circles and anti-parallel straight lines. Some details for the comparison with the case of two anti-parallel lines are collected in appendix C.
Acknowledgement:
I would like to thank the Quantum Field and String Theory Group at Humboldt University for kind hospitality.
Appendix A
This appendix is devoted to the evaluation of the integral

$$J \;=\; \int_0^{B_\epsilon} \frac{h\; dx}{\sqrt{1+x^4}}$$

for ǫ → 0; h and B_ǫ are defined in (18)–(21) and (14), respectively.
We start with the integral J_0 where, compared to J, h is replaced by 1. It can be expressed in terms of the complete elliptic integral K of the first kind. (This representation has been derived for Euclidean contours; variations have been observed also for lightlike polygons [16].) K(y) is an analytic function near y = 1/2, and the deviation from 1/2 in the second line of (53) is proportional to ǫ, see (14). Then, expressing K(1/2) in terms of the Gamma function, we get (54). To proceed, we note that the square root factor in the definition of h in (18) allows a uniform estimate 1 + O(ǫ). The first factor does not, but is at least bounded in the whole integration interval. Let us define ĥ as well as Ĵ by dropping terms irrelevant for our analysis (of course, terms containing ǫ without a factor x², or ǫ³x², are also irrelevant). Now we split the integration over x in two pieces, Ĵ = Ĵ_lower + Ĵ_upper, with b > 0 a fixed number and a splitting point depending on a parameter α (concerning only Ĵ_lower, we could allow α even up to 1/2). Then the deviation of ĥ from 1 in Ĵ_lower is uniformly O(ǫ^{2α}), hence Ĵ_lower reduces to the corresponding piece of J_0 up to this accuracy. For the estimate of Ĵ_upper we use the boundedness just mentioned to get, with (57) and (60), the expressions (62) and (64). Adding (62) and (64) we can reinstall the factor 1/√(1+x⁴) instead of 1/x² in the first term on the r.h.s. of (64) and arrive, with (60), (53) and (59), at (66). The integrals in both V_1 and V_2 can be expressed in terms of inverse trigonometric functions, and after some algebra we get a closed expression.
This implies
Inserting this in (66) and using (54) as well as (61), we arrive at the result (70). (By o(√ǫ) we denote terms vanishing faster than √ǫ.)
Appendix B

This appendix contains the evaluation, for ǫ → 0, of the area integrals of section 3. The expansion of these integrals produces coefficients b_1, b_2, b_3, given by different hypergeometric ₅F₄'s of argument δ⁴ times some numerical factors. With (74) this implies (j(+) = 2, j(−) = 1) the expressions (75). Concerning the estimate of A^±_{ǫ,rem}, we noted in [2] that f^±(r, r^±_ǫ) is bounded for ǫ → 0, uniformly with respect to r. This allowed us to conclude (78). But we can be more efficient. Expanding the square root in (77) and using again the uniform boundedness of f^±(r, r^±_ǫ), we get (79). The integral without the factor f^±(r, r^±_ǫ) would diverge for ǫ → 0 according to (78). But due to the behaviour for small ǫ, and near the lower boundary of the integral, it remains finite. This means that A^±_{ǫ,rem} stays finite, and together with (79), (75) and (34) one obtains (84). Here use has been made also of the relations between r_0, L and the curvature difference |k_1 − k_2|, i.e. (30), (26) and (16).
Appendix C
Here we collect some details for the comparison of the two touching circles with two antiparallel straight lines. Performing the trivial integrations for lines at distance L one gets the expression (85). To control the infrared problem, the integration has been restricted to straight lines of length l, with the goal l → ∞. Contrary to the treatment of ultraviolet divergences, there is no recipe to give the Wilson loop for infinitely extended contours a finite meaning per se. Nevertheless it is the source of a meaningful physical quantity, the static quark-antiquark potential, via V(L) = lim_{l→∞} W_parallel/l. Thus this potential is just given by the coefficient of the linear infrared divergence. The Wilson loop for the touching circles follows from (5), (7), (16) and (22). Our ultraviolet regularisation parameter ǫ, as used in chapter 2, mimics a universal cutoff in the distance between the two endpoints of the propagator. The special situation near the touching point of the two circles could be regularised also by restricting the integrations to the image under inversion of the two straight lines of finite length l. Then the minimum of the allowed propagator distances would be

$$\min|x_1 - x_2| \;=\; \frac{2R_{12}}{\sqrt{\big(1 + l^2 R_1^2\big)\big(1 + l^2 R_2^2\big)}}\,.$$

Identifying this minimum with ǫ one finds, starting from (85), the spiky ultraviolet 1/√ǫ divergence as an image of the linear infrared divergence. But invariance under inversion is broken, resulting in different numerical coefficients. Furthermore, there are different finite terms and no logarithmic divergence for the circles.
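As a consistency check of the identification just described (a sketch, using the reconstructed form of the minimal distance given above), for large l one has

$$\epsilon \;\approx\; \frac{2R_{12}}{R_1 R_2\, l^2}\,, \qquad\text{i.e.}\qquad l \;\approx\; \sqrt{\frac{2R_{12}}{R_1 R_2\,\epsilon}} \;\propto\; \frac{1}{\sqrt\epsilon}\,,$$

so a divergence linear in l indeed translates into a 1/√ǫ divergence.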
Of course, this interplay between the IR for straight lines and the UV for the circles holds also at strong coupling. It is illustrated in an eye-catching manner in figure 3 of [2]. But due to symmetry breaking, also here the coefficients require independent calculations.
"year": 2019,
"sha1": "c2b3670057da2dbdc094ebf598bd53d5c89833a4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1811.00799",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7eb82821a985e14d3c28d90673d3a62254c477e5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Evaluation of the Wind Power in the State of Paraíba Using the Mesoscale Atmospheric Model Brazilian Developments on the Regional Atmospheric Modelling System
This work aims to describe the wind power density at five sites in the State of Paraíba, as well as to assess the ability of the mesoscale atmospheric model Brazilian developments on the regional atmospheric modeling system (BRAMS) to describe the wind intensity in São Gonçalo, Monteiro, Patos, Campina Grande, and João Pessoa. The observational data are wind speed and direction at a height of 10 m, provided by the National Institute of Meteorology (INMET). We used the numerical model BRAMS in simulations for two rainy months, March and April. It was concluded that the BRAMS model is able to satisfactorily reproduce the monthly cycle of the wind regime considered, as well as the main wind direction; however, the model tends to underestimate the wind speed.
Introduction
As the use of wind power grows around the world, new technologies of generators and topologies for wind power plants have been created in order to improve the utilization of energy from the wind and its transmission. Numerical weather forecast models are widely used in meteorological centers and find a range of applications in agriculture, water resources, tourism, and so forth. Forced by data from global models, it is common for local meteorological centers to maintain numerical forecast systems based on limited-area atmospheric models, typically with spatial resolutions of kilometers.
Some research related to wind behavior concentrates on the problem of fitting statistical distributions to wind speed data ([1,2] and others). The results of this research indicate the Weibull distribution as the one that best fits such data.
According to Sauer et al. [3], Brazil offers excellent sites for installing wind parks, and the best areas are found along its coast. However, they indicate that in the interior, particularly in the northeast, where the State of Paraíba is located, there are also sites with wind power generation capacity.
Various mesoscale numerical models, such as the regional atmospheric modeling system (RAMS) described in Cotton et al. [4], the regional spectral model (RSM) described in Juang and Kanamitsu [5], and MM5 described in Dudhia et al. [6], resolve physical processes from the surface to the upper atmosphere. These models are applied to problems ranging from weather forecasting to the modeling of pollutant dispersion.
Among these is the Brazilian developments on the regional atmospheric modeling system (BRAMS) model, developed from RAMS, whose basic structure is described by Pielke et al. [7], Walko et al. [8], and Cotton et al. [4]; an objective and brief description of this model can be found in Cavalcanti [9]. This model has a complete and sophisticated set of physical parameterizations to simulate the leading processes of the evolution of the atmospheric state, and its code contains various options of physical parameterizations: radiation as proposed by Chen and Cotton [10]; deep convection of Kuo type [11,12], modified by Molinari [13] and Molinari and Corsetti [14], or of the type by Grell and Dévényi [15]; and shallow convection as developed and implemented by Souza [16]. Turbulence in the planetary boundary layer is calculated according to Mellor and Yamada [17], and microphysics follows the scheme described by Walko et al. [8].
In its most recent version the model includes a parameterization for photochemical processes in the atmosphere [18], besides an adequate treatment of urban areas by means of the town energy budget (TEB) scheme, introduced in version 4.3 of the RAMS model [19].
The general objective of this study is to evaluate the capacity of the numerical model BRAMS to simulate wind fields, aiming at the evaluation of wind power in the State of Paraíba. Thus, an evaluation of the wind power in the regions of São Gonçalo, Patos, Monteiro, Campina Grande, and João Pessoa, in the State of Paraíba, will be made using observed data and data simulated by the regional model BRAMS, in order to generate primary maps of the wind power of the region.
Material and Methods
The data used in this research are hourly observations of wind direction and speed, collected at five surface meteorological stations located in the State of Paraíba, belonging to the Northeast Climatological Network (Rede Climatológica do Nordeste) managed by the National Institute of Meteorology (Instituto Nacional de Meteorologia, INMET). The sensors are located at a height of 10 meters. The locations of the stations and their respective latitudes, longitudes, and heights are shown in Table 1.
Numerical simulations were performed using the BRAMS model, aiming to evaluate the wind power in the State of Paraíba in comparison with the observational data. Two periods were established, March and April of 1977 and 1981, corresponding to the seasons of the year with lower wind intensity, that is, the rainy period.
In wind simulations with BRAMS aimed at wind power estimation it is necessary to use high resolution. Thus, two grids were adopted. The main grid is rectangular, with 80 points in the x direction and 80 points in the y direction and a spacing of 16 km between grid points, corresponding to an area that covers almost all of the northeast region: Ceará, Rio Grande do Norte, Paraíba, Pernambuco, Alagoas, Sergipe, and part of Bahia. The nested grid, located within the limits of the small rectangle shown in blue, has 122 points in the x direction and 66 points in the y direction, with a spacing of 4 km, corresponding to the region of Paraíba. The output frequency of the model analyses was standardized to hourly intervals, and both grids have 9 soil layers. Figure 1 presents a view of the grids. Both grids use a polar stereographic projection and share the same vertical structure, which consists of 42 levels with a stretching ratio of 1.2 and a maximum spacing of 1000 m. Lateral Newtonian relaxation is activated using 5 points, with a time scale of 1800 s, or 30 min, kept constant for the whole simulated period. The top Newtonian relaxation has a time scale of 21,600 s, or 6 h, and the Newtonian relaxation in the interior of the domain uses the same value as at the top. For the radiation parameterization the Chen-type scheme is used. The microphysics parameterization used is level 2 of the model. The convection parameterization was also activated, with the Kuo-type scheme [11,12,21] chosen, and for the turbulence parameterization the Mellor-Yamada scheme was chosen.
To adjust the data obtained by BRAMS (representative of a grid area) to the observed (point) data, a statistical method was used, following Reis Junior [22], which is based on the means and standard deviations of the observed and simulated series:

$$\phi_i^{\rm corr} \;=\; \bar\phi_o \;+\; \frac{\sigma_o}{\sigma_i}\,\big(\phi_i - \bar\phi_i\big)\,, \qquad (1)$$

where φ_i represents a value of the simulation, φ̄_i the mean of the simulated values, σ_o the standard deviation of the observed series, σ_i the standard deviation of the simulated series, and, finally, φ̄_o represents the mean of the observational data. From the studies of Weber et al. [23], Maria [24], and Cunha [25], it is concluded that the best way to evaluate the model is to use a set of statistical indexes, aiming to minimize interpretation mistakes. For this reason, in this work a set of three statistical indexes is used: mean absolute error, mean square error, and correlation coefficient.
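As an illustration, the correction (1) can be sketched in a few lines of Python (the variable names are ours, not the authors'):

    import numpy as np

    def variance_correction(simulated, observed):
        """Mean-variance rescaling of a simulated series, eq. (1)."""
        sim = np.asarray(simulated, dtype=float)
        obs = np.asarray(observed, dtype=float)
        # Match the simulated series' mean and spread to the observed ones.
        return obs.mean() + (obs.std() / sim.std()) * (sim - sim.mean())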
The mean absolute error (EA) is given by

$$EA \;=\; \frac{1}{N}\sum_{i=1}^{N}\big|\phi_i - \phi_{o,i}\big|\,.$$

By definition, EA can only take positive values; thus, the smaller its value, the greater the similarity between the series. The mean square error (EQM) is built from the sum of the squares of the differences between the results of the model and the observations:

$$EQM \;=\; \sqrt{\frac{1}{N}\sum_{i=1}^{N}\big(\phi_i - \phi_{o,i}\big)^2}\,.$$

It can take any positive value and has the same units of measurement as the series. The similarity between the simulated and observed series is greater the closer the error measure is to zero. Table 2 gives a more refined interpretation of the Pearson correlation coefficient.
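A minimal sketch of these indexes in Python (we take EQM with the square root, so that it carries the same units as the series, as stated above; this reading is an assumption):

    import numpy as np

    def mean_absolute_error(sim, obs):
        return np.mean(np.abs(np.asarray(sim) - np.asarray(obs)))

    def mean_square_error(sim, obs):
        # Root of the mean squared difference, in the units of the series.
        return np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2))

    def correlation(sim, obs):
        return np.corrcoef(np.asarray(sim), np.asarray(obs))[0, 1]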
The value of a correlation coefficient is not a guarantee that the variables involved are really correlated; before any conclusion about the values of the correlation coefficients, the application of a statistical test is necessary in order to know the real degree of relation between the variables analyzed. In order to test the equality between two means, Student's t-test is largely used [26,27]. The test of significance t by Student was applied in the form

$$t \;=\; \frac{r\,\sqrt{N-2}}{\sqrt{1-r^2}}\,, \qquad (4)$$

where N is the number of data. From (4) the critical correlation coefficient (r_c) was extracted, that is, the value at which the statistical hypothesis is accepted or not:

$$r_c \;=\; \frac{t}{\sqrt{t^2 + N - 2}}\,.$$

In this work the variables used have series of 31 and 30 days, respectively, and the critical correlation indexes r_c are calculated accordingly. For the correlation with N = 31, that is, 31 days corresponding to the month of March, there are N − 2 = 29 degrees of freedom, and the values of t and r_c are: (a) for 99% significance, that is, with an error of 1% (α = 0.01), t = 2.462 and r_c = 0.42; (b) for 95% significance, that is, with an error of 5% (α = 0.05), t = 1.699 and r_c = 0.30; (c) for 90% significance, that is, with an error of 10% (α = 0.10), t = 1.311 and r_c = 0.24.
For the correlation with N = 30, that is, 30 days corresponding to the month of April, there are N − 2 = 28 degrees of freedom, and the values of t and r_c are: (i) for 99% significance (α = 0.01), t = 2.467 and r_c = 0.42; (ii) for 95% significance (α = 0.05), t = 1.701 and r_c = 0.30; (iii) for 90% significance (α = 0.10), t = 1.313 and r_c = 0.24.
This means that, for correlation coefficients obtained with 29 and 28 degrees of freedom, the statistical significance of the correlation between the variables is 99%, 95%, and 90% for r equal to or greater than 0.42, 0.30, and 0.24, respectively.
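These critical values are easy to verify numerically (a sketch; scipy supplies the one-tailed critical t, and this is our own check, not the authors' code):

    import numpy as np
    from scipy import stats

    def critical_correlation(alpha, n):
        """Critical Pearson coefficient r_c for significance level alpha
        and n data points (df = n - 2)."""
        df = n - 2
        t = stats.t.ppf(1.0 - alpha, df)
        return t / np.sqrt(t**2 + df)

    for alpha in (0.01, 0.05, 0.10):
        print(round(critical_correlation(alpha, 31), 2))
    # Prints 0.42, 0.30, 0.24, matching the values quoted above.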
The result obtained may suggest the acceptance or not of the hypothesis of a null coefficient. If the calculated correlation coefficient is equal to or greater than the critical value for a given number of degrees of freedom and percentage of significance, the null hypothesis is rejected, and the observed trend is true at that degree of significance [27,28]. In the figures discussed below, the observed data and the data simulated by the model with statistical correction are presented.
Results of the Simulations
In Figure 2 a correlation between the simulated and observed mean speed values is observed. In general, for the city of São Gonçalo the model overestimates the observed data series in the year 1977 (Figure 2). In Figure 4, for the city of Patos, it is observed that the simulated data follow a well defined monthly cycle, where it can be seen that, for the months of March and April of 1977 (Figure 4), the simulated and corrected speed values accompany the observed ones throughout this period; this may happen due to the reconfiguration of the boundary layer during the rainy period. In Figure 5, related to Campina Grande, it can be seen that in March 1977 there is an interruption in the cycle when compared with the month of April (Figure 5). In Figure 6, for João Pessoa, the lack of a well defined cycle is observed for the month of March 1977, together with a monthly cycle a little better defined for the month of April of the same year (Figure 6). Aiming to evaluate the performance of the simulations against the observed data, the wind direction in the observational period and in the simulation period was also examined, to identify whether the model simulated the studied sites well with respect to this variable. In Figure 7 it is observed that, over the period studied, 1977 and 1981, the wind was predominantly easterly, varying from 45° to 135°, that is, from northeast to southeast. In the north direction, a high percentage is observed for the São Gonçalo station, probably due to problems in obtaining the data, or to calm conditions. The highest percentages of wind speed occurred in the range from 3.6 m/s to 5.7 m/s. A resemblance can be observed between the simulated and observed data, especially in 1981. In Figure 8, it can be observed that the wind was predominantly easterly, varying from 45° to 135°, that is, from northeast to southeast, as observed in São Gonçalo. In the north direction a high percentage is verified for the observational data for the year 1981 (Figure 8(c)), probably due to problems in obtaining data or to calm conditions. The highest percentages of wind speed occurred in the ranges from 3.6 m/s to 5.7 m/s and from 5.7 m/s to 8.8 m/s. A resemblance between the simulated and observed data can be observed; that is, at both the São Gonçalo and Monteiro stations the model seems to have simulated the wind direction well.
In Figure 9, it is observed that the wind was predominantly easterly, varying from 90° to 135°, that is, from east to southeast. In the north direction, a high percentage is observed for the observational data for the year 1977 (Figure 9(a)). The highest percentages of wind speed occurred in the ranges from 2.1 m/s to 3.6 m/s and from 3.6 m/s to 5.7 m/s, and a resemblance between the simulated and observed data can be observed.
Figure 10 shows the Campina Grande station, where the wind was predominantly easterly, varying from 45° to 135°, that is, from northeast to southeast, for the observational data (Figures 10(a) and 10(c)), while for the data simulated by the model the wind was predominantly southeasterly, varying from east to southeast. The highest percentages of wind speed occurred in the ranges from 2.1 m/s to 3.6 m/s, from 3.6 m/s to 5.7 m/s, and from 5.7 m/s to 8.8 m/s. A resemblance between the simulated and observed data can be observed, bearing in mind that the observational data only assume exact values, such as 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°, and 360°, while the model can produce any value between 0° and 360°.
Figure 11 shows the João Pessoa station, where the wind was predominantly southeasterly, varying from 90° to 180°, that is, from east to southeast. The highest percentages of wind speed occurred in the ranges from 2.1 m/s to 3.6 m/s, from 3.6 m/s to 5.7 m/s, and from 5.7 m/s to 8.8 m/s. A resemblance between the simulated and observed data can be observed. It is important to highlight that, in order to have a better notion of the adjustment between the observational data and the data simulated by the model, a quantitative analysis of the data is necessary; thus, it is necessary to use the statistical indexes described in the methodology of this research.
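The direction-speed statistics described above amount to a simple two-dimensional binning; a sketch follows (the sector width and speed classes follow the text, while the input arrays are hypothetical):

    import numpy as np

    def wind_rose(direction_deg, speed_ms):
        """Percentage of records per 45-degree sector and speed class."""
        sectors = np.arange(0.0, 360.0 + 45.0, 45.0)
        speed_bins = [0.0, 2.1, 3.6, 5.7, 8.8, np.inf]
        counts, _, _ = np.histogram2d(direction_deg, speed_ms,
                                      bins=[sectors, speed_bins])
        return 100.0 * counts / counts.sum()

    rng = np.random.default_rng(0)
    d = rng.uniform(45.0, 135.0, 1000)   # predominantly easterly records
    v = rng.uniform(2.1, 8.8, 1000)
    print(wind_rose(d, v).round(1))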
Figures 12 and 13 show maps of the State of Paraíba for the mean wind power density, with the locations of the stations used, for the period considered rainy in the region, that is, the months of March and April (Figure 12), and for the dry period, that is, the months of September and October (Figure 13), in the years 1977 and 1981.
It can be observed in Figure 12 that the highest values of wind power density were found in the mesoregion of the Planalto da Borborema. Figure 13 shows the configuration of the State of Paraíba for the wind power density in W/m². As expected, the period considered dry yielded the best configuration for wind power in the region relative to the period considered rainy; that is, the results for the months of September and October were superior to those observed in the months of March and April, as can be seen by comparing Figure 12 with Figure 13. The best sites for wind power density were again in the Planalto da Borborema, as observed in the previous graphic, but with some focus on the coast of the State, where the João Pessoa station is located, and on the Sertão region of the State of Paraíba, where the São Gonçalo and Patos stations are located.
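The paper does not spell out its formula for the wind power density; assuming the standard definition based on the cube of the wind speed and a sea-level air density, a sketch is:

    import numpy as np

    def wind_power_density(speed_ms, rho_air=1.225):
        """Mean wind power density in W/m^2: (1/2) * rho * <v^3>."""
        v = np.asarray(speed_ms, dtype=float)
        return 0.5 * rho_air * np.mean(v**3)

    print(wind_power_density([3.6, 4.5, 5.7]))   # about 66 W/m^2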
Conclusions
It was concluded that the numerical simulations for estimating the wind speed at a height of 10 m above the ground showed, in general, a satisfactory performance, with a good relation between the simulated data series and the observational data, as seen in the statistical indexes, which showed high correlation for the rainy period; that is, the model simulated well the period considered rainy in the region.
In all cases, the model had difficulty in reproducing variations on short time scales. Thus, it can be concluded that local factors are not well represented in the model, which could be corrected with the use of a microscale model, as well as with the use of surface data from sources different from the ones used in this research, with better spatial resolution and higher quality.
For further work in this line of investigation, wind power forecasting, the use of a microscale model such as WAsP is proposed, in order to detect the phenomena that occur in short time intervals, as well as the use of better surface data in the BRAMS model, redoing the simulations and extending the simulation period. Large-scale data from various sources can also be used to initialize the model.
"year": 2012,
"sha1": "5e5ed13490bccb96e2b45c3349d43e1e26447e7a",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/archive/2012/847356.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "366f86cd11a07413a88ea488773a575680808341",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Subdiffusion in a 1D Anderson insulator with random dephasing: Finite-size scaling, Griffiths effects, and possible implications for many-body localization
We study transport in a one-dimensional boundary-driven Anderson insulator (the XX spin chain with onsite disorder) with randomly positioned onsite dephasing, observing a transition from diffusive to subdiffusive spin transport below a critical density of sites with dephasing. This model is intended to mimic the passage of an excitation through (many-body) insulating regions or ergodic bubbles, therefore providing a toy model for the diffusion-subdiffusion transition observed in the disordered Heisenberg model [1]. We also present the exact solution of a semiclassical model of conductors and insulators introduced in Ref. 2, which exhibits both diffusive and subdiffusive phases, and qualitatively reproduces the results of the quantum system. The critical properties of both models, when passing from diffusion to subdiffusion, are interpreted in terms of "Griffiths effects". We show that the finite-size scaling comes from the interplay of three characteristic lengths: one associated with disorder (the localization length), one with dephasing, and the third with the percolation problem defining large, rare, insulating regions. We conjecture that the latter, which grows logarithmically with system size, may potentially be responsible for the fact that heavy-tailed resistance distributions typical of Griffiths effects have not been observed in subdiffusive interacting systems.
I. Introduction
It has long been known that non-interacting quantum systems possess a transportless phase in the presence of sufficiently strong disorder, a phenomenon known as Anderson localization [3][4][5]. More recent work on one-dimensional, interacting quantum systems with strong disorder suggests the existence of a many-body localized (MBL) phase in which all quasiparticle transport is suppressed and entanglement spreads only logarithmically fast in time [6][7][8][9][10][11][12]. In the thermalizing phase, preceding the localization transition at weak disorder, several numerical studies have found evidence for anomalous subdiffusive transport of particles [1,2,13,14] and energy [15][16][17], and in general a violation of the Wiedemann-Franz law.
Despite widespread interest [14,[18][19][20][21][22][23][24][25][26], the debate on the microscopic origin of this subdiffusion is still not definitively settled. The prevailing theory, first proposed in Ref. 2, is that subdiffusion is caused by "Griffiths effects", where rare regions of exceptionally strong disorder result in bottlenecks that slow down transport. The phenomenological picture in the case of DC transport is that the system may be modelled by a chain of independent random resistors with resistances r_i distributed as P(r) ∝ r^{−ν} at large r. For 1 < ν ≤ 2 the average of r diverges and the total resistance, given by the sum of individual resistances $R = \sum_{n=1}^{L} r_n$, no longer has a well defined average. In this regime R is dominated by the largest r_n in the chain, and so the typical value of the total resistance scales as R ∝ L^{1/(ν−1)}, indicating a breakdown of Ohm's law, R ∝ L [27]. However, while there is evidence of Griffiths effects in the structure of slow operators in the subdiffusive phase [28], the essential ingredient of heavy-tailed resistance distributions has not been observed in numerical studies of large systems, casting doubt on this as the true origin of subdiffusion in the paradigmatic toy model for MBL, the Heisenberg spin chain [14]. Griffiths effects are also a key feature in theories of the MBL transition and its critical properties, with thermalization proposed to result from a runaway growth of thermal inclusions [29][30][31][32][33][34][35][36][37][38][39][40][41].
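This scaling argument is easy to reproduce numerically (a minimal sketch; we draw the resistances from a Pareto law with the stated tail):

    import numpy as np

    rng = np.random.default_rng(0)
    nu = 1.5                            # 1 < nu <= 2: diverging mean of r
    for L in (10**3, 10**4, 10**5):
        # np.random's pareto(a) has tail P(r > x) ~ x^(-a), i.e. a
        # density ~ r^(-nu) at large r for a = nu - 1.
        r = rng.pareto(nu - 1.0, size=(100, L)) + 1.0
        print(L, np.median(r.sum(axis=1)))
    # The typical total resistance grows roughly as L^(1/(nu-1)) = L^2.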
In this paper we introduce a microscopic quantum system with a diffusion-subdiffusion transition consistent with the Griffiths effects picture: the disordered XX spin chain with random onsite dephasing. This model is an Anderson insulator in the absence of the dephasing terms, equivalent to a system of non-interacting particles hopping on a disordered lattice. We also present a solvable semiclassical model of conductors and insulators, possessing a subdiffusive phase driven by Griffiths effects, that captures the essential physics of the microscopic model. This model is an example of the random-resistor systems introduced in Ref. 2 and discussed above.
We show that the finite-size corrections to the asymptotic behavior of the microscopic model (be it diffusive or subdiffusive) are regulated by the interplay of three characteristic lengths: a dephasing length, a localization length, and the size of the largest insulating clusters. We discuss the interplay of these lengths by making use of a resistance beta function, and we show how the interacting case, the Heisenberg model with disorder discussed in Ref. 1, exhibits a similar phenomenology. This makes our results relevant for the study of the MBL transition, and potentially offers a resolution of the discrepancy between some of the predictions of the model of subdiffusion presented in Ref. 2 and the distributions observed in the more recent Ref. 14. We discuss how our microscopic model could loosely mimic a many-body localizable system with Griffiths effects, using the random dephasing as a controllable substitute for the dissipation caused by interactions, although naturally our non-interacting model cannot capture a MBL transition.
The paper is organized as follows: in Section II we present the microscopic model with numerical results, including a discussion of its relevance to MBL systems and an analysis of finite-size effects; in Section III we explore the semiclassical model both analytically and numerically; and we discuss our conclusions in Section IV.
II. Random dephasing model
The model we consider is the one-dimensional disordered XX chain, driven at the boundaries, with random onsite dephasing. This system has the Hamiltonian

$$H \;=\; \sum_{n=1}^{L-1}\big(\sigma^x_n\sigma^x_{n+1} + \sigma^y_n\sigma^y_{n+1}\big) \;+\; \sum_{n=1}^{L} h_n\,\sigma^z_n\,,$$

where σ^µ_n are Pauli matrices and h_n ∈ [−W, W] are independent, uniformly distributed random variables. The Jordan-Wigner transformation maps this Hamiltonian exactly to non-interacting spinless fermions hopping on a disordered lattice [42], and a spin current in the XX model corresponds to a particle current in the fermionic language. The driving and dephasing are described by the Lindblad master equation

$$\dot\rho \;=\; -i[H,\rho] \;+\; \sum_k\Big(L_k\,\rho\,L_k^\dagger - \tfrac12\big\{L_k^\dagger L_k,\rho\big\}\Big)\,.$$

The spin current is driven by jump operators of strength Γ acting on the boundary spins, injecting magnetization at one end of the chain and removing it at the other, and the onsite dephasing is described by the jump operators

$$L_n^{\rm deph} \;=\; \sqrt{\gamma_n}\;\sigma^z_n\,,$$

where for each site γ_n = 0 with probability p and γ_n = γ with probability 1 − p.
Similar setups have been used to study transport in both non-interacting [14,[43][44][45] and interacting [1,14,16,17,46,47] quantum systems. After solving the Lindblad equation to find the non-equilibrium steady state (NESS) for a given realization of the disorder and dephasing, one can calculate the spin current j_∞ and in turn the resistance R ∝ 1/j_∞. The spin current from site n to site n+1 is given by the expectation value of the operator j_n = 2(σ^x_n σ^y_{n+1} − σ^y_n σ^x_{n+1}), as defined by the continuity equation for the local magnetization, and is independent of n in the NESS. The nature of the transport can then be determined by the scaling of the typical resistance with the system size, R ∝ L^β, where β = 1 indicates diffusion and β > 1 indicates subdiffusion (localization is signalled by R ∝ exp(L/ξ), with ξ the localization length, implying a divergence of β). Similarly, as discussed earlier, the distribution of resistances can reveal the mechanism for subdiffusion, with the Griffiths effects picture necessarily implying the existence of heavy-tailed distributions.
The advantage of studying this non-interacting model is that the NESS current can be found exactly by manipulating matrices with dimensions equal to the system size L, rather than 4^L as would be the case with the full many-body state space. This allows for the efficient numerical solution of large systems with L ∼ 1000, without the need for approximations based on matrix-product operator methods [43,44,48]. Details of the numerical method can be found in Appendix A. These large system sizes are essential when studying transport and localization phenomena in disordered quantum systems due to strong finite-size effects [1,[49][50][51], and are beyond what is achievable in interacting systems even using approximate methods.
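For small L the NESS can also be checked by brute force on the full many-body space; the following QuTiP sketch (our own illustration, not the O(L) method used in the paper, and with maximal boundary driving assumed as a convention) solves for the steady state and evaluates the spin current:

    import numpy as np
    from qutip import (tensor, qeye, sigmax, sigmay, sigmaz,
                       sigmap, sigmam, steadystate, expect)

    L, W, Gamma, gamma, p = 4, 3.0, 1.0, 0.2, 0.5
    rng = np.random.default_rng(1)

    def op(single, n):
        """Embed a single-site operator at site n of the L-site chain."""
        ops = [qeye(2)] * L
        ops[n] = single
        return tensor(ops)

    h = rng.uniform(-W, W, size=L)
    H = sum(op(sigmax(), n) * op(sigmax(), n + 1)
            + op(sigmay(), n) * op(sigmay(), n + 1) for n in range(L - 1))
    H += sum(h[n] * op(sigmaz(), n) for n in range(L))

    c_ops = [np.sqrt(Gamma) * op(sigmap(), 0),      # inject at the left
             np.sqrt(Gamma) * op(sigmam(), L - 1)]  # extract at the right
    for n in range(L):
        if rng.random() > p:                        # dephasing with prob. 1 - p
            c_ops.append(np.sqrt(gamma) * op(sigmaz(), n))

    rho = steadystate(H, c_ops)
    j = 2 * (op(sigmax(), 0) * op(sigmay(), 1)
             - op(sigmay(), 0) * op(sigmax(), 1))
    print("NESS spin current:", expect(j, rho))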
In the limit of no dephasing, p = 1, the system is Anderson localized (i.e. an insulator), and the resistance grows with system size as R ∝ e^{L/ξ} [3,14,44]. In the opposite limit with dephasing on every site, p = 0, the system is a diffusive conductor with R ∝ L [43,44]. For intermediate p, the system is made up of a series of these insulating and conducting regions, and as p becomes large there will be an increasing number of long insulating segments. This results in regions of the system with exponentially large resistances, and one might therefore expect subdiffusive transport as described by the Griffiths effects picture. We explore this argument more thoroughly in Section III.
The interplay of conducting and insulating inclusions has been the focus of numerous works, including studies of how a single ergodic region can thermalize an otherwise localized system [30][31][32], and renormalization group studies of the MBL transition [33][34][35][36][37][38][39][40][41]. In another work, subdiffusion due to Griffiths effects was studied in a toy model where a collection of Anderson insulators were coupled by random matrices [52]. To the best of the authors' knowledge, we are presenting the first exact analysis of transport in a large quantum system with many conducting and insulating regions, and by employing an open setup we can directly access DC transport properties as studied in similar works on interacting systems [1,14,16,17].
In our numerical study we use the parameters Γ = 1 and γ = 0.2, and for a fixed disorder strength W = 1, 2, 3, 4 we vary the dephasing fraction p to probe the different regimes of transport. For each parameter combination we sample many realizations of the disorder and dephasing (a minimum of 5,000 realizations for L < 256, 500 for L ≥ 256, and 200 for L = 1024), and we ensure that at least 95% of the realizations converge to the correct NESS. We define a beta function ∂ln R/∂ln L and also perform numerical fits to the median resistance R(L) to determine the asymptotic scaling exponent β, including finite-size corrections. We examine the finite-size flow of β using this beta function, and we also compute it for the interacting XXZ model studied in Ref. 1 (the admittedly noisy data are extracted from that paper and are presented in Fig. 2). We find that different finite-size corrections match the data more accurately in different parameter regimes. To study the finite-size flow to the asymptotic functional form, R = aL^β, we define x = ln L and y = ln R, and we use a fit of the form y = a + βx + b/x for W > 1 and R = a(1 + b/L)L^β for W = 1 (in both cases these forms outperform a simple fit to R = aL^β with the smallest system sizes omitted). These regimes are summarized in Table I. Applying different fits can change the values of β and the location of a potential transition to subdiffusion.
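Since y = a + βx + b/x is linear in the fit parameters, these fits reduce to a least-squares solve (a sketch with synthetic input arrays):

    import numpy as np

    def fit_with_correction(L, R):
        """Fit ln R = a + beta*ln L + b/ln L; return the exponent beta."""
        x, y = np.log(L), np.log(R)
        A = np.column_stack([np.ones_like(x), x, 1.0 / x])
        (a, beta, b), *_ = np.linalg.lstsq(A, y, rcond=None)
        return beta

    L = np.array([64, 128, 256, 512, 1024], dtype=float)
    R = 0.5 * L**1.4 * np.exp(2.0 / np.log(L))   # synthetic subdiffusive data
    print(fit_with_correction(L, R))             # recovers beta = 1.4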
A. Relationship with the interacting model
A key question is how the physics of this dephasing model is relevant to subdiffusion in interacting systems such as the disordered Heisenberg model. In a many-body localizable system, Griffiths effects would be generated by complicated interactions between particles, and the presence or absence of rare insulating regions could only be inferred by measurements of related physical observables. In the random dephasing model the insulating and thermal regions are introduced in a simple and controlled way by means of dephasing operators (see the explanation below), so this study may provide insight into the nature and origins of finite-size effects that one might observe in an interacting system with Griffiths effects.
In order to build an approximate mapping, consider that in the absence of dephasing the XX chain is simply the Heisenberg model in the limit of no interactions, so single-particle excitations move independently from one another. When the interactions are included, apart from a renormalization of the hopping and of the value of the disorder (which do not qualitatively change the motion in one dimension), from the point of view of a single excitation a qualitatively new phenomenon can occur: if the particle goes through an "ergodic bubble" (a cluster of sites that is locally thermal) [38,41,53] it dephases and can exchange energy with its surroundings, while if it passes through a localized region it does not dephase and effectively propagates without disturbance (see Fig. 1).

FIG. 1. A toy model connecting our random dephasing model with the theory of a particle propagating on a random background, in a sequence of "ergodic bubbles" and localized regions (the blue and pink regions respectively). In the dephasing model, whether a site belongs to an ergodic or localized region is chosen randomly with probability p, which defines the distribution of sizes of the ergodic regions.

By "dephasing" here we mean that the particle acquires a random phase which depends on the state of the other particles in the system at the moment when the particle passes through the bubble. If we average over the random
phase, we go from a unitary to a Lindblad equation of the kind studied in the present paper.As we will show in this paper, the dephasing needs to exceed a certain threshold, which depends on the size of the ergodic bubbles and the strength of the disorder, in order to turn a localized particle into a delocalized one.
Of course, in the full interacting model the problem has to be treated self-consistently: the rate of dephasing, the strength of the disorder, the size of the ergodic bubbles, and the effective hopping all depend on a few microscopic quantities defining the model (one in the Heisenberg model: W/J). The situation we have here, with random phases but the particles localized everywhere, is not self-consistent: localization appears together with the disappearance of dephasing, as is found in the distribution of the imaginary parts of the self-energies [6].
With this toy picture in mind, we see that on the thermal side of the MBL transition, the relatively small insulating regions of strong disorder in an otherwise thermal background are the cause of the subdiffusive transport (in accord with the Griffiths effects hypothesis). We may, in this light, reexamine results from earlier studies on interacting disordered quantum systems for comparison with the random dephasing model.
Work on the scaling of resistance with system size in the disordered Heisenberg model has failed to find definitive evidence for Griffiths effects being the cause of subdiffusive transport (i.e. subdiffusive scaling of the resistance was observed but the resistance distributions did not have heavy tails) [14]. This may be due to strong finite-size effects, as it is known that large systems are required to observe the asymptotic transport properties in interacting systems [1]. In Ref. 14 it was shown that accurately simulating subdiffusive dynamics requires high bond dimensions in the time-evolving block decimation (TEBD) algorithm, so characterizing the subdiffusion in a large system has a restrictively high computational cost. These TEBD studies on subdiffusion in large systems [1,14,16,17] are limited to L ≲ 100 in the subdiffusive phase, with the maximum achievable L decreasing as the disorder strength increases and the transport becomes slower. We will also find subdiffusive resistance scaling without heavy-tailed resistance distributions in some parameter regimes of the random dephasing model.
We will also observe similarities between the finite-size scaling of the resistance in the interacting system and in our dephasing model. The scaling properties of the NESS current with system size in the thermalizing phase of the disordered Heisenberg spin chain were first presented in Ref. 1, and we have reproduced the results in Fig. 2 (we maintain the convention from the original paper of stating disorder strength in relation to onsite fields that couple to spin operators, not Pauli matrices, so the diffusion-subdiffusion transition occurs at W_c ≈ 0.55). In the original paper the authors examined the average current ⟨j⟩ rather than the resistance, but the quantity R = 1/⟨j⟩ should behave in the same way as the typical (median) resistance, as the distribution of currents does not have a large tail. In the upper panel of Fig. 2 the points show numerical data and the lines indicate fits of the form R = aL^β evaluated on the largest three system sizes available. For weak disorder R approaches the asymptotic power-law scaling from above (note that the transport in the clean isotropic Heisenberg model is superdiffusive but not ballistic), while for stronger disorder it approaches the asymptotic scaling from below. In the lower panel of Fig. 2 we show the resistance beta function, calculated using the discrete derivative, which we have normalized by its asymptotic value, β, to better compare the diffusive (W ≲ 0.55) and subdiffusive (W ≳ 0.55) data. For weak disorder the beta function approaches its asymptotic value from below (a fit of the form (12) is shown for W = 0.25; see the discussion of finite-size effects with weak disorder in Section II B), while for stronger disorder the asymptotic behavior is approached from above (these results are noisy because the TEBD algorithm is too computationally expensive to collect data as extensively as is possible for the non-interacting system). In Section II C we will see similar behavior for the random dephasing model, both in the scaling of the resistance with L (in Fig. 3) and the resistance beta function (in Fig. 4). This suggests that the physics of the dephasing model, and therefore this work, may be relevant to fully interacting, disordered systems (which can eventually be many-body localized) with weak disorder.
B. Finite-size effects: Three lengths
As described above, the behavior of the finite-size corrections is markedly different for W = 1 and W = 4, and different fitting functions work better for R(L). This is evident in Fig. 3: the upper panel shows the scaling of the resistance with system size for a system with dephasing on every site, p = 0, where for W = 1 we see very similar behavior to that in the disorder-free case. In the absence of dephasing and disorder the system exhibits ballistic transport, and the diffusion in the clean system with dephasing is a result of scattering due to the dephasing. At stronger disorder (see Ref. 44) we see a change of behavior, with R approaching the asymptotic diffusive behavior from below, initially increasing faster than linearly with L. Note the similarity with the results for the disordered Heisenberg model with weak and strong disorder, shown in the upper panel of Fig. 2. Our task is now to introduce length and resistance scales which separate the different behaviors. These different flows with L are caused by the interplay of three length scales: a length associated with the disorder strength W (the localization length ξ), one with the density of dephasing sites p (the length ℓ_s of the largest cluster of sites without dephasing), and the third with the dephasing strength γ (which we will call ℓ).
Our first length is the localization length in 1d, which is known to be ξ = 24/W² [54,55]. This is shown in the lower panel of Figure 3, where for LW² ≫ 24 the behavior of R is indeed exponential, while for LW² ≪ 24 the behavior is power-law, R ∼ L^β with β ≈ 0.12 (in other words, to observe the exponential scaling of R with system size we need L > ξ). Notice that the law ξ ≈ 24/W² is only valid for W ≲ 2, while for large W it is replaced by ξ ∼ 1/ln W [56].
Next, we will consider the largest "insulator" size ℓ_s, which depends only on the density of dephasing sites p. From percolation theory in one dimension it is known that the typical value of the length of the largest cluster of sites with no dephasing is ℓ_s = log_{1/p}(L[1 − p]), to lowest order in 1/L [57]. For the Griffiths picture to apply, the resistance of these rare clusters must be exponentially large in ℓ_s to create the power-law tail of P(R). The resistance of the insulating cluster must therefore be in its asymptotic scaling regime (i.e. the cluster is truly localized), and so we need ℓ_s ≫ ξ. If this condition is satisfied, then the contribution to the resistance of the largest cluster grows superlinearly with system size: R ∝ e^{ℓ_s/ξ} = L^β with β = 1/(ξ ln[1/p]).
The condition that the largest cluster is localized reads ℓ_s ≳ ξ, i.e. log_{1/p}(L[1 − p]) ≳ 24/W². To see that this is not always satisfied in our numerics, consider the parameter combination p = 0.8, W = 1, L = 1024: in this case ξ = 24 and (on average) ℓ_s = 25, so the largest cluster is about one localization length.
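Both this estimate and the minimal system sizes quoted in the next paragraph follow directly from the expressions above (a sketch):

    import numpy as np

    def largest_cluster(L, p):
        """Typical length of the longest dephasing-free cluster."""
        return np.log(L * (1.0 - p)) / np.log(1.0 / p)

    def minimal_L(p, W, c=3):
        """System size needed for the largest cluster to span c
        localization lengths, xi = 24 / W**2."""
        xi = 24.0 / W**2
        return (1.0 / p) ** (c * xi) / (1.0 - p)

    print(largest_cluster(1024, 0.8))   # ~ 24: about one xi at W = 1
    print(minimal_L(0.8, 1.0))          # ~ 5e7: far out of numerical reach
    print(minimal_L(0.5, 3.0))          # ~ 5e2: accessible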
The second largest cluster is on average 21 lattice sites, so it is even smaller than a single localization length. Moreover, the logarithmic dependence on L means that if we want more than one localization length we must change L enormously. Let us say we require ℓ_s ≥ cξ with a minimum confidence of, say, c = 3 (i.e. the largest cluster is at least 3 localization lengths). We see that

$$L \;>\; \frac{1}{1-p}\left(\frac{1}{p}\right)^{c\,\xi}\,,$$

and, for p = 0.8, W = 1 as before, this condition implies L > 10⁷. For W = 3, on the contrary, for p = 0.5 we get L > 500, which is still within our reach. We therefore conclude that, even for the values of L = 10³ reached in our numerics, the data with W = 1 are deep in the pre-asymptotic regime, while for W = 3, 4 the data are representative of the asymptotic behavior (for p not too close to 1). Clearly, an awareness of ℓ_s is vital when trying to determine the asymptotic behavior of the system from numerical results. In a given realization of the system, if the longest string of sites without dephasing, s_0, is larger than the localization length, s_0 ≫ ξ, then we can show that the distribution of the resistance has a heavy power-law tail. This can be seen by noting that the length of the longest insulating cluster s_0 obeys the Gumbel distribution for extreme values, with mode ℓ_s = log_{1/p}(L[1 − p]) and standard deviation π/(√6 ln[1/p]) (this problem is equivalent to studying the longest run of consecutive heads when repeatedly tossing a biased coin) [57]. Inserting these values into the Gumbel cumulative distribution function (CDF), we find the CDF for the length s_0 [58]:

$$F(s_0) \;\approx\; \exp\!\big(-L\,[1-p]\;p^{\,s_0}\big)\,. \qquad (8)$$

Assuming that the resistance is dominated by this single long insulating cluster, and writing q = e^{1/ξ}, we have R = R_1 q^{s_0}, where R_1 is a constant. The distribution function of the resistance is therefore

$$F(R) \;\approx\; \exp\!\left(-L\,[1-p]\left(\frac{R}{R_1}\right)^{-1/\beta}\right)\,,$$

where β = log_{1/p}(q) is the resistance scaling exponent. For R ≫ R_1, this distribution decays with a tail P(R) ∝ R^{−1−1/β}, exactly as the Griffiths picture would predict for the subdiffusive scaling R ∝ L^β. If β ≤ 1 then this argument does not hold, as the total resistance is not dominated by the longest insulating cluster. However, if s_0 ≪ ξ, then we are in the small LW² region in the lower panel of Fig. 3 (say s_0 W² < 24). As discussed above, in this region the law R = q^s is not valid: it is replaced by a law of the form R ∼ s^β with, from the numerics, β ≈ 0.12 (or at least β < 1). Using this relationship between s and R, from (8) we find that the distribution of R decays like a stretched exponential, faster than a power law. We will observe exactly this in the numerics discussed in Section II C. The third length, ℓ, is associated with a string of consecutive sites with dephasing. We study this case in more detail and present numerics in Fig. 3. We know that if dephasing is applied to every site of the chain, asymptotically one finds a resistance R ∝ L. To a first approximation, if ξ is large we can consider the situation in the absence of disorder. In this case the resistance of a chain of length L with dephasing on every site has been calculated exactly in Ref. 43; this defines the asymptotic resistivity ρ = γ/4, or analogously the diffusion coefficient D ∝ 1/γ. From the relation D = vℓ, where v is the velocity of excitations of the clean system (independent of γ), we see that ℓ ∝ 1/γ. The same length dominates the finite-size effects for γ ≪ Γ (Γ = 1 in our numerics), since we can write

$$R \;=\; \rho\,L\left(1 + \frac{2\ell}{L}\right)\,.$$
For system sizes smaller than ℓ, or resistances smaller than R_0 ≡ ρℓ = γℓ/4, the resistance grows more slowly than L¹, since the system goes from ballistic to diffusive transport. This can be seen by looking at the resistance beta function,

$$\frac{\partial \ln R}{\partial \ln L} \;=\; \frac{1}{1 + 2\ell/L} \;<\; 1\,.$$

On the other hand, if the disorder is much larger than the dephasing, ξ ≪ ℓ, for systems with size L such that ξ < L ≪ ℓ the resistance will scale exponentially with L. So, writing R/R_1 = e^{L/ξ}, in this regime

$$\frac{\partial \ln R}{\partial \ln L} \;=\; \frac{L}{\xi} \;=\; \ln(R/R_1)\,.$$

For L > ℓ, however, it must reach the diffusive condition ∂ln R/∂ln L → 1. Putting everything together we see that we can distinguish two regimes, depending on whether we have ξ ≪ ℓ ≪ L or ℓ ≪ L ≪ ξ (or in terms of resistances, whether we have R_0 ≪ R_1 or R_1 ≪ R_0). Fixing γ, we have large-disorder and small-disorder finite-size scaling behaviors which are completely different, as described in Table I. We find, however, that an extremely good, phenomenological, two-parameter fit function is given by (14), where R_a and R_b are the two fitting parameters. This form fits all the data we have for any L, W, γ with good accuracy. The weak disorder case is obtained for ln(R_a/R*)R_b = −R_0 (for some R* of the size of the observed resistances), while the large disorder case comes from the opposite region of parameters. Figure 4 shows examples of the beta function from our numerical data (calculated using a discrete derivative), showing good agreement with the phenomenological form (14) in the strong disorder, strong dephasing, and intermediate regimes. There are similarities between the results of Fig. 4 and the beta function of the disordered Heisenberg model in Fig. 2, with both approaching their asymptotic values from below in the case of weak disorder, and from above in the case of strong disorder.
We notice that the definition of this beta function is the same (except for an overall sign and the identification g ∝ 1/R) as that of the typical conductance beta function which is amply described in the literature on disordered systems [59]. It can be computed in perturbation theory in the weak localization regime and in the strongly localized regime for a variety of cases. However, in the literature we have not found a discussion of this function in the setup of open system dynamics as presented above.
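Numerically, the beta function used here is just a discrete logarithmic derivative, e.g.:

    import numpy as np

    def resistance_beta(L, R):
        """Discrete estimate of d ln R / d ln L."""
        return np.gradient(np.log(R), np.log(L))

    L = np.array([64, 128, 256, 512, 1024], dtype=float)
    R = 0.1 * L**1.3                      # pure power law
    print(resistance_beta(L, R))          # constant 1.3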
We are now ready to discuss the general scenario, with both random dephasing and disorder.
C. Results: Diffusion-subdiffusion transition and critical point
Fig. 5 summarizes the behavior of the resistance for a system with W = 3. The lower panel shows an example of the power-law scaling of the median resistance R with system size for several dephasing fractions p; the statistical uncertainties are smaller than the symbols and have therefore been omitted. It is clear that for p ≲ 0.4 the lines are parallel, indicating the same scaling with L (we will show later that this corresponds to the diffusive phase), whereas for larger p the resistance grows more steeply with an exponent that increases with p. The black dashed line indicates the diffusive behavior R ∝ L. Numerical fits to these data, including finite-size corrections as described above, are indicated by the lines.
Figure 4. The resistance beta function for various disorder and dephasing values with p = 0 (discrete derivatives of the data in Fig. 3). We go from W = 4, γ = 0.2, which is large disorder and small dephasing, to W = 1, γ = 0.2, which is small disorder. Intermediate cases W = 2, γ = 0.05, 0.1 are shown. Numerical results are fitted with a phenomenological function of the form (14), indicated by the lines of the corresponding color.
Histograms of the resistance for various parameter combinations are shown in the upper panels of Fig. 5. We find that, in the diffusive phase, the resistance distributions P(R) for different system sizes can be collapsed by a rescaling (R − ⟨R⟩)/σ, where ⟨R⟩ ∝ L is the average or typical value and σ ∝ √L is the standard deviation or width. By contrast, in the subdiffusive phase the collapse can be achieved by a rescaling of the form R/⟨R⟩, indicating that both the typical value and the width of the distribution grow like L^β. Deep in the diffusive phase the distribution is well approximated by a Gaussian (see the black dotted line on the p = 0 histogram), but as the system approaches the transition to subdiffusion a tail develops at large R (see the p = 0.3 histogram). In the subdiffusive phase we see heavy power-law tails in P(R), as shown in the p = 0.7 and p = 0.8 histograms (the black dotted lines indicate the P(R) ∝ R^{−2} tail that signals the onset of subdiffusion).
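The two collapse procedures can be illustrated with a toy sketch in which synthetic Gaussian and Pareto-tailed samples stand in for the diffusive and subdiffusive resistance distributions (all distributions and exponents here are assumptions for illustration, not outputs of the microscopic model):

```python
import numpy as np

rng = np.random.default_rng(1)

def rescale(R, mode):
    """Histogram-collapse rescalings: the diffusive phase uses (R - <R>)/sigma,
    the subdiffusive phase uses R/<R> (typical value and width both ~ L^beta)."""
    return (R - R.mean()) / R.std() if mode == "diffusive" else R / R.mean()

for L in (100, 400, 1600):
    R_dif = rng.normal(0.5 * L, np.sqrt(L), size=100_000)          # Gaussian toy
    R_sub = 0.5 * L**1.4 * (1.0 + rng.pareto(2.0, size=100_000))   # R^-3 tail toy
    x = rescale(R_dif, "diffusive")
    y = rescale(R_sub, "subdiffusive")
    # both rescaled medians should be (approximately) L-independent
    print(L, round(np.median(x), 3), round(np.median(y), 3))
```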
The two phases can be described by the asymptotic behavior of the typical resistance, R ∝ L^β. In the lower panel of Fig. 6 the connected filled points show the scaling exponent β, found from a fit to the median resistance beta function (see discussion below) for W = 2, 3, 4, and from a direct fit of R as a function of L for W = 1 as described in Table I. For each W the transport is diffusive for small p (i.e., β = 1), but upon increasing p above a critical value p_c(W) the transport becomes subdiffusive (i.e., β > 1). The critical dephasing fraction p_c(W) decreases with W, as the closed system is more strongly localized, and therefore the transport is weaker for a given p.
The upper panel of Fig. 6 shows histograms of R/⟨R⟩ for a range of p values with L = 128 and W = 3, and the black dashed line shows the R^{−2} tail that signifies the onset of subdiffusion in the Griffiths effects picture. We see that in the subdiffusive phase (p ≳ 0.5) the distribution tails decay more slowly than R^{−2} and become heavier as the transport becomes slower.
If the Griffiths effects picture is correct, the exponent of the histogram decay ν should be related to the exponent of the resistance scaling as β = (ν − 1)^{−1}. The values of (ν − 1)^{−1} from the L = 128 histograms are shown by the unfilled points in the lower panel. We see that in the subdiffusive phase there is reasonable agreement for W = 3 and 4 when p is not too large, as discussed in Section II B, while for weaker disorder and large p the agreement becomes poor.
To see how β increases from 1, it is convenient to look at the discrete beta function, as done previously for the case p = 0 in Fig. 4; here, however, we fix W and change p. In Fig. 7 we show the beta function as a function of 1/ln L for several dephasing fractions p with W = 4 near the diffusion-subdiffusion transition. It is clear that the discrete derivative β decreases linearly in 1/ln(L) until one of two things happens: it either saturates to β = 1, or it reaches a thermodynamic limit β_∞ > 1. Since the slope of the lines is approximately constant (or at most it changes slowly with p: for W = 4 it is between 1.6 and 2.0), we find that β_∞ ≃ 1 + c (p − p_c). For W = 4 we find p_c = 0.54 ± 0.01, and the coefficient c ≈ 1.8. Similarly, for W = 3 we find p_c = 0.667 ± 0.007 and for W = 2 we find p_c = 0.830 ± 0.001. These results are consistent with those found from the histogram tail exponent ν (as can be seen from the comparison shown in the lower panel of Fig. 6).
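The extrapolation to the thermodynamic limit amounts to a linear fit in 1/ln L. A minimal sketch with synthetic data mimicking Fig. 7 (the values 1.25 and 1.8 are assumptions) is:

```python
import numpy as np

def beta_extrapolate(L, beta):
    """Fit beta(L) = beta_inf + a / ln(L) and return (beta_inf, a); in the
    diffusive phase the extrapolated value saturates at beta_inf = 1."""
    a, beta_inf = np.polyfit(1.0 / np.log(L), beta, 1)
    return max(beta_inf, 1.0), a

# Hypothetical beta-function data standing in for the numerics of Fig. 7:
L = np.array([64, 128, 256, 512, 1024])
print(beta_extrapolate(L, 1.25 + 1.8 / np.log(L)))   # -> (~1.25, ~1.8)
```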
The critical exponent of one, β_∞ − 1 ∝ (p − p_c)^1, which is also observed in the interacting model [1], looks typical of a mean-field scenario. We present and solve a semiclassical model of subdiffusion in Section III; there we also find a critical exponent of one, and we can determine c explicitly in terms of the microscopic parameters of the model.
The last thing one can extract from this analysis is a critical length scale L^* for p < p_c, which is the length at which the curves start bending upward and transport becomes diffusive. This crossover length can be defined, roughly, as the intersection point between the linear extrapolation (as a function of 1/ln L) and the line β = 1. One finds an exponential divergence of the form L^* = L_0 exp[b/(p_c − p)], where, for W = 4, p_c = 0.54 as before, b ≈ 1 and L_0 = O(1). To give an idea of how quickly this function grows, consider that for p = 0.3 we have L^* ≈ 1600, while for p = 0.4 we find L^* ≈ 10^6. Notice that this is in line with the more complex Griffiths scenarios given by strong disorder renormalization group (SDRG) [26,60,61], which predict an infinite dynamical exponent z (according to the definition L^* ∼ (p_c − p)^{−z}).
III. Exactly solvable semiclassical model
In order to understand the results shown in the previous section, we now examine a related phenomenological semiclassical model. Consider a chain of L units, where each unit may be either an insulator with probability p, or a conductor with probability 1 − p. Conductors combine linearly, with each conductor contributing a resistance of R_0, so a string of conductors of length s has a resistance of R_0 s. Because of phase coherence, insulators combine multiplicatively, so a string of insulators of length s has a resistance of R_1 q^s, where q > 1. The total resistance of the chain is then equal to R = Σ_s [n_c(s) R_0 s + n_i(s) R_1 q^s], where n_c(s) (n_i(s)) is the number of strings of conductors (insulators) of length s. The relationship with the microscopic dephasing model is as follows: strings of sites without dephasing are modelled by strings of insulators (a shorter localization length due to stronger disorder corresponds to a larger q), and strings of sites with dephasing are modelled by strings of conductors. This semiclassical model is equivalent to the system of random resistors introduced in Ref. 2.
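A direct Monte Carlo implementation of this resistor chain is straightforward; the following sketch samples unit configurations and sums the string contributions (the values R_0 = 1, R_1 = 1.5, q = 1.5 anticipate the numerics below, the rest is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def chain_resistance(L, p, R0=1.0, R1=1.5, q=1.5, rng=rng):
    """Total resistance of a chain of L units: each unit is an insulator with
    probability p (contributing multiplicatively, R1*q**s per string of length
    s) and a conductor otherwise (contributing additively, R0*s per string)."""
    units = rng.random(L) < p                 # True = insulator
    R, s, state = 0.0, 0, units[0]
    for u in np.append(units, ~units[-1]):    # sentinel flushes the last string
        if u == state:
            s += 1
        else:
            R += R1 * q**s if state else R0 * s
            s, state = 1, u
    return R

L = 10_000
for p in (0.3, 0.5, 0.9):                     # p_c = 1/q = 2/3 for q = 1.5
    Rs = [chain_resistance(L, p) for _ in range(200)]
    print(p, np.median(Rs))
```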
A. Analytical results
We will now analyse the statistical properties of the total resistance R. In order to determine the average resistance across configurations, we note that the quantities n_c(s) and n_i(s) are Poisson-distributed random variables, P(n(s)) = µ^{n(s)} e^{−µ}/n(s)!, where µ = ⟨n(s)⟩, with the angled brackets denoting an average over different configurations of conductors and insulators. This is subject to the constraint Σ_s s [n_c(s) + n_i(s)] = L (20). The mean values are µ_c(s) = L p^2 (1 − p)^s and µ_i(s) = L (1 − p)^2 p^s. The constraint (20) is then satisfied on average for L ≫ 1: Σ_s s [µ_c(s) + µ_i(s)] = L[(1 − p) + p] = L. The average resistance is therefore given by ⟨R⟩ = Σ_s [µ_c(s) R_0 s + µ_i(s) R_1 q^s] (23). When pq < 1 the sum (23) converges, and the average total resistance grows linearly with system size, meaning the system is diffusive: ⟨R⟩ = L[(1 − p) R_0 + (1 − p)^2 R_1 pq/(1 − pq)]. On the other hand, for pq ≥ 1, the sum (23) does not converge and the average does not exist. In this regime, the total resistance for a given configuration is dominated by the longest string of consecutive insulators, which has a typical length of ⟨s⟩ ≈ log_{1/p}(L[1 − p]) ≈ log_{1/p}(L) for large L [57]. This then results in a typical resistance of R ≈ R_1 q^{⟨s⟩} ∝ L^{log_{1/p}(q)}, with a subdiffusive scaling exponent β = log_{1/p}(q). The system therefore has a transition from a diffusive phase to a subdiffusive phase at p_c = 1/q: β = 1 for p ≤ p_c and β = log_{1/p}(q) for p > p_c. Close to the transition on the subdiffusive side it follows that β − 1 ≈ q (p − p_c)/ln(q), indicating a critical exponent of 1.

In the subdiffusive phase we expect the total resistance to be distributed according to (9), as the arguments leading to this expression are identical to those described above. Therefore we expect the subdiffusive phase to be described by the physics of Griffiths effects, with heavy-tailed resistance distributions P(R) ∝ R^{−1−1/β} (note that in this phenomenological model the resistances of the insulating clusters are always exponentially large in their size, so the finite-size effects leading to (10) do not apply). We now examine the properties of the distributions of R more carefully and show that this is true. The Laplace transform of the distribution of the insulating part of the resistance R_i = Σ_{s=1}^{L} n_i(s) q^s (i.e., its moment generating function) is equal to ⟨e^{−ρR_i}⟩ = Π_{s=1}^{L} ⟨e^{−ρ n_i(s) q^s}⟩ = exp[−Σ_{s=1}^{L} µ_i(s)(1 − e^{−ρq^s})], where the second equality follows from evaluating the average for each Poisson-distributed n_i(s), which are independent for L → ∞. The cumulant generating function for R_i is therefore given by φ(ρ) = Σ_{s=1}^{L} µ_i(s)(1 − e^{−ρq^s}). Examining the lowest few cumulants, we find that the n-th cumulant is κ_n = Σ_s µ_i(s) q^{ns} = L(1 − p)^2 Σ_s (pq^n)^s. If each sum converged as L → ∞, then the distribution would have a limit where every cumulant is proportional to L. However, for q > 1 and any p > 0 there always exists an n such that pq^n > 1. Defining τ = log_{1/p}(q), the smallest integer larger than 1/τ corresponds to the lowest cumulant that scales superlinearly with L, and all subsequent moments will scale with a different power of L (note that β = τ in the subdiffusive phase). In other words, when p > q^{−n} the n-th cumulant stops growing linearly with L, and begins to scale like L^{nτ}.
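The convergence criterion pq < 1 and the resulting linear growth of ⟨R⟩ can be checked directly from the mean string counts; a small sketch (the truncation smax and the parameter values are assumptions) is:

```python
import numpy as np

def mean_resistance(L, p, R0=1.0, R1=1.5, q=1.5, smax=5000):
    """<R> from the Poisson means mu_c(s) = L p^2 (1-p)^s (conducting strings)
    and mu_i(s) = L (1-p)^2 p^s (insulating strings), valid for L >> 1."""
    if p * q >= 1.0:
        return np.inf                          # insulating sum diverges
    s = np.arange(1, smax + 1)
    cond = L * p**2 * R0 * np.sum(s * (1 - p) ** s)
    insu = L * (1 - p) ** 2 * R1 * np.sum(np.exp(s * np.log(p * q)))
    return cond + insu

for p in (0.3, 0.5, 0.6, 2.0 / 3.0):           # p_c = 1/q = 2/3
    print(p, mean_resistance(10_000, p))       # finite below p_c, inf at p_c
```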
To analyse the distribution of R_i, we extend the sum to L → ∞, therefore neglecting terms exponentially small in L: φ(ρ) = a Σ_{s=1}^{∞} p^s (1 − e^{−ρq^s}), where a ≡ L(1 − p)^2. We evaluate the sum after taking the Mellin transform, using 1 − e^{−x} = −(1/2πi) ∫_B dz Γ(z) x^{−z} for −1 < Re z < 0, where Γ(z) is the gamma function. The inverse transform therefore gives

φ(ρ) = −(a/2πi) ∫_B dz Γ(z) ρ^{−z} p q^{−z}/(1 − p q^{−z}), (33)

where the integration path is the Bromwich contour B shown in the left panel of Fig. 8. The expansion for small ρ can be obtained by moving the contour of the z-integration to the left (see the right panel of Fig. 8), picking up the leading-order terms with each pole. The gamma function has simple poles on all the negative integers, with the pole at −m giving a contribution of (−1)^{m+1} (a/m!) ρ^m p q^m/(1 − p q^m) to the integral, thereby reproducing the cumulants found above.
Figure 8. The contours used to evaluate the integral in (33) for small ρ. Poles of the integrand are indicated in red (poles of the gamma function as circles and the other pole as a star) and the integration contour is indicated in purple. Left: the Bromwich contour used in (33). Right: deformation of the contour by pushing it to the left, picking up the leading orders in ρ as each pole is enclosed.
There is another simple pole located at z = −1/τ < 0, which gives a contribution proportional to ρ^{1/τ} Γ(−1/τ)/ln(q) (there is also a sequence of image poles at z = −1/τ + 2πin/ln(q) for n ∈ ℤ; however, their contribution is strongly suppressed by their distance from the real line for reasonable values of q ≲ 10). The leading-order terms depend on the value of τ, resulting in several regimes. If 1/τ > 2 we find φ(ρ) = ⟨R_i⟩ρ − (1/2)⟨δR_i^2⟩ρ^2 + ...; stopping at quadratic order, we recognize the cumulant generating function of a Gaussian distribution. However, if 1 < 1/τ < 2 the pole at z = −1/τ contributes before the quadratic term, giving φ(ρ) = ⟨R_i⟩ρ + a Γ(−1/τ) ρ^{1/τ}/ln(q) + ... Stopping at this order, we recognize the result as consistent with a Lévy alpha-stable distribution with average ⟨R_i⟩, and a scale that grows as δR_i^2 ∝ L^{2τ} ≫ L. The stability parameter is equal to 1/τ, resulting in a distribution with a tail decaying asymptotically as R^{−1−1/τ}. If 1/τ < 1 then the distribution has a heavy tail and the average no longer exists, so we must instead consider the typical value of R_i. Noting that in this regime β = τ, we recognize the heavy-tailed distribution from the Griffiths effects argument, eq. (9).
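Since the regime depends only on τ = log_{1/p}(q), a small helper function can classify the expected large-L distribution; this is a direct transcription of the three cases above (thresholds 1 and 2 for 1/τ):

```python
import math

def tail_regime(p, q):
    """Classify the large-L distribution of R_i via tau = log_{1/p}(q):
    Gaussian for 1/tau > 2, Levy alpha-stable (alpha = 1/tau) for
    1 < 1/tau < 2, and a heavy tail with no mean for 1/tau < 1."""
    tau = math.log(q) / math.log(1 / p)
    if 1 / tau > 2:
        return tau, "Gaussian (all relevant cumulants ~ L)"
    if 1 / tau > 1:
        return tau, "Levy stable, stability parameter 1/tau"
    return tau, "heavy tail P(R) ~ R^(-1-1/tau), mean diverges"

for p in (0.3, 0.5, 0.9):                  # q = 1.5 as in the numerics below
    print(p, tail_regime(p, 1.5))
```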
B. Numerical results
We now study the system numerically in order to confirm the accuracy of the analysis above. The results shown correspond to the parameters R_0 = 1, R_1 = 1.5, and q = 1.5. By changing p we can tune the system across the diffusion-subdiffusion transition, which should be found at p_c = 2/3. We introduce additional randomness by making R_i(s) the product of s random variables q_n, each drawn independently from a narrow uniform distribution in the range q ± 0.1, so R_i(s) = R_1 Π_{n=1}^{s} q_n (note that on average R_i(s) is still equal to R_1 q^s); this has no effect on the results other than to smooth the histograms of R. The numerical analysis of this model requires no solution of matrix equations, unlike the microscopic model in Section II, allowing us to extensively sample very large system sizes: the results below include systems of up to L ∼ 10^6 and ∼10^5 samples.
A comparison between the numerical results and the analytical predictions is shown in Fig. 9, using results on systems of up to L = 128,000. The lower panel shows a good agreement between the median resistance scaling exponent β and the theoretical prediction: β = 1 for p < p_c = 2/3 and β = log_{1/p}(q) for p > 2/3. We also see that the width of the distributions, measured by the inter-quartile range (IQR), scales as expected: for IQR ∝ L^{β′} we find β′ = 1/2 for p < q^{−2} = 4/9 and β′ = log_{1/p}(q) for p > 4/9. The numerical values of β and β′ were extracted from fits to the data of the form aL^β using L ≥ 1600 (near p_c, where finite-size effects are strongest, we used L ≥ 6400). The predicted values for β and β′ are indicated by the dashed and dotted black lines respectively (the line becomes dot-dashed for p > p_c, where β = β′). The upper panels show histograms of the resistance over a range of p values. Deep in the diffusive phase (p = 0.1 in the figure) the distribution is approximately Gaussian (indicated by the black dotted line), with the average and median growing linearly with L and the standard deviation growing like √L. Closer to the subdiffusion transition (p = 0.3 in the figure) the distribution starts to develop a tail (these parameters correspond to the point where the third cumulant has started to scale faster than linearly with L, pq^3 = (0.3)(1.5)^3 = 81/80 > 1). Close to the subdiffusion transition on the diffusive side (p = 0.5 in the figure, where the average is still defined but the variance is not, pq^2 = (0.5)(1.5)^2 = 9/8 > 1) we see that the distribution has developed the predicted weak power-law tail, which is indicated by the black dotted line. In the subdiffusive phase (p = 0.9 in the figure) the distribution has a strong power-law tail, which agrees well with the prediction (9) that R is dominated by the longest string of insulators, shown again by the black dotted line. The discrepancy at small R is due to the fact that these realizations have unusually short longest strings of insulators, which therefore have a less dominant contribution to the total resistance.
In Fig. 10 we show the discrete resistance beta function for the semiclassical model, plotted as a function of 1/ln L for comparison with the results from the dephasing model shown in Fig. 7. In this plot we have collected data for very large systems, up to L = 1,024,000, in order to examine the finite-size effects.
IV. Discussion
In this paper we have studied DC spin transport in a disordered, non-interacting spin chain with dephasing on random sites.Using this model we can study transport in a system with insulating and thermal regions at much larger system sizes than is possible for interacting models (even when employing powerful matrix-product operator methods).We have shown that the system exhibits a phase transition from diffusive to subdiffusive transport when the density of sites with dephasing decreases below a critical value.In the subdiffusive phase the distributions of resistances across different realizations of the disorder and dephasing have heavy tails, suggesting that the subdiffusion is caused by Griffiths effects.
We have also presented a related, exactly solvable semiclassical model, where the system is formed of randomly chosen sequences of insulators and conductors.We have shown that this system also undergoes a transition from diffusion to subdiffusion due to Griffiths effects when the density of conductors decreases below a critical value.This model captures the qualitative features seen in the microscopic quantum model, including the Gaussian distributions of resistances deep in the diffusive phase, which develop tails as the transition to subdiffusion is approached, and eventually become heavy-tailed in the subdiffusive phase.
The behavior of the quantum model is most similar to that of the semiclassical model (i.e., most consistent with the physics of Griffiths effects) when the disorder is strong and the subdiffusion weak. We have argued that this discrepancy is due to the finite lengths of the clusters of sites with and without dephasing: the semiclassical model is constructed using the asymptotic scaling properties of these clusters, and we have shown that for certain parameter combinations they are certainly not in their asymptotic regimes. At very large system sizes we expect that the behavior of the two models will become increasingly similar.
After looking at sufficiently many figures for the beta function ∂ ln R/∂ ln L, one observes that the flows for different p values do not intersect. This is a signature that ln R is indeed a good scaling function, and one can discuss an underlying renormalization group of sorts (most probably a kind of SDRG). Therefore, one can infer a flow diagram for such an RG. In Fig. 11 we show such a phase diagram for the random dephasing model and the semiclassical model, based on a schematic of the finite-size flow of the beta function, summarizing the results of Sections II and III. As described above, in the diffusive phase the beta function always flows to β = 1, while in the subdiffusive phase β > 1 in the thermodynamic limit. For large p and small enough L the system is effectively an insulator, with a resistance scaling like R ∝ q^L, resulting in a beta function that increases linearly with L. When L increases above a length scale that grows like (1 − p)^{−1} this growth reverses and the beta function begins to decrease towards its asymptotic value (approximately linearly in 1/ln L, as described above). Only for p = 1 will the growth of the beta function continue all the way to the thermodynamic limit, signalling localization. If an MBL region existed in this model, it would lie in the pink region, as indicated.
The physics of these models is relevant to our understanding of subdiffusion in interacting quantum systems, which is also believed to be caused by the presence of rare insulating regions. Presumably, in interacting systems, if the regions of strong disorder are not large enough to act as bottlenecks to transport, the nature of the subdiffusion may be concealed by finite-size effects similar to those described in Section II B: the subdiffusive scaling of R with L in the absence of heavy-tailed distributions, as seen for W = 1 in Fig. 6, is reminiscent of the results in Ref. 14. In this paper we have demonstrated in a fully quantum mechanical model how these heavy-tailed distributions can be hidden by the unconventionally slow finite-size flow. This offers a potential reconciliation of the results of Ref. 14 with those of Ref. 53 (i.e., the predicted distributions might have been observed if it were possible to study the interacting systems at sizes such as those investigated here for the random dephasing model).
Determining the sizes of the rare insulating regions in interacting models, and how this affects their effectiveness as bottlenecks, would be an important step in confirming or refuting the Griffiths effects hypothesis, and this could potentially be achieved using probes of local thermal properties such as those employed in Refs. 62 and 63. It could also be enlightening for the theory of the transition, helping to support and discriminate between the various renormalization group scenarios [37, 39, 41, 64-66] which have been proposed and which lead to different critical properties of the dynamical MBL transition.
The NESS current corresponding to the system described by equations (1)-(4) can be calculated exactly, and here we briefly outline the method to do so. Detailed discussions and derivations of these equations can be found in Refs. 43, 44, and 48. We use the correlation matrix in the NESS to calculate the quantities of interest, namely the expectation values of the onsite magnetization ⟨σ^z_n⟩ and the current through the bond leaving site n in the positive direction, j_n = 2⟨σ^x_n σ^y_{n+1} − σ^y_n σ^x_{n+1}⟩ (which can be derived from the continuity equation for the local magnetization). The correlation matrix is an L × L matrix from which we can calculate our quantities of interest: ⟨σ^z_n⟩ = −C_{n,n} and j_n = 4 Im(C_{n,n+1}), where the NESS current j_∞ should be independent of n.
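The linear-algebra core of this method is small enough to sketch. The code below solves a plain Lyapunov equation of the form T C + C T† = P with SciPy and reads off ⟨σ^z_n⟩ and j_n from the solution; the chain structure, the boundary dissipator G, and the source P are illustrative assumptions (the actual equation (A1) contains additional dephasing terms coupling to the diagonal of C, which this toy omits):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Minimal sketch, assuming a clean chain with boundary driving only, so that
# stationarity reduces to a standard Lyapunov equation T C + C T^dag = P.
L, Gamma = 20, 1.0
A = 1j * (np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)).astype(complex)
G = np.zeros((L, L), dtype=complex)
G[0, 0] = G[-1, -1] = Gamma                 # dissipation at both boundaries
T = A + G                                   # non-Hermitian matrix T = A + G
P = np.zeros((L, L), dtype=complex)
P[0, 0] = 2.0 * Gamma                       # magnetization injected at the left

C = solve_continuous_lyapunov(T, P)         # solves T C + C T^H = P
sz = -np.real(np.diag(C))                   # <sigma^z_n> = -C_{n,n}
j = 4.0 * np.imag(np.diag(C, 1))            # j_n = 4 Im(C_{n,n+1})
print("magnetization:", np.round(sz[:3], 4))
print("current per bond:", np.round(j[:5], 4))  # ~n-independent in a consistent model
```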
The correlation matrix is found by numerically solving the matrix equation (A1), where C̃ is the correlation matrix with the diagonal elements removed (note that for uniform dephasing this equation simplifies). When the disorder is strong and the system is large, the current j_∞ becomes small and imperfect numerical precision can result in the solution of (A1) being unphysical. This is easily diagnosed by studying the properties of the solution, such as the spatial invariance of the current, and whether the magnetization profile is real and bounded by −1 ≤ ⟨σ^z_n⟩ ≤ 1. An alternative method of solving (A1) was presented in Ref. 45 for a system without dephasing, which can be generalized to a system where γ ≠ 0. Defining the non-Hermitian matrix T = A + G, we numerically find its complex eigenvalues λ_n and left and right eigenvectors ⟨ψ_{Ln}| and |ψ_{Rn}⟩, which are
Table I. Summary of the different resistance scaling fits used in the various regimes of the random dephasing model. The disorder strengths corresponding to each regime are indicated in parentheses.
Weak disorder (W ≲ 1): R = a(1 + b/L) L^β
Strong disorder, ξ < ℓ (W > 1): ln R = a + β ln L + b/ln L
Figure 2. Scaling properties of the inverse NESS current with system size in the disordered Heisenberg spin chain. Upper panel: R as a function of L for several disorder strengths in the thermalizing phase. The points show the numerical data, and the lines indicate a fit of the form R = aL^β using the three largest system sizes. Lower panel: the (discrete) beta function of the same data, normalized by its asymptotic value at L → ∞. The solid black line indicates a fit of the form (12) to the W = 0.25 data. Data reproduced from Ref. [1], for comparison with Fig. 4.
Figure 3. Finite-size effects in the resistance scaling. Upper panel: median resistance R as a function of system size L for several disorder strengths W in a system with dephasing on every site (p = 0). Dotted lines indicate the asymptotic diffusive scaling and the dashed line shows the exact result in the absence of disorder. Lower panel: median resistance R as a function of the rescaled length LW^2 for several disorder strengths W in a system with no dephasing (p = 1), showing the scaling of the localization length at weak disorder, ξ ∝ W^{−2}.
Figure 5. Numerical results on the resistance in the random dephasing model with W = 3. Upper panels: histograms of the rescaled resistance in the diffusive phase, p = 0 (top left) and p = 0.3 (top right), and the subdiffusive phase, p = 0.7 (middle left) and p = 0.8 (middle right). The diffusive rescaling (R − ⟨R⟩)/σ is compared to a Gaussian distribution (black dotted line) for p = 0. In the subdiffusive phase the resistance is rescaled as R/⟨R⟩, and the black dotted lines indicate an R^{−2} tail. Lower panel: scaling of the median resistance R with system size L for several dephasing fractions p. The lines in the corresponding color indicate the numerical fits to the data described in the text. The black dashed line indicates diffusive scaling R ∝ L.
Figure 6. Upper panel: histograms of the rescaled resistance R/⟨R⟩ for W = 3, L = 128, and several dephasing fractions p. The black dotted line indicates a heavy R^{−2} tail. Lower panel: comparison of the resistance scaling exponent β (connected filled points) with the predicted value based on the power-law tail of the histogram, (ν − 1)^{−1} (unconnected unfilled points in the corresponding color). The discrepancy at p ≳ 0.8 and disorder W ≲ 3 is attributed to the fact that the largest insulating cluster has a size ⟨s⟩ which is smaller than the localization length ξ, so the distribution of the resistance does not show the appropriate power-law tail.
Figure 7. Behavior of the resistance beta function at large L, shown for W = 4 and several values of p near the diffusion-subdiffusion transition. The lines show fits to the data up to linear order in 1/ln L, with the y-intercept equal to the resistance scaling exponent β in the thermodynamic limit.
Figure 10. The resistance beta function for the semiclassical model, plotted against 1/ln L for a range of p values. The lines indicate fits to linear order in 1/ln L.
Figure 11. The dynamical phase diagram of the random dephasing model and the semiclassical model, showing a schematic of the finite-size flow of the beta function. For p < p_c the system is diffusive, with β → 1 in the thermodynamic limit, while for p > p_c the system is subdiffusive with β > 1. In the limit p → 1 the system is localized and β ∝ L diverges in the thermodynamic limit. Above this line, if it existed for this model, would lie the MBL phase, where one has the very fast divergence β = (1/ξ) exp(1/[1/ln L]), with ξ the many-body localization length.
normalized such that ⟨ψ_{Lm}|ψ_{Rn}⟩ = δ_{m,n}. The left and right eigenvectors are complex conjugates of each other. We can then rewrite (A1) as

T C̃ + C̃ T† = P̃, (A2)

where P̃ is a diagonal matrix with elements equal to P̃_{n,n} = P_{n,n} + 2γ_n C_{n,n}, which has the formal solution

C̃ = ∫_0^∞ dt e^{−tT} P̃ e^{−tT†}. (A3)

Rewriting this in the eigenbasis of T and evaluating the integral we find a set of equations:

C̃_{m,n} = Γ(θ_{m,n,1} − θ_{m,n,L}) + Σ_k γ_k θ_{m,n,k} C_{k,k}, (A4) | 2020-07-29T01:00:46.443Z | 2020-07-27T00:00:00.000 | {
"year": 2020,
"sha1": "2b4b83d6ae9d24f2eb90306ddbd1b63b1c4259a4",
"oa_license": null,
"oa_url": "https://arxiv.org/pdf/2007.13783",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "65f1abe5702d5977ce9a0b6d9e69c0491c3d2163",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
10946983 | pes2o/s2orc | v3-fos-license | Effective rates from thermodynamically consistent coarse-graining of models for molecular motors with probe particles
Many single molecule experiments for molecular motors comprise not only the motor but also large probe particles coupled to it. The theoretical analysis of these assays, however, often takes into account only the degrees of freedom representing the motor. We present a coarse-graining method that maps a model comprising two coupled degrees of freedom which represent motor and probe particle to such an effective one-particle model by eliminating the dynamics of the probe particle in a thermodynamically and dynamically consistent way. The coarse-grained rates obey a local detailed balance condition and reproduce the net currents. Moreover, the average entropy production as well as the thermodynamic efficiency is invariant under this coarse-graining procedure. Our analysis reveals that only by assuming unrealistically fast probe particles, the coarse-grained transition rates coincide with the transition rates of the traditionally used one-particle motor models. Additionally, we find that for multicyclic motors the stall force can depend on the probe size. We apply this coarse-graining method to specific case studies of the F1-ATPase and the kinesin motor.
I. INTRODUCTION
In many single molecule experiments beads that are attached to molecular motors are used to infer properties of the motor protein from the analysis of the trajectory of these probe particles. In particular, external forces can be exerted on the motor via such a probe particle [1,2]. In the theoretical analysis of such assays, the motor is usually modelled as a particle hopping on a discrete state space with transitions governed by a master equation [3][4][5][6][7][8]. Alternatively, the so-called ratchet models combine continuous diffusive spatial motion with stochastic switching between different potentials corresponding to different chemical states [9,10]. These approaches often comprise only one particle explicitly, representing the motor. The contribution of external forces, which in the experiments act on the motor only via the probe, is then included in the transition rates [5,6,[11][12][13][14][15][16][17]] (or the Langevin equation for the spatial coordinate [18,19]) of the motor particle directly. However, theoretical models that are used to reproduce the experimental observations should comprise at least two (coupled) degrees of freedom, one for the motor and one for the probe particle. Such models, consisting of one degree of freedom hopping on a discrete state space representing the motor coupled to a continuously moving degree of freedom representing the probe, are discussed in [20][21][22][23][24][25][26][27]. While multi-particle models are more precise and better represent the actual experimental setup, one-particle models are widely used toy models often applied to illustrate basic ideas.
Simplifying the description of systems consisting of many degrees of freedom, with a concomitantly large state space, while still maintaining important properties is commonly known as coarse-graining. In the context of stochastic thermodynamics [28], various coarse-graining methods have been applied, e.g., lumping together states of a discrete state space among which transitions are fast [29][30][31][32], averaging over states for discrete [33] or continuous processes [32,34], eliminating single states from a network description [35][36][37], or eliminating slow (invisible) degrees of freedom [38][39][40]. It was found that, in general, coarse-graining has implications for the entropy production and, in particular, the dissipation [41]. In the context of biological systems and especially molecular motors, coarse-graining procedures mostly focus on eliminating selected states of the motor [37,42] or on reducing continuous (ratchet) models to discrete-state models [43][44][45][46][47].
In the present paper, we introduce a coarse-graining procedure that allows to reduce molecular motor-bead models to effective one-particle models with discrete motor states, with the external force acting directly on the effective motor particle. We eliminate the explicit dynamics of the probe particle completely, while still maintaining the correct local detailed balance condition for the effective motor transition rates and preserving the average currents of the system. As a main result, we find that the coarse-grained rates show a more complex force dependence than the usually assumed exponential behaviour and a more complex concentration dependence than mass action law kinetics.
The paper is organized as follows. In section II, we introduce our coarse-graining method on the basis of a simple motor-bead model with only one motor state and apply it to a model for the F1-ATPase [26]. In section III, we generalize the procedure to motor models with several internal states and apply it to both a refined model for the F1-ATPase and to a kinesin model. A possible experimental implementation of our method is presented in section IV. We show that entropy production and efficiency remain invariant under this coarse-graining procedure in section V, discuss implications on the stall conditions in section VI and conclude in section VII.
A. Explicit motor-bead dynamics
The general model for motor proteins with only one chemical state consists of one degree of freedom representing the motor which jumps between discrete states n(t) separated by a distance d. The motor is coupled with the second degree of freedom representing the probe particle via some kind of elastic linker, see Fig. 1 [26]. The motion of the probe particle with continuous coordinate x(t) is described by an overdamped Langevin equation with friction coefficient γ and constant external force f_ex,

ẋ = [V′(n − x) − f_ex]/γ + ζ(t), (1)

including the potential energy of the linker V(n − x) and thermal noise ζ(t) with correlations ⟨ζ(t_2)ζ(t_1)⟩ = 2δ(t_2 − t_1)/γ. Throughout the paper, we set k_B T = 1. This choice implies that the product of force f_ex and distance d appearing in the figures below is measured in units of k_B T. The (instantaneous) distance between motor and probe is denoted by y. The system is characterized by the pair of variables (n, x) and is "bipartite" in these variables since transitions do not happen in both variables at the same time. The transition rates of the motor fulfill a local detailed balance (LDB) condition

w^+(y)/w^−(y + d) = exp[Δµ − V(y + d) + V(y)]. (2)

The free energy change of the solvent Δµ ≡ µ_T − µ_D − µ_P, with µ_i = µ_i^{eq} + ln(c_i/c_i^{eq}) and nucleotide concentrations c_i, is associated with ATP turnover. The probability density p(y) for the distance y obeys a Fokker-Planck-type equation

∂_t p(y) = ∂_y[(∂_y V(y) − f_ex) p(y) + ∂_y p(y)]/γ + w^+(y − d) p(y − d) + w^−(y + d) p(y + d) − [w^+(y) + w^−(y)] p(y). (3)

For constant nucleotide concentrations, the system reaches a non-equilibrium stationary state (NESS) with average velocity

v = d ∫ dy [w^+(y) − w^−(y)] p^s(y) (4)

and stationary distribution p^s(y).
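As a rough illustration of these coupled dynamics, the following Python sketch integrates the bead with an Euler-Maruyama step and draws motor jumps from rates of the Kramers form introduced below (eqs. (10,11)); all numerical parameter values are illustrative assumptions, not fitted ones:

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal sketch of the motor-bead dynamics (units with k_B T = 1).
d, kappa, gamma, fex, dt = 1.0, 10.0, 0.5, 2.0, 1e-4
w0, mu_p, mu_m = 1.0, 3.0, -6.0        # attempt frequency, mu_+ and mu_-
theta_p = theta_m = 0.5                # load-sharing factors, theta_+ + theta_- = 1

def V(y):
    return 0.5 * kappa * y**2          # harmonic linker potential

n, x = 0.0, 0.0
for _ in range(200_000):               # dt must keep w*dt << 1 throughout
    y = n - x
    wp = w0 * np.exp(mu_p - (V(y + theta_p * d) - V(y)))   # forward jump rate
    wm = w0 * np.exp(mu_m - (V(y - theta_m * d) - V(y)))   # backward jump rate
    r = rng.random()
    if r < wp * dt:
        n += d
    elif r < (wp + wm) * dt:
        n -= d
    # bead: the linker force V'(n - x) = kappa*y pulls it toward the motor,
    # while the external force fex acts as a load
    x += dt * (kappa * y - fex) / gamma + np.sqrt(2 * dt / gamma) * rng.normal()

print(f"final motor position n = {n:.1f}, bead position x = {x:.2f}")
```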
B. Coarse-graining procedure
Figure 1. The motor is attached via an elastic linker to the probe particle (large red sphere). An external force f_ex is applied to the bead. The transition rates of the motor are denoted by w^+(n, x) and w^−(n, x). The load sharing factors θ^+ and θ^− indicate the position of an underlying unresolved potential barrier relative to the minimum of the free energy landscape of the motor.

In the coarse-grained description of the model we want to map the motor-bead system to one effective
particle hopping between states separated by d. We thus have to eliminate the x-coordinate from the (n, x) description, resulting in a system characterized only by n.
For the coarse-grained model, we impose the following conditions. The coarse-grained transition rates Ω^± which advance the effective particle by d should obey a LDB condition,

Ω^+/Ω^− = exp[Δµ − f_ex d], (5)

as the force is now assumed to act directly on the effective motor particle. Furthermore, we require that the coarse-grained particle moves with the same average velocity in the steady state as the motor and the probe in the original model, i.e.,

d(Ω^+ − Ω^−) = v. (6)

Solving the linear system of equations (5, 6) yields the coarse-grained rates

Ω^+ = (v/d) e^{Δµ − f_ex d}/(e^{Δµ − f_ex d} − 1), (7)
Ω^− = (v/d) 1/(e^{Δµ − f_ex d} − 1). (8)

The coarse-grained rates can be interpreted as effective transition rates that correspond to a transition process after which both particles, motor and probe, have advanced a distance ±d. In principle, there are (for any y) many possible displacement processes to advance both particles by d, including ones with l forward and l − 1 backward motor jumps. The coarse-grained rate corresponds to the rate with which one such effective displacement will happen.
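Given a measured or computed velocity v, eqs. (7,8) are elementary to evaluate; a minimal sketch (with illustrative numbers) is:

```python
import numpy as np

def coarse_grained_rates(v, dmu, fex, d=1.0):
    """Omega_+/- solving the two conditions: the LDB condition (5),
    Omega_+/Omega_- = exp(dmu - fex*d), and the velocity-matching
    condition (6), d*(Omega_+ - Omega_-) = v."""
    A = np.exp(dmu - fex * d)
    Om_minus = v / (d * (A - 1.0))
    return A * Om_minus, Om_minus

# v would be measured from (or computed for) the full motor-bead model;
# the numbers here are illustrative only.
Op, Om = coarse_grained_rates(v=0.8, dmu=10.0, fex=2.0)
print(Op, Om, "LDB ratio:", Op / Om, "=", np.exp(10.0 - 2.0))
```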
In general, the coarse-grained rates depend (via v) on all model parameters, including the friction coefficient of the probe particle and the specific potential of the linker. If one had chosen coarse-grained rates by just averaging over the positions of the probe particle, i.e., by

w̄^± ≡ ∫ dy w^±(y) p^s(y), (9)

one would have obtained rates that yield the correct average velocity but do not fulfill the LDB condition, as discussed in section II E below. For a more explicit analysis, we must specify the forward and backward rates of the motor. We choose [26]

w^+(y) = w_0 e^{µ_+} exp{−[V(y + θ^+ d) − V(y)]}, (10)
w^−(y) = w_0 e^{µ_−} exp{−[V(y − θ^− d) − V(y)]}, (11)

where θ^+ and θ^− are the load-sharing factors with θ^+ + θ^− = 1 and µ_+ = µ_T, µ_− = µ_D + µ_P. We assume an exponential dependence of the transition rates on the potential difference of the linker according to Kramers' theory. This exponential dependence on the potential difference is similar to one-particle models where the rates of the motor typically depend exponentially on the external force with a corresponding load-sharing factor [3,5].
C. Time-scale separation
In this section, we will investigate under which conditions the coarse-grained rates (7,8) can be expressed using a single exponential dependence on the external force as typically assumed for mechanical transitions within one-particle models [3,5].
Inserting eqs. (10,11) in eq. (3) in the NESS shows that the contribution due to motor jumps is weighted with a (dimensionless) prefactor

ε ≡ w_0 exp[µ_T^{eq}] γ d^2. (12)

Here, w_0 exp[µ_T^{eq}] determines the timescale of the transitions of the motor while γd^2 determines the timescale of the dynamics of the probe particle. The latter is mainly governed by the size of the bead and the step size of the motor whereas w_0 exp[µ_T^{eq}] is determined by the attempt frequency and also by the absolute nucleotide concentrations.
If the dynamics of the bead is much faster than the transitions of the motor, time-scale separation holds with ε → 0 [31,48]. In this limit of fast bead relaxation, denoted throughout by a caret, the stationary solution of eq. (3) in the NESS becomes

p̂^s(y) = exp[−V(y) + f_ex y] / ∫ dy′ exp[−V(y′) + f_ex y′]. (13)

This expression inserted into eqs. (7,8) yields

Ω̂^+ = w_0 e^{µ_+} e^{−θ^+ d f_ex}, (14)
Ω̂^− = w_0 e^{µ_−} e^{+θ^− d f_ex}, (15)

independent of any specific linker potential V(y). Since this force dependence is purely exponential with the correct load sharing factor, these expressions represent exactly the rates typically used in one-particle models. We notice that within this approximation Ω̂^+ = ⟨w^+(y)⟩ and Ω̂^− = ⟨w^−(y)⟩ hold true, which is in agreement with other coarse-graining procedures in the time-scale separation limit, e.g., [31][32][33].
Note that only transition rates of the motor whose dependence on the linker potential is chosen accordingly, in the Kramers form of eqs. (10,11), lead generically to consistent coarse-grained and averaged rates when using the fast-bead limit of p^s(y).
D. Example: F1-ATPase
In general, a strong time-scale separation between motor and probe is not necessarily realistic. In this case, eq. (3) must be solved numerically. We will use the model introduced in [26], see Fig. 1, with a harmonic potential V(y) = κy^2/2 as a simple example to illustrate our coarse-graining procedure.
In Fig. 2, the results for Ω^+ and Ω^− are shown for various values of the friction coefficient γ. With decreasing γ, the rates approach their corresponding fast-bead limits, Ω̂^+ and Ω̂^−. These values are upper bounds because decreasing γ implies smaller probe particles which exert less drag on the motor. For finite γ, the coarse-grained rates do not show a single exponential dependence on f_ex over the whole range of external forces. Such a dependence, however, is usually assumed to hold within one-particle models. Moreover, the coarse-grained rates depend on γ, which is a parameter not incorporated explicitly in many one-particle models.
The experimentally accessible values of γ cover a wide range of the values chosen in Fig. 2. A dimer of polystyrene beads (≃ 280 nm) as used in [49][50][51][52] corresponds to γ = 0.5 s/d^2 (red (dark gray) line with squares) while a 40 nm gold particle [52][53][54] corresponds to γ = 5 · 10^{−4} s/d^2 (yellow (light gray) line with triangles). Especially for large external forces, the coarse-grained rates deviate strongly from their asymptotic values even for a probe as small as the gold particle.
The average velocity as shown in Fig. 2 also strongly depends on the friction coefficient of the probe particle, especially for large external forces. In this regime, for large γ, the velocity is dominated by the friction experienced by the probe while for small γ the probe relaxes almost immediately and the velocity is dominated by the timescale of the motor jumps.
Another option to reach the fast-bead limit is to use very small nucleotide concentrations. In Fig. 3, we show the coarse-grained rates for various ATP and ADP concentrations. With decreasing nucleotide concentration (at fixed Δµ), the rates approach the asymptotic Ω̂^+ and Ω̂^−. However, it is very hard to do experiments at concentrations smaller than ≃ 10^{−7} M as jumps of the motor are then very rare. In Fig. 2 and in Fig. 3 the dependence of the coarse-grained rates on the external force exhibits two different regimes. Up to values of the external force of roughly 15/d, the coarse-grained rates can be well approximated by a single exponential dependence on f_ex with the same slope as in the fast-bead limit, dθ^+ or dθ^−, respectively. However, for large γ and large c_T, even in this regime, the absolute values of the coarse-grained rates deviate up to two orders of magnitude from their fast-bead approximation. For such parameters, assuming a mono-exponential dependence on f_ex with the above slope would not be appropriate either.
For large external forces, all coarse-grained rates deviate significantly from their fast-bead limits. We find again a mono-exponential decay for Ω^+ but now with slope −d, whereas Ω^− grows only linearly with increasing f_ex. This so far unaccounted-for behaviour can be understood by considering the limit f_ex → ∞ as discussed in detail in the Appendix. The crossover from one regime to the other occurs beyond the stall force f_ex = Δµ/d. In summary, we find that for the F1-ATPase under realistic experimental conditions, the rates in a coarse-grained description comprising only one effective particle that satisfy the LDB condition eq. (5) and reproduce the correct average velocity v cannot be written in the form of a single exponential dependence on the external force.
E. Comparison of coarse-grained with averaged rates
Instead of defining the coarse-grained rates according to eqs. (7,8), one might be tempted to use the averaged rates (9) as a definition for the coarse-grained rates. In Fig. 4, we show the averaged rates of our F1-ATPase model as well as their ratio corresponding to the LDB condition. We find that both w̄^+ and w̄^− (for the latter less visible in the plot) exhibit a non-monotonic dependence on the external force. For external forces slightly larger than the stall force, w̄^+ increases with increasing f_ex due to the fact that in this region the system moves backward with motor jumps following the probe, which leads to a peak at small y in p^s(y). On the other hand, w̄^− exhibits a minimum around stall conditions for large γ since in this region p^s(y) misses a peak at large y ≃ 1.
A severe issue appears regarding the LDB condition. The corresponding ratio of the averaged rates is also plotted in Fig. 4 where it can be clearly seen that the LDB condition is not fulfilled (except in the fast-bead limit).
F. Without external force
Even though we have motivated this paper by emphasizing that external forces are typically applied to probe particles, it should be obvious that our approach holds true for molecular motors transporting cargo subject to Stokes friction in the absence of external forces.
[Fig. 4 caption fragment: with decreasing $\gamma$, the rates approach $\hat\Omega_+$, $\hat\Omega_-$ (solid black lines). Bottom: ratio of $+$ and $-$ rates. In contrast to $\Omega_+$, $\Omega_-$ (large red dots), the averaged motor rates do not fulfill the LDB condition (solid black line). The parameters are the same as in Fig. 2.]
For one-particle models, the friction coefficient of the probe cannot be taken into account explicitly. One rather has to incorporate the drag effect of the bead into the motor rates [46]. If one wants to analyze experimental data obtained from probe particles of different sizes, one then has to use different values of the motor rates for each data set.
For the rather dilute solutions used in experiments [49,51,55], one generally assumes that the motor dynamics is subject to mass action law kinetics, i.e., that the transition rates depend linearly on the corresponding concentration of nucleotides. For one-particle models, this linear dependence obviously holds for all concentrations and beads of all sizes. When keeping $c_D$ and $c_P$ fixed, the average velocity of a one-state motor will then show a purely linear dependence on $c_T$; a minimal illustration follows below.
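As a minimal illustration (the rate constants $k_\pm$ below are generic placeholders, not values from this work): if the forward jump of a one-state motor binds one ATP while the backward jump releases it, mass action kinetics gives

$w_+ = k_+\, c_T , \qquad w_- = k_-\, c_D\, c_P ,$

so that, at fixed $c_D$ and $c_P$, the velocity $v = d\,(w_+ - w_-)$ is strictly linear in $c_T$.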
The experimental analysis of the average velocity of the F$_1$-ATPase as a function of $c_T$ (for fixed $c_D$, $c_P$) reveals a saturation of the velocity for large ATP concentrations, which sets in earlier for large beads [53]. While such a saturation is usually attributed to the hydrolysis step, we find that a sub-linear dependence of the velocity can also be caused by the drag of the probe particle.
In Fig. 5, the coarse-grained rates as well as the velocity are shown as a function of the ATP concentration. With decreasing $\gamma$, the coarse-grained rates approach the fast-bead limit and the mass action law kinetics.
[Fig. 5 caption fragment: since $c_D$ and $c_P$ are fixed, $\Delta\mu$ also increases with $c_T$. The rates and the velocity approach the fast-bead approximation (solid black lines). Parameters:]
The velocity is then linear in $c_T$, as in a one-particle model. For large $\gamma$, eliminating the cargo by coarse-graining yields coarse-grained rates that are not linear in the concentrations, although the motor rates are still subject to mass action law kinetics. Moreover, the velocity then exhibits a sub-linear dependence reminiscent of the typical saturation effect for large $c_T$.
G. Comparison of full and coarse-grained trajectories
Trajectories of motor and probe generated by a simulation of the complete model of the F$_1$-ATPase are shown in Fig. 6. Additionally, Fig. 6 contains a trajectory obtained from simulating the corresponding coarse-grained model. The average velocity of both models is the same (by definition, see eq. (6)), whereas the coarse-grained model produces trajectories that are "more random". This behavior occurs since the coarse-grained rates are constant (for fixed parameters) and produce a simple biased random walk (a minimal simulation sketch is given after this paragraph). The motor transition rates of the complete model, however, depend on the actual position of the probe and are therefore implicitly time-dependent. Since fast successive motor jumps are suppressed, the trajectory of the complete model is less random [21,56]. The influence of parameters like the probe size or the ATP concentration on the dynamics is visible in the bottom panels of Fig. 6. While the average velocity is almost the same, the trajectories of the complete model differ significantly. Using a small probe with a small friction coefficient, the probe relaxes to the potential minimum of the linker before the next motor jump occurs, whereas a large probe cannot relax [25]. Large ATP concentrations induce many forward and successive backward motor jumps that are absent at lower ATP concentrations. These details are not captured in the coarse-grained trajectories.
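The following Python sketch illustrates what simulating the coarse-grained description amounts to: a Gillespie simulation of a biased random walk with constant rates. The rate values are illustrative, not taken from the paper.

import numpy as np

def gillespie_biased_walk(omega_plus, omega_minus, t_max, d=1.0, seed=0):
    # Coarse-grained motor: biased random walk with constant jump rates.
    rng = np.random.default_rng(seed)
    t, x = 0.0, 0.0
    times, positions = [t], [x]
    total = omega_plus + omega_minus
    while t < t_max:
        t += rng.exponential(1.0 / total)  # exponential waiting time
        x += d if rng.random() < omega_plus / total else -d  # jump direction
        times.append(t)
        positions.append(x)
    return np.array(times), np.array(positions)

times, pos = gillespie_biased_walk(omega_plus=10.0, omega_minus=2.0, t_max=100.0, seed=1)
print("empirical velocity:", pos[-1] / times[-1])  # approaches d*(10 - 2) = 8

The full model would instead update the motor with y-dependent rates and integrate the probe's Langevin dynamics between jumps, which suppresses fast successive jumps and makes the trajectory less random.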
III. MOTOR MODELS WITH SEVERAL INTERNAL STATES
A. Explicit motor-bead dynamics and coarse-graining procedure

In this section, we generalize the model by taking into account several different internal states of the motor, labelled by $i$. The motor states represent the nodes, and the transitions the edges, of a network. Transitions between the motor states $i$ and $j$ change the free energy by

$\Delta F^{\alpha}_{ij} = F_j - F_i - \Delta\mu^{\alpha}_{ij}$,

where $F_j - F_i$ is the free energy difference of the internal states of the motor and $\Delta\mu^{\alpha}_{ij} = -\Delta\mu^{\alpha}_{ji}$ is the free energy change of the solvent. Depending on the transition, $\Delta\mu^{\alpha}_{ij}$ is given by $\mu_T$, $\mu_D$, $\mu_P$, any combination thereof, or 0. Transitions may also advance the motor a distance $d^{\alpha}_{ij} = -d^{\alpha}_{ji}$. Since we allow for several transitions connecting two states, we assign an additional index $\alpha$ to the transitions, indicating which link between $i$ and $j$ is used. An example of the network of a full system comprising motor and probe particle is shown in Fig. 7, where the state space of the probe is discretized for better presentation.
The Fokker-Planck-type equation for such models, eq. (18), involves transition rates of the motor that obey an LDB condition, eq. (19). The coarse-grained version of such a model should take into account the different states of the motor as well as the several possible $\alpha$-transitions between $i$ and $j$. Thus, the motor network (including all motor cycles) should be conserved under coarse-graining. To account for the several internal states, we require that the coarse-grained rates obey an LDB condition (20) and that the operational current [57] from motor state $i$ to motor state $j$ via edge $\alpha$ be conserved. The operational current is the sum over all $y$-dependent net transition currents that contribute to the transition $i \to j$; conserving the operational currents corresponds to the condition of reproducing the correct mean velocity in the one-state model. The conservation condition reads

$P_i\,\Omega^{\alpha}_{ij} - P_j\,\Omega^{\alpha}_{ji} = j^{\alpha}_{ij}$, (21)

with the operational current

$j^{\alpha}_{ij} = \int \mathrm{d}y\,\bigl[w^{\alpha}_{ij}(y)\,p_i(y) - w^{\alpha}_{ji}(y)\,p_j(y)\bigr]$ (22)

and the marginal distribution

$P_i = \int \mathrm{d}y\; p_i(y)$. (23)

These equations can be solved for $\Omega^{\alpha}_{ij}$ and $\Omega^{\alpha}_{ji}$ using simple algebra, which yields the rates (24,25). In principle, it is sufficient to use only eq. (24), since $\Omega^{\alpha}_{ji}$ takes exactly this form with $j^{\alpha}_{ij} = -j^{\alpha}_{ji}$, $\Delta F^{\alpha}_{ij} = -\Delta F^{\alpha}_{ji}$ and $d^{\alpha}_{ij} = -d^{\alpha}_{ji}$. This equivalent procedure would be more symmetric and treat all transition rates on an equal footing, but the LDB condition is then less obvious. Note that without the LDB condition (20), the stated conditions on $P_i$ and $j^{\alpha}_{ij}$ would also be compatible with coarse-grained rates like the ones in, e.g., [31,33].
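Since the display equations (20), (24) and (25) are not reproduced above, the following is a hedged reconstruction from the two stated conditions. The sign convention of the force term in the LDB condition is our assumption, chosen so that an opposing force suppresses forward rates, consistent with the one-state results above:

$\frac{\Omega^{\alpha}_{ij}}{\Omega^{\alpha}_{ji}} = e^{-\Delta F^{\alpha}_{ij} - f_{\mathrm{ex}} d^{\alpha}_{ij}}, \qquad P_i\,\Omega^{\alpha}_{ij} - P_j\,\Omega^{\alpha}_{ji} = j^{\alpha}_{ij},$

which are solved by

$\Omega^{\alpha}_{ij} = \frac{j^{\alpha}_{ij}}{P_i - P_j\, e^{\Delta F^{\alpha}_{ij} + f_{\mathrm{ex}} d^{\alpha}_{ij}}}, \qquad \Omega^{\alpha}_{ji} = \frac{-j^{\alpha}_{ij}}{P_j - P_i\, e^{-\Delta F^{\alpha}_{ij} - f_{\mathrm{ex}} d^{\alpha}_{ij}}}.$

One can verify directly that these forms satisfy both conditions, and that for a one-state model ($F_j = F_i$, $P_i = 1$) the ratio reduces to $\exp[\Delta\mu - f_{\mathrm{ex}} d]$, in line with the stall force $f_{\mathrm{ex}} = \Delta\mu/d$ quoted in the text.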
Transitions whose rates are independent of the linker elongation $y$ and hence have $d^{\alpha}_{ij} = 0$ retrieve their original rate constants through this coarse-graining procedure. For such a transition, $j^{\alpha}_{ij}$ is given by

$j^{\alpha}_{ij} = w^{\alpha}_{ij}\, P_i - w^{\alpha}_{ji}\, P_j$,

with rates fulfilling the LDB condition $w^{\alpha}_{ij}/w^{\alpha}_{ji} = \exp[-\Delta F^{\alpha}_{ij}]$. Inserting $j^{\alpha}_{ij}$ into eqs. (24,25) and using the LDB condition and $d^{\alpha}_{ij} = 0$ immediately yields $\Omega^{\alpha}_{ij} = w^{\alpha}_{ij}$ and $\Omega^{\alpha}_{ji} = w^{\alpha}_{ji}$. Transitions with rates depending on $y$ but with $d^{\alpha}_{ij} = 0$ have coarse-grained rates that depend on $f_{\mathrm{ex}}$ only implicitly via $j^{\alpha}_{ij}$ and $P_{i,j}$, as will be discussed below in section III D for the chemical transition rates of kinesin.
The rates determined from the LDB condition eq. (20), the populations $P_i$ and the operational currents are algebraically consistent with the fact that a full set of rates $\Omega^{\alpha}_{ij}$ uniquely determines the populations $P_i$ on the coarse-grained network. Consistency can be seen by integrating the Fokker-Planck equation (18) over $y$, yielding the coarse-grained master equation

$\partial_t P_i = \sum_{j,\alpha}\bigl(P_j\,\Omega^{\alpha}_{ji} - P_i\,\Omega^{\alpha}_{ij}\bigr)$,

whose stationary solution in the NESS can be expressed as a function of the rates $\Omega^{\alpha}_{ij}$ [57,58]. Thus, the expression of any current observable in terms of the operational currents is consistent with its expression in terms of cycle currents on the coarse-grained network.
B. Time-scale separation
Similar to the one-state model, we explore the consequences of a putative time-scale separation between the dynamics of motor and probe for each motor transition. In the limit $\gamma \to 0$ (formally equivalent to $\varepsilon \to 0$; here one would have several $\varepsilon_{ij}$ within the Fokker-Planck equation, all of which go to 0), the solution of eq. (18) in the NESS, analogously to [29,32], factorizes into the marginal distribution $P_i$ and the equilibrium distribution of the probe in the tilted linker potential. The marginal distribution can then be obtained from eq. (18) with this fast-bead solution. For Kramers-type transition rates like eqs. (10,11), with $\mu^{\alpha,+}_{ij} - \mu^{\alpha,-}_{ij} = \Delta\mu^{\alpha}_{ij}$ and $k^{\alpha}_{ij}/k^{\alpha}_{ji} = \exp[-F_j + F_i]$, the $y$-averaged rates $\langle w^{\alpha}_{ij}\rangle_y$ and $\langle w^{\alpha}_{ji}\rangle_y$ take a closed form. The change of chemical free energy $\Delta\mu^{\alpha}_{ij}$ is split into $\mu^{\alpha,+}_{ij}$ and $\mu^{\alpha,-}_{ij}$, indicating that both directions of the transition can involve binding and release of the chemical species that account for $\Delta\mu^{\alpha}_{ij}$. The free energy change arising from changing the motor state, $F_j - F_i$, is incorporated in the attempt frequencies $k^{\alpha}_{ij}$ of the corresponding states. Inserting the operational current in the form of eq. (30) with these averaged rates, simple calculus shows that the coarse-grained rates (24) and (25) reduce to the fast-bead rates $\hat\Omega^{\alpha}_{ij}$, $\hat\Omega^{\alpha}_{ji}$, which is again consistent with transition rates of one-particle models that assume a purely exponential dependence on the external force.
C. Example: F1-ATPase with intermediate step
With external force
The 120° step of the F$_1$-ATPase is known to consist of a 90° and a 30° substep [53]. Such a stepping behavior can be modelled with a unicyclic motor with two internal states. A schematic representation of a system comprising a probe particle and a motor with two internal states is shown in Fig. 8. The two different pathways for transitions between states 1 and 2 correspond to the 90° and 30° substeps of the F$_1$-ATPase, respectively.
As in section II D for the one-state model, we examine the coarse-grained rates for the 90° and 30° steps and the velocity, which are shown in Fig. 9. Similar to the 120° scenario, the rates approach their fast-bead limit with decreasing $\gamma$.
As in the one-step model, the dependence of the coarse-grained rates on the external force shows two regimes. For small external forces, the rates can in most cases be well approximated by a single exponential dependence on $f_{\mathrm{ex}}$ with slope $\pm d^{\alpha}_{ij}\theta^{\alpha,\pm}_{ij}$. For large probe particles, however, the rates neither match the absolute value nor show a mono-exponential dependence on $f_{\mathrm{ex}}$ with the above slope. For large forces, the forward rates decay faster, whereas the backward rates grow more slowly than in the fast-bead limit.
Concerning the average velocity, strong deviations from the fast-bead limit occur only for the largest friction coefficients. Using small beads, the force-velocity relation resulting from our coarse-graining procedure coincides well with the one obtained from a one-particle model, due to the fact that the velocity involves only differences of the rates multiplied with the marginal distribution, rather than the rates themselves. For large external forces and small $\gamma$, the velocity is significantly smaller than in the one-state model, since the motor has to take two successive steps to cover the full $d$. The force-velocity relations for the two-state as well as for the one-state model reproduce very well the experimentally determined force-velocity relation from [51] for the corresponding value of the friction coefficient $\gamma$.
The limiting cases $f_{\mathrm{ex}} \to \pm\infty$ are more involved here than in the one-state model, since one has to account for the dependence of the $P_i$'s on the external force. However, as long as the $P_j$'s do not decay faster than $\exp[-f_{\mathrm{ex}} d^{\alpha}_{ij}]$, it is still possible to approximate the rates (24,25) by their large-force asymptotics, in which the exponentially growing term dominates the denominator, since $P_i$ is bounded by 1.
For the F$_1$-ATPase model, the numerical analysis in the $f_{\mathrm{ex}} \to \infty$ limit yields a linear dependence of $\langle y\rangle$ and $j^{\alpha}_{ij}$ on $f_{\mathrm{ex}}$. We also find that $P_2$ decays exponentially while $P_1$ approaches 1. Hence, $\Omega^{90}_{12}$ and $\Omega^{30}_{21}$ decay exponentially with slopes $-d^{90}_{12} = -0.75d$ and $-d^{30}_{21} = -0.25d$, respectively, as in the one-state model, but $\Omega^{90}_{21}$ now grows exponentially with a smaller exponent, while $\Omega^{30}_{12}$ still grows linearly.
Without external force
Just as for the one-state model, we examine the dependence of the coarse-grained rates on the ATP concentration in the absence of external forces. Fig. 10 shows the coarse-grained rates for the 90° and the 30° substeps as well as the average velocity. With decreasing $\gamma$, the coarse-grained rates approach the mass action law kinetics of the corresponding one-particle rates. In contrast to the one-state model, even in this limit the velocity shows saturation. This is due to the fact that the timescale of the hydrolysis reaction is independent of the ATP concentration and represents the limiting effect for the velocity. The dependence of the average velocity on the ATP concentration is reminiscent of Michaelis-Menten kinetics and coincides well with experimental results for several different probe particles, as shown in [53].
For large beads, the coarse-graining process yields rates that are no longer linear in the corresponding concentrations. In this regime, the sub-linear dependence of the velocity on the ATP concentration appears already for smaller ATP concentrations. Comparing the velocity curves of the two-state model with the one-state model, we find that for large beads the velocity curves almost coincide, since in this regime the limiting effect for the velocity is the friction experienced by the bead. Thus, using large probe particles, it is not possible to infer the underlying motor dynamics from the characteristics of the velocity as a function of the ATP concentration [25]. Fig. 11 shows the coarse-grained forward rates for three different nucleotide concentrations and for various $\gamma$ chosen as in the experiment [52]. We find that the 90° rate depends only weakly on $\gamma$ for small ATP concentrations, which is reminiscent of the experimental observation that the ATP binding rate of the motor depends only weakly on the size of the probe [52]. However, for large ATP concentrations, which were not investigated in the experiment, the 90° rate shows a strong dependence on $\gamma$. This is due to the fact that for small ATP concentrations the relaxation times of all probe particles are comparable to, or even shorter than, the typical time between motor jumps. The results for the 30° rate are consistent with the experimental results for the hydrolysis rate [52]: increasing $c_P$ decreases the P$_i$ release rate in the experiment, just as it decreases the 30° rate here.
D. Example: Kinesin
As a final, more complex example, we apply our coarse-graining method to a model with a multi-state motor. We choose the well-studied six-state model representing a kinesin motor introduced in [5], see Fig. 12. Implementing the probe particle and an elastic linker $V(y)$, we adopt the transition rates of the motor from [5] and replace the dependence on the external force by the dependence on the elongation of the linker, eqs. (41,42). The first two rates belong to the mechanical transition; the lower two rates represent the chemical transitions, which depend on the instantaneous force exerted by the linker with a chemical load-sharing factor $\chi_{ij}$, see [5]. The change of chemical free energy $\mu^{\pm}_{ij} = \mu_T, \mu_D, \mu_P$ depends on which transition involves binding of the corresponding nucleotide. We again choose $V(y) = \kappa y^2/2$.
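The rates (41,42) themselves are not reproduced above. A plausible form, following our reading of the load dependence in [5] with the external force replaced by the instantaneous linker force $\partial_y V(y) = \kappa y$ (the prefactors $k_{ij}$ and the exact placement of the load-sharing factor $\theta^+$ are assumptions), would be

$w_{25}(y) = k_{25}\, e^{-\theta^{+}\kappa y\, d}, \qquad w_{52}(y) = k_{52}\, e^{(1-\theta^{+})\kappa y\, d}$

for the mechanical transition, and

$w^{\mathrm{chem}}_{ij}(y) = k_{ij}\, e^{\mu^{\pm}_{ij}}\, \frac{2}{1 + e^{\chi_{ij}\kappa y\, d}}$

for the chemical transitions, where the nucleotide factor $e^{\mu^{\pm}_{ij}}$ enters the direction of the transition that binds the corresponding species and the symmetric factor $2/(1 + e^{\chi_{ij}\kappa y d})$ is the chemical load dependence of [5].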
The coarse-grained rates for the mechanical transition are shown in Fig. 13. With decreasing $\gamma$, the rates approach their fast-bead limit, which corresponds to the rates used in [5], while strong deviations occur for finite $\gamma$, especially for assisting external forces. The friction coefficient of a probe of size 500 nm as in [55] can be calculated using Stokes' law, yielding $\gamma \simeq 7.7\cdot10^{-5}\,\mathrm{s}/d^2$. For friction coefficients in this range (light green (light gray) line with triangles), our coarse-grained rates show a distinct deviation from the one-particle rates (solid black lines).
[Fig. 12 caption fragment: network adopted from [5]. The transition between states 2 and 5 is purely mechanical and corresponds to a step of length $d$, whereas all other transitions are pure chemical transitions. The motor model includes three cycles: F, which, in the + direction, includes ATP hydrolysis and forward stepping; B, which includes ATP hydrolysis and backward stepping in its + direction; and a pure chemical cycle (around the circle) that includes hydrolysis/synthesis of two ATP.]
[Fig. 13 caption fragment: (Color online) Coarse-grained rates for the mechanical transitions (with $y$-dependence) for various $\gamma$ in the range $0.077\,\mathrm{s}/d^2 \ge \gamma \ge 7.7\cdot10^{-10}\,\mathrm{s}/d^2$ (from bottom to top). The rates approach the one-particle rates from [5] (solid black lines). Parameters: $\kappa = 10\, d^{-2}$ [55], $c_T = 0.001$ M, $c_D = c_P = 10^{-9}$ M (estimated), $\theta^+ = 0.65$, $\chi_{ij} = 0.25, 0.15$, $k_{12}\exp[\mu^{\mathrm{eq}}_T]/c^{\mathrm{eq}}_T = k_{45}\exp[\mu^{\mathrm{eq}}_T]/c^{\mathrm{eq}}_T = 2\cdot10^6$ (Ms]
However, the average velocity (obtained from our coarse-grained rates) as a function of the external force coincides very well, for almost all $\gamma$, with the velocity curve obtained from the bare motor model, see Fig. 14. As for the F$_1$-ATPase model discussed in section III C, this agreement is due to the fact that the velocity involves only the difference of the rates multiplied with the marginal distribution. If one investigates only force-velocity curves, the discrepancies between the coarse-grained rates and the one-particle rates are hardly visible.
[Fig. 14 caption fragment: average velocity for the model with probe particle (colored lines with symbols) and the one-particle model from [5] (solid black line). Right: coarse-grained rates for chemical transitions (with $y$-dependence) for $\gamma = 0.077\,\mathrm{s}/d^2$. Other parameters as given in Fig. 13.]
In contrast to the coarse-grained rates of the F$_1$-ATPase models, the coarse-grained rates for the mechanical transition of the kinesin model show more structure, especially for negative, i.e., assisting, external forces. Since the kinesin model contains several internal motor cycles, the dominant cycle can change depending on the external force, leading to crossover regimes with changing weight of the probabilities $P_i$. The dependence of the coarse-grained rates for chemical transitions on the external force is visible in Fig. 14. Although there is no explicit dependence on external forces for pure chemical rates, since $d^{\alpha}_{ij} = 0$, $j^{\alpha}_{ij}$ and $P_i$ depend on $f_{\mathrm{ex}}$ via $y$. The operational current for transitions within the F-cycle in the + direction decreases with increasing $f_{\mathrm{ex}}$, whereas the operational currents within the B-cycle in the + direction slightly increase with $f_{\mathrm{ex}}$, which can be explained intuitively since the motor prefers "backward" cycles for large opposing forces. However, all coarse-grained rates decrease with increasing $f_{\mathrm{ex}}$, similar to the bare motor rates (41,42), which decrease with larger $y$, a situation that is more likely to appear for large external forces.
IV. EXPERIMENTAL IMPLEMENTATION
In order to apply the coarse-grained description in practice, one has to determine the marginal distributions $P_i$, the operational currents $j^{\alpha}_{ij}$ and the free energy differences $\Delta F^{\alpha}_{ij}$. For multi-state motors, this is a rather challenging task, since only a few quantities can be extracted reliably from the experimentally measured trajectory of the probe. Note, however, that this problem is not exclusive to our approach but is inevitable whatever method is used to infer motor properties from such trajectories.
In the following, using the 90-30 model for the F$_1$-ATPase, we illustrate how these quantities can be estimated. If all motor transitions involve mechanical transitions with different step sizes, the plateaus in the probe trajectory can be assigned to specific corresponding motor states. Since after a large enough time interval all possible transitions will have occurred, one is also able to reconstruct the links connecting the states. The marginal distributions $P_i$ are then given as the fraction of time that the corresponding motor state is occupied. For the two-state model of the F$_1$-ATPase, we assign plateaus in the probe trajectory that are followed by a fast 90° forward or 30° backward displacement to motor state $i = 1$, and plateaus that are followed by a fast 90° backward or 30° forward displacement to $i = 2$. In principle, there are several possibilities to reconstruct hidden variables from partially visible trajectories [59]-[61]. Here, we use a simple algorithm which sets $i = 2$ if four consecutive data points are within a specific range around 90°, and otherwise $i = 1$. The marginal distributions $P_1$, $P_2$ are then given by the fraction of data points with assigned $i = 1, 2$.
If the motor is not very complex, the operational currents $j^{\alpha}_{ij}$ can be obtained rather easily, since they are precisely the net currents between two motor states. For unicyclic motors, all operational currents are equal to the average velocity divided by $d$; the operational current of an ATP binding transition is the net disappearance rate of ATP in the solution (given that there are no other ATP binding reactions), etc. If all motor transitions involve mechanical transitions with different step sizes, the operational currents between any two states can be obtained by counting the number of transitions of a specific step size from $i \to j$, $n^{\alpha}_{ij}$, and from $j \to i$, $n^{\alpha}_{ji}$. The (time) average of this current, using one long trajectory of length $t_{\mathrm{tot}}$, is then given by

$j^{\alpha}_{ij} = (n^{\alpha}_{ij} - n^{\alpha}_{ji})/t_{\mathrm{tot}}$.

In our example, in order to estimate $j^{90}_{12}$ we have to count the number of sudden displacements of "size" 90°, either from the trajectory of the probe directly or from the reconstructed trajectory of the motor using the assignment rule mentioned above. If the time resolution of the trajectory is very coarse, or if the reconstruction method is rather inaccurate, jumps that consist of fast consecutive 90° and 30° jumps, with apparent step size 120°, will appear; these have to be included in the number of 90° (and also 30°) jumps. Fig. 15 shows a reconstructed motor trajectory obtained with the algorithm mentioned above, where a trajectory of the probe from our simulations served as "experimental data". Compared to the original motor trajectory, this reconstruction captures the average dynamics quite well. Large fluctuations of the probe can generate additional apparent motor jumps in the reconstructed trajectory that are absent in the original one.
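A minimal Python sketch of this reconstruction follows. The window around 90°, the four-point rule and the phase convention are illustrative assumptions, not the exact thresholds used in the paper.

import numpy as np

def reconstruct_states(angles, lo=75.0, hi=105.0, run=4):
    # Assign motor state i=2 where at least `run` consecutive probe angles
    # (phase within one 120-degree period) lie in a window around 90 degrees.
    phase = np.mod(angles, 120.0)
    in_window = (phase > lo) & (phase < hi)
    states = np.ones(len(angles), dtype=int)
    count = 0
    for k, flag in enumerate(in_window):
        count = count + 1 if flag else 0
        if count >= run:
            states[k - run + 1 : k + 1] = 2
    return states

def marginals(states):
    # Occupation fractions as estimates of P_1 and P_2.
    return np.mean(states == 1), np.mean(states == 2)

def current_90(angles, states, dt):
    # Net operational current on the 90-degree edge:
    # forward (1 -> 2, positive step) jumps minus backward (2 -> 1,
    # negative step) jumps, per unit time.
    n_fwd = n_bwd = 0
    for k in np.flatnonzero(np.diff(states) != 0):
        step = angles[k + 1] - angles[k]
        if states[k] == 1 and step > 0:
            n_fwd += 1
        elif states[k] == 2 and step < 0:
            n_bwd += 1
    return (n_fwd - n_bwd) / (len(angles) * dt)

For a unicyclic two-state model, the value returned by current_90 can be cross-checked against the average velocity divided by the full step length.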
Finally, the estimation of the free energy difference $\Delta F^{\alpha}_{ij} = F_j - F_i - \Delta\mu^{\alpha}_{ij}$ is slightly more involved. In equilibrium ($\Delta\mu = 0$, $f_{\mathrm{ex}} = 0$), detailed balance holds, with the Boltzmann distribution $p^{\mathrm{eq}}_i(y) = P^{\mathrm{eq}}_i \exp[-V(y)]/N$. Inserting this expression yields

$P^{\mathrm{eq}}_j/P^{\mathrm{eq}}_i = \exp[-\Delta F^{\alpha,\mathrm{eq}}_{ij}]$

for the marginal distributions in equilibrium. Note that $\Delta\mu^{\alpha,\mathrm{eq}}_{ij} = 0$ if the corresponding transition comprises only binding or release of nucleotides. Thus, the equilibrium free energy difference $\Delta F^{\alpha,\mathrm{eq}}_{ij}$ (which explicitly depends on the equilibrium concentrations) can be obtained from the ratio of the marginal distributions under equilibrium conditions. Using $\mu_i = \mu^{\mathrm{eq}}_i + \ln(c_i/c^{\mathrm{eq}}_i)$, we find

$\Delta F^{\alpha}_{ij} = \Delta F^{\alpha,\mathrm{eq}}_{ij} \mp \ln(c_k/c^{\mathrm{eq}}_k)$,

with $k = T, D, P$ and the sign depending on which binding or release event corresponds to the transition $ij,\alpha$ [62]. Hence, the free energy difference $\Delta F^{\alpha}_{ij}$ needed for the coarse-grained rates can be expressed by the equilibrium free energy difference $\Delta F^{\alpha,\mathrm{eq}}_{ij}$ obtained from experimental data at equilibrium conditions and by the nucleotide concentrations relative to the equilibrium concentrations corresponding to the conditions used to obtain $\Delta F^{\alpha,\mathrm{eq}}_{ij}$. For the 90-30 model, we have $P^{\mathrm{eq}}_2/P^{\mathrm{eq}}_1 = \exp[-\Delta F^{90}_{12}] = \exp[-\Delta F^{30}_{12}]$, with $\Delta F^{90}_{12} = \Delta F^{30}_{12}$ since $\Delta\mu = 0$ in equilibrium.
Once these quantities have been estimated, no additional fit parameters are needed or left. All concentrations as well as the external force are usually known from the experimental setup. To obtain the coarse-grained rates from the probe trajectory of our 90-30 model, we then proceed as follows. First, we choose equilibrium conditions and obtain $\Delta F_{12}$ from the ratio of marginal distributions. Then, we change to nonequilibrium concentrations and estimate $P_1$, $P_2$ and the operational current $j^{90}_{12}$. The coarse-grained rates then follow from eqs. (24,25). A comparison of the coarse-grained rates and related quantities obtained from the full theoretical model and from the reconstruction estimated using the probe trajectory is shown in table I. We find quite good agreement between the original and the reconstructed quantities, with a maximum error of 14%, except for the $\Omega^{30}_{ij}$ rates, which have a maximum error of 24%. The 90-30 model thus provides a useful demonstration of the experimental applicability of the coarse-graining method, showing that it is possible to estimate the coarse-grained rates from experimentally accessible data if the underlying motor network is not too complex. Considering the simplicity of the applied reconstruction method, the accuracy of the estimates is rather encouraging.
V. INVARIANCE OF ENTROPY PRODUCTION AND EFFICIENCY
An important question for any coarse-graining method concerns its effect on entropy production. In general, a coarse-grained description without imposed time-scale separation or detailed balance for the eliminated variables often underestimates the entropy production of the system [29-32,40]. In this section, we show that for the type of models considered here, our coarse-graining method conserves the entropy production even if there is no time-scale separation between the eliminated and the remaining degrees of freedom.
Since transitions can be uniquely attributed to the motor or the probe particle, the total entropy production of the system [28] can be split into two parts, $\dot S_{\mathrm{tot}} = \dot S^{p}_{\mathrm{tot}} + \dot S^{m}_{\mathrm{tot}}$, analogous to bipartite or partially masked systems [63,64], where $j^{x}_{i}(y) = \bigl((\partial_y V(y) - f_{\mathrm{ex}})\,p_i(y) + \partial_y p_i(y)\bigr)/\gamma$ is the current due to the motion of only the bead for fixed $i$. Obviously, both $\dot S^{p}_{\mathrm{tot}}$ and $\dot S^{m}_{\mathrm{tot}}$ are non-negative. The total entropy production (51) can be calculated using the LDB condition (19), yielding eq. (52). Using partial integration, it can easily be seen that the parts involving $V(y)$ cancel, i.e., the energy of the linker is constant on average. The total entropy production is then given by the chemical free energy consumption that is not transformed into mechanical power. For the coarse-grained description, the total entropy production (53) contains only contributions from the effective jump process. Using the LDB condition for the coarse-grained rates (20) and the condition on the operational current (21) yields precisely (52). For these models, for which the state space of the eliminated degree of freedom does not contain entropy-producing internal cycles, the average total entropy production in the NESS remains invariant under our coarse-graining procedure. It is also instructive to apply the entropy-splitting scheme introduced in [31] to our coarse-graining procedure. In [31], it was shown that the total entropy production can be written as a sum of the coarse-grained entropy production (53), plus a contribution of the microstates corresponding to a mesostate (which are eliminated during coarse-graining), plus a contribution due to the fact that jumps between mesostates can occur involving different microstates. In our framework, the total entropy production is already recovered by the coarse-grained entropy production. The two additional contributions, which correspond to the total entropy production of the probe particle and the average total entropy production of the motor minus the coarse-grained entropy production, cancel each other.
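For the jump part, the invariance can be traced compactly. This is our rendering, using the LDB and current conditions in the form reconstructed above (with our assumed sign conventions) and the NESS node constraint $\sum_{j,\alpha} j^{\alpha}_{ij} = 0$:

$\dot S^{\mathrm{cg}}_{\mathrm{tot}} = \sum_{i<j,\alpha}\bigl(P_i\Omega^{\alpha}_{ij} - P_j\Omega^{\alpha}_{ji}\bigr)\ln\frac{P_i\Omega^{\alpha}_{ij}}{P_j\Omega^{\alpha}_{ji}} = \sum_{i<j,\alpha} j^{\alpha}_{ij}\Bigl[\Delta\mu^{\alpha}_{ij} - (F_j - F_i) - f_{\mathrm{ex}} d^{\alpha}_{ij} + \ln\tfrac{P_i}{P_j}\Bigr].$

The $F_i$ and $\ln P_i$ contributions telescope to zero over each node, leaving $\sum_{i<j,\alpha}\Delta\mu^{\alpha}_{ij}\, j^{\alpha}_{ij} - f_{\mathrm{ex}}\, v$ with $v = \sum_{i<j,\alpha} d^{\alpha}_{ij}\, j^{\alpha}_{ij}$, i.e., the chemical free energy consumption minus the mechanical power, as stated in eq. (52).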
We finally show that our coarse-graining procedure also preserves the energy transduction, or thermodynamic, efficiency $\eta_T$, defined as the ratio of the extractable power $\dot W_{\mathrm{out}}$ and the rate of chemical energy input [65]. For the systems we have studied so far, as long as the external force is smaller than the stall force, the power output is given by $\dot W_{\mathrm{out}} = f_{\mathrm{ex}}\, v$ and the power input by $\sum_{i<j,\alpha} \Delta\mu^{\alpha}_{ij}\, j^{\alpha}_{ij}$, which leads to the efficiency

$\eta_T = \frac{f_{\mathrm{ex}}\, v}{\sum_{i<j,\alpha} \Delta\mu^{\alpha}_{ij}\, j^{\alpha}_{ij}}$,

which is the same in the coarse-grained description, since $v$, $j^{\alpha}_{ij}$ and $\Delta\mu^{\alpha}_{ij}$ are conserved. For motor models with tight coupling, or multi-state models with a single motor cycle, the rate of chemical energy input equals $v\,\Delta\mu/d$ and the efficiency reduces to $\eta_T = f_{\mathrm{ex}}\, d/\Delta\mu$. In general, however, any idle cycle of the motor increases the rate of chemical input relative to the velocity and therefore reduces the efficiency.
VI. STALL FORCE AND RATE ANOMALY
Coarse-graining multicyclic motor models as developed here reveals a remarkable feature concerning the stall force, with significant implications for the interpretation of experimental data. As an example, consider the kinesin motor for the parameters chosen in Fig. 13. Fig. 16 shows that the stall force is a function of the size of the attached probe particle. Generally speaking, the stall force can indeed depend on the size of the probe, since the network of the full system comprises more cycles than the coarse-grained or bare motor network, see Fig. 7. Varying the size of the probe, the relative weight of the cycles in the full system, and hence their contribution to an operational current, can change, yielding a varying stall force. Thus, the experimentally obtained stall force corresponds to the stall conditions of the motor-probe complex but does not necessarily represent the stall conditions of the bare motor. If one is interested in the latter, one should use very small probe particles, since the limit of vanishing friction coefficient $\gamma$ is equivalent to applying the force directly to the motor. As discussed below, the stall force is independent of $\gamma$ for one-state or unicyclic multi-state motor models. Hence, an experimentally observed variation of the stall force with probe size can be used as proof that the motor is indeed multicyclic. The varying stall force also has implications for the transition rates. In Fig. 13, a close look around $f_{\mathrm{ex}} d = 14$ shows that data points are missing there, for the following reason. For all investigated models, we find that if, as a function of the external force, the sign change of an operational current depends on the friction coefficient $\gamma$, the coarse-grained rates corresponding to this transition can become piecewise negative. This phenomenon occurs when the affinity of the affected transition, $\ln\bigl[P_i\Omega^{\alpha}_{ij}/(P_j\Omega^{\alpha}_{ji})\bigr]$, has the opposite sign of $j^{\alpha}_{ij}$. An isolated sign change in the denominator of eqs. (24,25) leads to a pole in the corresponding rate. Such an anomaly in $\Omega^{\alpha}_{ij}$ necessarily implies a corresponding one in $\Omega^{\alpha}_{ji}$, since the ratio of the effective rates obeys the local detailed balance condition, which enforces the same sign for both rates. In this range, the coarse-graining scheme introduced here fails to produce physically acceptable rates. In practice, one should discard the results at least when either a rate is negative or becomes larger than the rate for vanishing bead size. In Fig. 17, where we zoom into the range around the stall force, this range is shaded gray. Taken at face value, this phenomenon looks like a shortcoming of our approach. It is the price to pay for requiring, over the full parameter range, both the local detailed balance condition and the correct net currents from any one motor state to any other. While the negative rates do not allow for a sensible physical interpretation, they can nevertheless be used to calculate average quantities and yield, e.g., the correct entropy production as shown in section V.
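Using the closed form sketched in Sec. III A (itself based on our assumed sign convention), the anomaly can be made explicit:

$\Omega^{\alpha}_{ij} = \frac{j^{\alpha}_{ij}}{P_i - P_j\, e^{\Delta F^{\alpha}_{ij} + f_{\mathrm{ex}} d^{\alpha}_{ij}}}$

has a pole where the denominator, whose sign is that of the affinity $\ln[P_i\Omega^{\alpha}_{ij}/(P_j\Omega^{\alpha}_{ji})]$, crosses zero. If the operational current $j^{\alpha}_{ij}$ does not change sign at the same $f_{\mathrm{ex}}$, the rate diverges there and is negative on one side of the pole.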
This stall force anomaly, with a corresponding range of negative rates, occurs neither for any one-state motor model nor for unicyclic motors around stall conditions, since then only one motor cycle contributes to all cycles of the full system, causing the zero of $j^{\alpha}_{ij}$ and of $\ln\bigl[P_i\Omega^{\alpha}_{ij}/(P_j\Omega^{\alpha}_{ji})\bigr]$ to occur at the same $f_{\mathrm{ex}}$.
[Fig. 17 caption fragment: parameters as in Fig. 13. Near the stall force at $f_{\mathrm{ex}} d \simeq 14$ these rates exhibit a pole. In the gray shaded range, they should not be interpreted as physical transition rates.]
We also found several multicyclic motors that do not lead to negative rates, e.g., the kinesin model if one assumes the chemical rates to be independent of $y$. A derivation of the precise conditions under which, for multicyclic motor models, a pair of effective rates diverges or becomes negative must be left to future work. We stress, however, that in all examples shown in this study, this anomaly occurs only in the narrow range shown in Fig. 17. From a practical point of view, it may therefore not be as relevant as it is intriguing from a theoretical perspective.
VII. CONCLUSION
Most experiments on molecular motors comprise some kind of probe particle. Therefore, any theoretical modelling with parameters estimated from experimental data will explicitly or implicitly contain characteristics of the probe particle.
In this paper, we have introduced a systematic coarse-graining method that allows us to reduce motor-bead models to effective one-particle motor models. This coarse-graining procedure provides a compromise between a one-particle description that is simple to handle and a detailed model comprising the dynamics of the full system. It yields an effective one-particle model maintaining the true motor network, in which the influence of the probe is naturally incorporated without any additional assumptions, since the simplification of the description takes place a posteriori. Any external force acting on the probe then acts on the effective motor directly. The coarse-grained rates obey an LDB condition and yield the correct net currents. Fixing the marginal distribution and the average currents still leaves freedom in how to choose the rates; only with the LDB condition are the effective rates determined uniquely.
Applying the coarse-graining procedure to motor-bead models, we find that in general the coarse-grained rates do not show a single exponential dependence on the external force, in contrast to what is often assumed for mechanical transition rates in one-particle models. Only in the often unrealistic limit of fast bead relaxation do the coarse-grained rates reduce to the corresponding one-particle rates.
In the absence of external forces, the coarse-grained rates are in general not proportional to the ATP concentration, even if the motor rates obey mass action law kinetics. This feature originates from the drag effect of the probe (due to friction) that is incorporated in the coarse-grained rates. For the same reason, the average velocity shows a sub-linear dependence on the ATP concentration even for a one-state motor model. Assuming an a priori one-state model with the external force acting directly on the motor, one would have either to use a rather counterintuitive, complex force-dependence of the transition rates or to introduce additional motor states in order to obtain the sublinearly growing velocity caused by the drag of the probe. In a one-particle description, the effect of large probe particles on the dwell time distributions could also be mistaken as a signature of additional motor states, thus leading to an overly complex motor network [21,56].
Considering the influence of the coarse-graining procedure on the stochastic thermodynamics of the system, we show that the total entropy production remains invariant under coarse-graining. This is due to the fact that, on the one hand, the state space of the eliminated degree of freedom contains no entropy-producing cycles. On the other hand, the design of the coarse-graining procedure is also important: it has to conserve the motor network as well as the net currents, and provide transition rates fulfilling an LDB condition. Likewise, the thermodynamic efficiency remains invariant in our scheme.
Our coarse-graining method conserves average quantities like the entropy production or the operational currents, although eliminating the dynamics of the probe particle strongly affects the cycle structure of the full system. It was found that, in order to also preserve fluctuations of current observables in the long-time limit, coarse-graining methods should conserve the cycle structure of the full system [36,37].
From the experimental point of view, in order to obtain the simpler effective model, the underlying mesoscopic modelling need not be known, since all relevant quantities enter the coarse-grained description via the net currents and the marginal distributions which, in principle, can be extracted from the experimental data, as we have demonstrated using a two-step model for the F$_1$-ATPase.
The main advantage of the coarse-graining procedure introduced here is that, once the rates have been obtained from experimentally accessible quantities, they automatically fulfill an LDB condition and provide the correct average currents, i.e., velocity, entropy production, hydrolysis rate, etc.
For multicyclic motors, the coarse-graining procedure can yield rates that have poles and become (piecewise) negative. If this scenario occurs, the coarse-grained rates lack a physical interpretation as transition probabilities in this range, but they can still be used to calculate average quantities. For this class of motors, the stall force typically depends on the size of the probe particle, i.e., on the friction coefficient. Naively applying a one-particle model to such an experimental setup would not allow one to determine the energy transduction mechanism of the motor correctly. For one-state motors, the coarse-grained rates are always positive.
So far, we have discussed coarse-graining only under NESS conditions. In principle, the coarse-graining procedure as introduced in sections II B and III A can also be applied to non-stationary states, e.g., if the nucleotide concentrations are not constant and $\Delta\mu$ decreases with time [62,66]. Such a scenario would yield time-dependent $P_i$'s, net currents and LDB conditions, and therefore also time-dependent coarse-grained rates.
Further generalizations might include other types of models representing the full system. While developed here for discrete motor models, the coarse-graining procedure should also be applicable to continuous motors moving in a tilted periodic potential, where the potential minima become the discrete states of the coarse-grained effective motor. The introduction of the index $\alpha$ in principle also accounts for more involved potentials or free energy surfaces that depend on both the motor and the probe state.

ACKNOWLEDGMENTS

E.Z. thanks P. Pietzonka for valuable hints concerning numerical implementations.
Appendix: Limiting case: Large applied force

In the limit of large external forces, $f_{\mathrm{ex}} \to \infty$, the coarse-grained rates (7,8) can be expressed as

$\Omega_+ = \frac{v/d}{1 - e^{-\Delta\mu + f_{\mathrm{ex}} d}}, \qquad \Omega_- = \frac{v/d}{e^{\Delta\mu - f_{\mathrm{ex}} d} - 1}.$

While $\Delta\mu$ is independent of the external force, the average velocity is a function of the external force,

$v = \langle \partial_y V(y) - f_{\mathrm{ex}}\rangle/\gamma = \kappa\langle y\rangle/\gamma - f_{\mathrm{ex}}/\gamma.$

It becomes negative for forces larger than the stall force $\Delta\mu/d$, which ensures that both $\Omega_+$ and $\Omega_-$ are positive. If there is no time-scale separation between the dynamics of motor and probe, $\langle y\rangle$ grows linearly in $f_{\mathrm{ex}}$ for $f_{\mathrm{ex}} \to \infty$, with a slope smaller than $1/\kappa$. With time-scale separation, on the other hand, we have $\langle y\rangle = f_{\mathrm{ex}}/\kappa$. Note that within time-scale separation, the average velocity has to be calculated using the average velocity of the motor, eq. (4), since the "average velocity" of the probe, $\langle\partial_y V(y) - f_{\mathrm{ex}}\rangle/\gamma$, is zero as a result of the fast-bead limit of eq. (3). Due to the linear dependence of $\langle y\rangle$ on $f_{\mathrm{ex}}$, the average velocity, and therefore also $\Omega_-$, are then proportional to the external force, whereas the exponential factor dominates for $\Omega_+$,

$\Omega_- \sim f_{\mathrm{ex}}/(\gamma d)$. (A.5)

In the opposite limit of a large assisting force, $f_{\mathrm{ex}} \to -\infty$, the coarse-grained rates (7,8) behave analogously, with the roles of $\Omega_+$ and $\Omega_-$ interchanged. This simple analysis clearly shows that the coarse-grained rates do not coincide with the often a priori assumed single exponential force-dependence of one-particle rates. Within our numerical analysis, the asymptotic behaviour appears for $|f_{\mathrm{ex}}| \gtrsim 500/d$. The regime of large forces shown in Figs. 2 and 3 is not yet the asymptotics. However, since $\langle y\rangle$ is also linear in $f_{\mathrm{ex}}$ in this region, yet with a different slope, $v$ is still proportional to $f_{\mathrm{ex}}$.
| 2015-01-29T21:12:26.000Z | 2015-01-29T00:00:00.000 | {
"year": 2015,
"sha1": "5fe8e5418f71d135d82d915c42885b3c2104cf21",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1501.07616",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "5fe8e5418f71d135d82d915c42885b3c2104cf21",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Mathematics",
"Physics",
"Biology",
"Medicine"
]
} |
53631291 | pes2o/s2orc | v3-fos-license | Evaluating Sensitivity to Different Options and Parameterizations of a Coupled Air Quality Modelling System over Bogotá , Colombia . Part I : WRF Model Configuration
Meteorological inputs are of great importance when implementing an air quality prediction system. In this contribution, the Weather Research and Forecast (WRF-ARW) model was used to compare the performance of the different cumulus, microphysics and Planet Boundary Layer parameterizations over Bogotá, Colombia. Surface observations were used for comparison and the evaluated meteorological variables include temperature, wind speed and direction and relative humidity. Differences between parameterizations were observed in meteorological variables and Betts-Miller-Janjic, Morrison 2-moment and BouLac schemes proved to be the best parameterizations for cumulus, microphysics and PBL, respectively. As a complement to this study, a WRF-Large Eddy Simulation was conducted in order to evaluate model results with finer horizontal resolution for air quality purposes.
Introduction
Air quality is one of the main issues that are concerned by current atmospheric research.Global air pollution has an impact on human health, climate change and on the physics and chemistry of the atmosphere [1].Air pollution has become one of the most important interests of the local authorities in Latin America and represents the greatest social and economic costs of environmental damage after water pollution and natural disasters in Colombia [2].Urban agglomerations as Bogotá are major sources of regional and global atmospheric pollution with the pertinent environmental impact [3].Bogotá (4.6˚N, 74.1˚W) is the 5 th most populated city with around 7.4 million inhabitants in Latin America and one of most polluted cities [4], emissions from traffic linked to the increasing numbers of vehicles contribute to this concern [5] [6].
Air quality modelling has become a useful tool for administrations since it provides them a method to deal with human resources, production, emergency proceedings or to improve existing air quality plans and test abatement strategies [7].There are several air pollution modelling studies in South America [8] [9] but none of them are developed in Colombia or nearby countries.There are a few works focused on Colombia [10] [11] which analyze sensitivity of a mesoscale meteorological model to couple with an emission model and with a photochemical model.Together, these three models compose an air quality modelling system [12].Accordingly, implementing an air quality system in a particular area starts with setting up the meteorological model (the final aim of this study) which provides inputs for emission and photochemical models.The main interest of this work is to evaluate how the Weather Research and Forecasting (WRF) mesoscale meteorological model responses to different parameterizations during high air pollution episodes, and more specifically during days of high ozone concentrations in Bogotá.
Mesoscale meteorological models allow us to study and simulate meteorological variables.These models have a wide range of physical options to set up.It is a fundamental factor when configuring a model the selection of the physical parameterizations that are used to simplify somehow unresolved processes applying diverse approximations, the determination of the suitable model setup is one of the challenges when establishing a mesoscale model in a new region.Apart from the existence of a large array of available options, the best combination for one region is not necessarily applicable to another [13].
In this paper, we focus our attention on the meteorological modelling system.Exploring its sensitivity to variation in its configuration options, it is an important model evaluation exercise [14].In terms of air quality applications, the simulated concentration depends on the accuracy of this meteorological model and the importance of meteorological inputs on air quality modelling has been clearly stated [15] [16] so this analysis allow us to reduce the total uncertainty associated to the air quality modelling system since meteorological outputs are inputs both in the emission and photochemical models.Few studies of WRF sensitivity to diverse parameterizations exist over tropical regions, and most of them are related to PBL parameterization schemes [17]- [19].ARW (Advanced Research WRF) core has been used to obtain meteorological fields.Meteorological outputs were evaluated by means of statistical techniques.Numerical deterministic evaluation has been realized to compare modelling results with measurements.
Description of the studied area is presented in Section 2.1, as well as simulation domains and selected episodes.A characterization of the model and the methodology to evaluate it is presented in Sections 2.2 and 2.3, respectively.Detailed analysis of the experiments developed is presented in Section 2.4 and results obtained are presented in Section 3. Finally, some conclusions are reported in Section 4.
Methodology
In the following sections we show a more detailed description of the studied area features, the simulation domains and periods analyzed as well as a more comprehensive explanation of the modelling approach and model evaluation.
Studied Area, Simulation Domains and Episodes Selected
Following the aim of implementing an air quality modelling system in Colombia, Bogotá was chosen to perform the WRF model sensitivity analysis.
Bogotá is the capital of Colombia, the fourth biggest country in South America, which is divided into 32 departments and one capital district (Figure 1): Bogotá, the capital of the department of Cundinamarca, is also treated as a department itself. Bogotá ranks fourth in the list of national capitals ordered by altitude, at 2625 m above sea level. It lies on a 40 km wide and 100 km long plateau placed in one of the three Andean mountain ranges which cross the country. Mountainous complex terrain borders the high plateau. Its longest river is the Bogotá River, which has shown high pollution levels in recent years. Bogotá registers an average yearly rainfall of 1013 mm and average yearly temperatures of 15°C.
Modelling Approach
The Advanced Research WRF (WRF-ARW v3.5.1) mesoscale model, developed by the National Center for Atmospheric Research (NCAR), USA, was the model chosen to conduct the simulations. It is a widely used community mesoscale model and a state-of-the-art atmospheric modelling system that is applicable to both meteorological research and numerical weather prediction. The different physical options that WRF offers can be combined in many different ways. Further details and a description of this model appear in [20]. WRF has different parameterizations for microphysics, radiation (long and short wave), cumulus, surface layer, planetary boundary layer and land surface. The initial and boundary conditions for domain D01 were supplied by the National Centers for Environmental Prediction and National Center for Atmospheric Research (NCEP/NCAR) Climate Forecast System Reanalysis (v1), with 0.5° (~55 km × 55 km) spatial resolution and 6 h temporal sampling. Numerical simulations are executed for 48 hours for each selected day, taking the first 24 hours as spin-up time to minimize the effects of initial conditions and in order to represent a complete diurnal cycle. This is a common practice in meteorological modelling for air quality applications [21].
Two-way nesting was used for the three external domains (D01, D02 and D03) and one-way nesting for D04 and D05. The vertical structure of the model includes 32 vertical layers covering the whole troposphere, with a resolution decreasing slowly with height in order to allow low-level flow details to be captured. The first 20 levels are inside the atmospheric boundary layer (below 1500 m), with the first level at approximately 16 meters, and the domain top is at about 100 hPa. The higher resolution close to the surface is a common practice in air quality studies in order to better represent the physical-chemical processes within the Atmospheric Boundary Layer [9] [12] [22]-[24]. A total of 224 simulations have been run during the project development (14 configurations × 16 simulations per configuration). The meteorological modelling system works operationally on a computing cluster owned by Meteosim S.L. with 25 nodes and more than 212 cores.
Datasets and Model Evaluation
The evaluation performed is focused on the innermost domains, D04 and D05, since the final aim of this study is to find the best model setup for high resolution simulations. Meteorological observations were provided by 10 air quality stations that belong to the Red de Monitoreo y Calidad del Aire de Bogotá (RMCAB); Figure 3 shows the location of these stations. There are several methodologies for model evaluation that complement each other [25] [26]. The approach of comparing measurements with model results through different statistics (statistical deterministic approach) has been applied. The evaluation includes the wind speed and direction at 10 m and the air temperature and relative humidity at 2 m. Temperature (K) is calculated from the WRF T2 predictions, wind speed (m·s⁻¹) and wind direction (°) are computed from U10 and V10, and relative humidity (%) is obtained from Q2 (water vapour mixing ratio at 2 m), T2 and PSFC (surface pressure) using the Magnus formula and the definition of specific humidity. The statistics have been calculated from hourly data of the model and observations, obtaining a global statistical value for the total period. These statistics provide information on how uncertain a model is with regard to the observations [27], and according to them a benchmark is given following the suggestions of Emery and Tai [28]. Table 2 shows the statistics used for model evaluation, namely the Mean Bias (MB), the Mean Absolute Gross Error (MAGE), the Root-Mean-Square Error (RMSE) and the Index of Agreement (IOA), and their benchmarks.
The circular nature of wind direction means that the statistical parameters should be carefully considered. For the wind direction evaluation, D represents the minimum difference between modelled values (M) and observed ones (O), and always lies between −180° and +180°; N is the total number of measurements for all the days considered. That is, D_i = M_i − O_i, with 360° subtracted if D_i > 180° and 360° added if D_i < −180°.
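Since the formulas of Table 2 are not reproduced here, the following Python sketch implements the standard definitions of these metrics (MB, MAGE, RMSE and Willmott's IOA) together with the circular wind-direction difference; variable names are ours, and the exact IOA variant should be checked against the paper's Table 2.

import numpy as np

def wind_direction_diff(model, obs):
    # Minimum circular difference, wrapped to [-180, 180) degrees.
    return (np.asarray(model) - np.asarray(obs) + 180.0) % 360.0 - 180.0

def evaluate(model, obs, circular=False):
    m, o = np.asarray(model, float), np.asarray(obs, float)
    d = wind_direction_diff(m, o) if circular else m - o
    stats = {
        "MB": d.mean(),                    # Mean Bias
        "MAGE": np.abs(d).mean(),          # Mean Absolute Gross Error
        "RMSE": np.sqrt((d ** 2).mean()),  # Root-Mean-Square Error
    }
    if not circular:
        # Willmott's Index of Agreement (linear variables only)
        denom = ((np.abs(m - o.mean()) + np.abs(o - o.mean())) ** 2).sum()
        stats["IOA"] = 1.0 - ((m - o) ** 2).sum() / denom
    return stats

These global statistics would be computed from the paired hourly model/observation series over the whole evaluation period, as described above.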
Extent of the Sensitivity Analysis: Experiments
Many different physics options are available in WRF for microphysics, radiation, surface layer, land surface, Planetary Boundary Layer (PBL) and cumulus. The physics options (schemes) considered in our study are listed in Table 3. We focus our attention on the study of the cumulus, microphysics and PBL schemes; the radiation and land surface schemes have been fixed for all configurations: the Rapid Radiative Transfer Model (RRTM) as the longwave radiation scheme [31] and the Dudhia scheme as the shortwave radiation scheme [32]. Only one option was tested as the land-surface model (LSM): the Noah LSM [33]. The RRTM, Dudhia and Noah LSM schemes correspond to the default WRF physical options. A total of 14 experiments have been evaluated progressively, as Table 4 shows: three of them by varying the cumulus parameterization, two experiments by varying the microphysics, and a total of eight by varying the PBL scheme. We have focused most of the configurations on PBL parameterizations due to the relevance of these schemes for air quality modelling [16]. Additionally, an experiment has been undertaken at a higher resolution to find out the effects on predictions when increasing the horizontal resolution.
Cumulus parameterization is used to predict the collective effects of convective clouds at smaller scales as a function of larger-scale processes and conditions. First, three configurations, i.e. Default, C1 and C2, were analyzed to single out the best cumulus parameterization among the Kain-Fritsch (KF) scheme [34], which has a deep and shallow convection sub-grid scheme, the Betts-Miller-Janjic (BMJ) scheme [35] [36], which is the most popular for tropical systems, and the Grell-Freitas (GF) scheme, a stochastic convective parameterization for air quality modelling [37]. Once the cumulus option was selected, experiments M1 and M2 were evaluated together with the previous "best cumulus case" and with different microphysics options. Microphysics parameterizations resolve water vapour, cloud and precipitation processes, which is why they play such a significant role in air pollution levels [15]. The three microphysics schemes considered have been the WRF Single-Moment 3-class scheme (WSM3) [39], the Stony Brook University (Y. Lin) scheme [39] and the Morrison double-moment scheme (Morrison 2-mom) described in [40].
Several authors have recently shown the impact of PBL parameterizations on air quality modelling applications; some examples are the sensitivity analyses of [15] or [41]. Consequently, taking into consideration the future air quality applications of this contribution, more experiments were tested by varying the PBL parameterization. A total of nine PBL schemes are evaluated in this study. Once the cumulus and microphysics options were selected, experiments P1, P2, P3, P4, P5, P6, P7 and P8 were tested together with the previous "best cumulus and microphysics case" and with different PBL options. The schemes used to describe vertical sub-grid-scale PBL fluxes due to eddy transport in the atmosphere are the Yonsei University (YSU) PBL [42], the Mellor-Yamada-Janjic (MYJ) PBL [35], the Asymmetric Convective Model (ACM2) PBL [43], the Quasi-Normal Scale Elimination (QNSE) PBL [44], the Mellor-Yamada Nakanishi and Niino Level 3 (MYNN3) PBL [45], the Grenier-Bretherton-McCaa (GBM) PBL [46], a TKE scheme new in the WRF version used to conduct these simulations, the Bougeault-Lacarrère (BouLac) PBL [47], a parameterization of orography-induced turbulence, the UW scheme [48] and the Total Energy-Mass-Flux (TEMF) scheme [49]. The surface layer schemes calculate friction velocities and exchange coefficients that enable the calculation of surface heat and moisture fluxes by the land-surface models, and of surface stress in the planetary boundary layer scheme. These coefficients are computed by the similarity theory (MM5 similarity) surface layer scheme (described in [20]) for the YSU, ACM2, GBM, BouLac and UW PBL schemes; by the similarity theory (Eta) surface layer scheme [36] for the MYJ PBL scheme; and by the QNSE, MYNN and TEMF surface layer schemes for the QNSE, MYNN3 and TEMF PBL schemes, respectively.
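For readers reproducing such experiments, the scheme names above map onto integer options in the WRF namelist.input file. The sketch below (a Python dictionary) lists the indices we believe apply to WRF v3.5.x; these values follow our reading of the WRF registry and should be verified against the exact model version before use.

namelist_physics = {
    "cu_physics":         {"KF": 1, "BMJ": 2, "GF": 3},
    "mp_physics":         {"WSM3": 3, "Morrison_2mom": 10, "SBU_YLin": 13},
    "bl_pbl_physics":     {"YSU": 1, "MYJ": 2, "QNSE": 4, "MYNN3": 6,
                           "ACM2": 7, "BouLac": 8, "UW": 9, "TEMF": 10, "GBM": 12},
    "ra_lw_physics":      {"RRTM": 1},      # fixed for all experiments here
    "ra_sw_physics":      {"Dudhia": 1},    # fixed for all experiments here
    "sf_surface_physics": {"Noah": 2},      # fixed for all experiments here
}

For example, a configuration such as C1 would set cu_physics = 2 (BMJ) while keeping the remaining default choices, mirroring the progressive design of Table 4.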
As a result of the evaluation and comparison of the experiments, a model setup was chosen for prospective air quality applications in Bogotá. Additionally, we have included in the analysis a modelling experiment with finer horizontal resolution (333 m) over the Bogotá centre (D05). The maximum horizontal resolution of the meteorological model places a restriction on the maximum horizontal resolution of coupled air quality modelling systems. How to couple the different meteorological scales, and how to deal with the step from regional to local scale, are state-of-the-art topics in atmospheric modelling science [50], and several approaches have been evaluated during the last years to solve this problem. Every approach uses a different framework to characterize sub-grid features. The WRF model includes several urban parameterizations, such as the Urban Canopy Model [51] or the Building Effect Parameterization [52]; both of them present a major disadvantage in that they require detailed urban databases. Moreover, WRF includes the possibility of using a large-eddy-simulation (LES) module that replaces the traditional planetary boundary layer scheme. Other approaches are based on the coupling of air quality models designed for different meteorological scales [53]-[55], or on a detailed monitoring of air quality levels to analyze sub-grid variability [56]. To complement this work, we have focused our attention on one of these approaches, and a Large Eddy Simulation configuration has been run at a finer resolution (D05).
Results and Discussion
The results of the comparison of every configuration are presented below using the proposed statistics. They have been compared for each meteorological parameter (temperature, wind speed, wind direction and relative humidity), and the configuration that showed the best results for the most meteorological parameters was selected as the "best case". It should be clarified that, in the event of a "tie" or inconclusive differences, wind direction carries the most weight when selecting the "best case", due to the importance of this variable in air quality modelling.
Cumulus Schemes
The first schemes analyzed were the cumulus parameterizations. Wind direction errors are not within the benchmark for any of the simulations run. Terrain complexity has a considerable influence on the wind direction errors, and the values found are substantially above the MB and MAGE benchmarks. However, similar values were found in comparable studies [12] [15] [16] [29]. For the rest of the parameters, all values meet the recommended thresholds (except the wind speed RMSE for the C1 (2.17 m·s⁻¹) and C2 (2.15 m·s⁻¹) configurations).
The three schemes produced similar results for temperature, with all values within the benchmarks and a slight overprediction. The C2 configuration produced the lowest MB for temperature (0.07 K), while the lowest MAGE (1.67 K) and highest IOA (0.91) corresponded to the Default configuration, although no significant differences are observed between them, as can be seen in Table 5. As for wind speed, C1 and C2 produced similar MB and RMSE values; it is the Default configuration which minimized wind speed MB (0.16 m·s−1) and wind speed RMSE (1.90 m·s−1). Nevertheless, it is the C1 configuration which produced the lowest MB (−9.30°) and MAGE (66.43°) for wind direction, and the lowest MAGE (10.45%) and highest IOA (0.80) for relative humidity. According to the results shown in Table 5 and the wind statistics for wind direction, the cumulus parameterization of the C1 configuration (BMJ cumulus scheme) provides the optimum results. For this reason BMJ was selected as the cumulus scheme for the simulations to come. Graphics in Figure 4 [left] show the mean daily temperature evolution (a), the mean daily wind speed evolution (b) and the mean daily relative humidity evolution (c) for the Default, C1 and C2 configurations compared with the same observed parameters.
Microphysics Schemes
Once the BMJ cumulus parameterization was selected, three configurations sharing this setting but varying the microphysics scheme were compared: the previous C1 "cumulus best case" using the WSM3 microphysics scheme, the M1 configuration with SBU-YLin, and M2 using Morrison 2-moment.
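These three runs differ only in the mp_physics namelist option, with cu_physics fixed to BMJ. As an illustrative sketch only: the integer codes below (WSM3 = 3, Morrison 2-moment = 10, SBU-YLin = 13) are assumed from recent WRF releases, and in practice the cumulus scheme is often disabled on the convection-permitting inner nests, a detail omitted here for brevity:

import f90nml

MP_EXPERIMENTS = {"C1": 3, "M1": 13, "M2": 10}  # label -> mp_physics code

for label, mp in MP_EXPERIMENTS.items():
    patch = {"physics": {"cu_physics": [2, 2, 2, 2],   # BMJ on all domains
                         "mp_physics": [mp, mp, mp, mp]}}
    f90nml.patch("namelist.input.base", patch, f"namelist.input.{label}")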
Results for the three configurations with different microphysics schemes are shown in Table 6. The C1 configuration produced the lowest MB for temperature (0.13 K), while the lowest MAGE (1.64 K) corresponded to the M2 configuration; no conclusive differences were found in IOA for this parameter. M1 minimized wind speed MB (0.08 m·s−1) and wind speed RMSE (1.84 m·s−1). Focusing on wind direction, it is again C1 which produced the lowest MB (−9.30°), but not the lowest MAGE (66.32°), which is given by the M2 configuration. Likewise, even though no significant differences were found in MAGE for relative humidity, M2 presented the lowest MAGE (10.34%) and, together with C1, the highest IOA (0.80). Although the microphysics parameterization is considered to be highly influential for precipitation outputs and therefore for wet deposition predictions [57], the results for relative humidity are quite similar in the three configurations (Table 6 and Figure 4(f)). According to these results, the best overall description was given by the Morrison 2-moment microphysics parameterization, which belongs to the M2 configuration.
Graphics in Figure 4 [right] show the mean daily temperature evolution (d), the mean daily wind speed evolution (e) and the mean daily relative humidity evolution (f) for the C1, M1 and M2 configurations compared with the same observed parameters. The temperature graphic (Figure 4(d)) shows that microphysics does not affect the temperature profile significantly, since similar results are observed for C1, M1 and M2. Figure 4(e) shows that wind speed tends to be overestimated by all configurations.
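Curves such as those in Figure 4 are mean diurnal cycles, i.e. each hourly value averaged over the hour of day across all simulated episodes. A minimal sketch, with illustrative variable names:

import pandas as pd

def diurnal_mean(values, times):
    """Average an hourly series by hour of day across all episode days."""
    s = pd.Series(values, index=pd.to_datetime(times))
    return s.groupby(s.index.hour).mean()

# e.g. plot diurnal_mean(t2m_model, times) against diurnal_mean(t2m_obs, times)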
PBL Schemes
The last evaluation of WRF physics options involves the PBL parameterizations. Once the Morrison 2-moment microphysics parameterization was set for the subsequent configurations as a result of the C1, M1 and M2 experiments, nine further configurations sharing this microphysics scheme but varying the PBL parameterization were compared, as Table 4 summarizes. Results are shown in Table 7. P6 produced the lowest MB (0.02 K) for temperature, while M2 did the same for MAGE (1.64 K). The PBL scheme is also influential for wind speed, a parameter slightly overpredicted under all the PBL configurations tested. P4 minimized the MB (0.07 m·s−1) and RMSE (1.73 m·s−1) for wind speed. P6 is clearly the scheme that showed the best MAGE results for wind direction (57.24°), reducing by about 12% the average MAGE over all configurations (65.38°). P6 also minimized relative humidity MB (0.26%) and relative humidity MAGE (9.30%), and improved the relative humidity IOA (0.82), with values of the three metrics that did not differ markedly from P5. P8 produced the worst results for all the metrics calculated for temperature, for wind direction MAGE and for both relative humidity MAGE and IOA, with remarkable variation between configurations (up to 18.73° difference in wind direction MAGE and 0.17 difference in IOA when compared with P6). Accordingly, P6 proved to be the best configuration, improving the results for wind direction and relative humidity. Graphics in Figure 5 [left] show the mean daily temperature evolution (a), the mean daily wind speed evolution (b) and the mean daily relative humidity evolution (c) for the M2, P1, P2, P3, P4, P5, P6, P7 and P8 configurations compared with the same observed parameters. P6 is the best configuration in forecasting the maximum wind speed and P8 the worst one. Almost all configurations accurately predict temperature, with the exception of P8, and the same conclusion can be drawn for relative humidity, for which P8 again shows the worst results with the TEMF planetary boundary layer scheme.
The P2 (ACM2 PBL scheme), P4 (MYNN3 PBL scheme), P6 (BouLac PBL scheme) and P8 (TEMF scheme) configurations turned out to be computationally more expensive than the others (by about 30% - 40%), and P7 (UW scheme) by up to 120%. In the latter case, this can be explained by a reduction of the time step from 60 s to 40 s due to computational errors.
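A quick decomposition of the P7 figure (a back-of-the-envelope sketch assuming wall-clock time scales linearly with the number of integration steps): shortening the time step from 60 s to 40 s multiplies the step count by 1.5, i.e. +50%, so the remainder of the roughly 120% overhead is attributable to the per-step cost of the UW scheme.

steps_factor = 60 / 40      # 1.5x more integration steps at dt = 40 s
total_factor = 2.2          # ~120% extra wall-clock cost reported for P7
per_step_factor = total_factor / steps_factor
print(f"implied per-step overhead: {per_step_factor:.2f}x")  # ~1.47x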
LES (Bogotá-333 m Resolution)
A finer-grid LES covering a smaller horizontal domain (D05) is nested inside the coarser grid covering a larger horizontal domain (D04). This last contribution aims to validate the model results at increased resolution, so that a future coupling of the meteorological model with the photochemical model would be feasible for air quality applications. The M2 configuration was selected to run the LES simulation because its cumulus and microphysics parameterizations had already been evaluated and gave the best results. M2-LES is compared with M2 over the smaller domain (D05), so that the validation is consistent and includes the same stations.
Comparisons between the M2-LES configuration and the M2 configuration within D05 are shown in Table 8 and Figure 5 [right]. Even though M2 (D05) improved most metrics for all the meteorological parameters, there are no conclusive differences between them. This is an interesting outcome, as it would allow us to apply the WRF-LES approach to forecast air quality at the urban scale without deteriorating the quality of the results. Figure 6 displays a wind flow comparison between M2 (D05) and M2-LES. This figure shows that, by increasing the resolution with the LES approach, we find similar wind direction patterns and lower wind speed values over the same area at the higher resolution.
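The wind fields behind a comparison like Figure 6 can be extracted directly from the model output. The sketch below assumes the standard wrfout 10 m diagnostics (U10, V10); the file names are placeholders, and direction follows the meteorological "blowing from" convention:

import numpy as np
from netCDF4 import Dataset

def wind_10m(path, t=0):
    """10 m wind speed and meteorological direction from a wrfout file."""
    with Dataset(path) as nc:
        u = np.array(nc["U10"][t])
        v = np.array(nc["V10"][t])
    speed = np.hypot(u, v)
    direction = np.degrees(np.arctan2(-u, -v)) % 360.0  # wind blows FROM here
    return speed, direction

spd_m2, dir_m2 = wind_10m("wrfout_d05_M2")        # placeholder file names
spd_les, dir_les = wind_10m("wrfout_d05_M2-LES")
print("mean 10 m speed difference (LES - M2):", float(np.mean(spd_les - spd_m2)))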
Conclusions
A total of thirteen WRF sensitivity experiments were conducted over Bogotá by varying cumulus, microphysics and planetary boundary layer schemes during high air pollution episodes of 2010, aiming to find the optimal setup of the model over this region. This work has devoted most of the configurations to PBL parameterizations because of their relevance to air quality modelling. We evaluated the differences in temperature, wind and relative humidity against observations in the innermost domain following a statistical analysis. The results show that no significant differences were found for temperature and relative humidity predictions depending on the microphysics and cumulus parameterizations, and that no configuration works perfectly for all the variables. Among all the configurations analyzed, the best for the most meteorological parameters, and the one selected as the "best case" for cumulus, microphysics and PBL, proved to be P6, which improves the results for wind direction MAGE (57.24°) and for relative humidity MB (0.26%), MAGE (9.30%) and IOA (0.82). P6 has Betts-Miller-Janjic, a popular cumulus parameterization for tropical systems, as its cumulus scheme, Morrison 2-moment as its microphysics scheme and Bougeault-Lacarrère (BouLac), a parameterization of orography-induced turbulence, as its PBL scheme. The model replicated temperature observations with a global index of agreement of 0.90. Wind direction was predicted less precisely, although the uncertainty associated with this variable plays an important role.
Finally, a WRF-Large Eddy Simulation was included in the analysis, a modelling experiment with finer horizontal resolution (333 m) over Bogotá centre (D05). This experiment was compared with the M2 configuration, and the meteorological evaluation found that, although the latter improved most metrics for all the meteorological parameters, there were no conclusive differences between them. These findings will allow us to couple WRF-LES with the emission and photochemical models at a higher resolution as an area of future work. However, the default WRF physiographic data sets (topography and land use) were used for the 333 m resolution simulations. This analysis may be extended in the future by including higher resolution data sets so that model performance of the LES approach can be evaluated accurately. To reach conclusive results, both in the WRF and WRF-LES simulations, it will be useful to extend this study to a longer period.
Figure 1. Main administrative divisions and topography map of Colombia [image generated with ArcGIS]. In Figure 2 we show the modelling domains used for the simulations. The WRF model is built over a mother domain (D01) with 27 km spatial resolution, centred at 4.6°N, 74.1°W. It comprises Central America, northern South America and part of Brazil and Peru, the Pacific and Atlantic Oceans and the Caribbean Sea, and it is intended to capture synoptic features and general circulation patterns. The first nested domain (D02), with a spatial resolution of 9 km, covers northwestern South America and part of the Caribbean Sea and Pacific Ocean. The third nested domain (D03), with a spatial resolution of 3 km, comprises the Cundinamarca department, and the fourth nested domain (D04) covers Bogotá. A fifth domain was included to take the sensitivity analysis of the WRF model further at a higher resolution (WRF-Large Eddy Simulation): it is the innermost domain (D05), with a 333 m resolution. Table 1 shows the main characteristics of the simulation domains. Simulations were conducted for 16 specific days of the year 2010 (1-2/01/2010; 5-6/01/2010; 13-14/02/2010; 27-28/02/2010; 1-2/04/2010; 21-22/08/2010; 11-12/09/2010; 11-12/12/2010). These days presented ozone concentrations above 60 ppb as a maximum eight-hour running average, according to air pollution records supplied by the Red de Monitoreo y Calidad del Aire de Bogotá (RMCAB).
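The telescoping setup in this caption maps onto the &domains block of the WRF namelist. The sketch below is illustrative only: grid dimensions and nest positions are placeholders (Table 1 is not reproduced here), and the 1 km resolution of D04 is implied by the 3:1 nesting ratio rather than stated explicitly.

import f90nml

domains_patch = {"domains": {
    "max_dom": 5,
    "dx": 27000.0,                           # D01 resolution in metres
    "parent_id":         [1, 1, 2, 3, 4],    # D01 <- D02 <- D03 <- D04 <- D05
    "parent_grid_ratio": [1, 3, 3, 3, 3],    # 27 km / 9 km / 3 km / 1 km / 333 m
}}
f90nml.patch("namelist.input.base", domains_patch, "namelist.input.domains")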
Figure 2. Modelling domains for the simulations [image on the right generated using Google Earth].
Figure 3. Meteorological stations used to conduct the analysis [images generated using Google Earth (a) and ArcGIS (b)].
Figure 4. Daily evolution of the mean hourly temperature (a), wind speed (b) and relative humidity (c) for the cumulus experiments [left], and of temperature (d), wind speed (e) and relative humidity (f) for the microphysics experiments [right].
Figure 5. Daily evolution of the mean hourly temperature (a), wind speed (b) and relative humidity (c) for the PBL experiments [left], and of temperature (d), wind speed (e) and relative humidity (f) for the finer resolution experiments [right].
Table 2. Statistics used for model evaluation.
Table 4. Experiments developed and physics parameterizations [the Default configuration uses the default schemes in WRF].
Table 5. Statistical evaluation: results for configurations varying the cumulus scheme. Results within the benchmark are highlighted in bold, and the best for each statistic is shaded in gray.
Table 6. Statistical evaluation: results for configurations varying the microphysics scheme. Results within the benchmark are highlighted in bold, and the best for each statistic is shaded in gray.
Table 7. Statistical evaluation: results for configurations varying the PBL scheme. Results within the benchmark are highlighted in bold, and the best for each statistic is shaded in gray.
Table 8. Statistical evaluation for the M2 configuration (M2-LES versus M2 (D05) comparison). Results within the benchmark are highlighted in bold, and the best for each statistic is shaded in gray.
"year": 2015,
"sha1": "a76ab1711eb32bd77e8afead7297daf3a06fce5f",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=55648",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a76ab1711eb32bd77e8afead7297daf3a06fce5f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |