text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
Evaluation of a simple method for storage of blood samples that enables isolation of circulating tumor cells 96 h after sample collection
Background Minimizing the effects of transportation on the properties of biological material is a major challenge for the scientific community. The viability of cells is important in cases where their study is urgent for evaluation of treatment response or for the study of cancer progression. Circulating tumor cells (CTCs) constitute a cell subpopulation with great importance for oncologists, because of their prognostic value. Detection and isolation of CTCs from blood samples is a routine activity in many laboratories, but concerns exist with regard to the maintenance of the cells during transportation. In this study, experiments were conducted to determine the stability of gene and protein expression in CTCs over a period of 96 h. Results Blood samples collected from healthy individuals and patients with cancer were each divided into five aliquots, which were stored at 2–8 °C and analyzed after 0, 24, 48, 72 and 96 h of storage. CTCs from patients and CD45-negative cells from healthy individuals were isolated each day using enrichment protocols, and qPCR was performed to determine expression levels of genes encoding specific biological markers. In addition, cells from breast and colon cancer cell lines were spiked into blood samples from healthy individuals, and these samples were stored and analyzed over a period of 96 h by PCR and by flow cytometry. The markers that were studied included housekeeping genes and genes associated with the response to chemotherapy, as well as genes encoding transcription factors. The results demonstrated that the expression profiles of specific genes and proteins in CTCs were not significantly affected by 72 h of storage. After 96 h of storage, expression of some genes was altered. Conclusion The transportation of blood at low temperature (2–8 °C) in the presence of the anticoagulant EDTA can protect CTCs from alteration of gene and protein expression for at least 72 h. Furthermore, under these conditions, CTCs can be detected and isolated 96 h after blood collection.
Background
In oncology, the detection and isolation of circulating tumor cells (CTCs) are useful procedures for cancer prognosis and prediction of treatment response. These cells constitute a subpopulation of cancer cells that have detached from the primary tumor and circulated through the blood stream, and which can initiate metastatic spread to other organs. A number of methods have been established for detection and isolation of CTCs, and each method has advantages and disadvantages [1]. The CTCs are generally isolated from whole blood using enrichment protocols or flow cytometry [2][3][4][5]. A major problem is the stability of this cell population during transportation; collected blood should be handled with care, as any mechanical trauma can be detrimental to the viability of the cells [6]. Specific storage-tube formulations are available, containing different anti-coagulants, but no clear data exist in relation to the stability of samples > 72 h after blood collection.
Although the ability to isolate CTCs from the samples is crucial, it is equally important to demonstrate RNA stability and maintenance of specific antigens. The aim of the present study was to identify CTCs and other cells in blood samples stored at 2-8 °C for 0, 24, 48, 72 and 96 h after collection. In addition to the detection of these cells, expression of relevant RNA and protein markers was evaluated.
Results
In the first set of experiments, qPCR was used to measure the expression levels of housekeeping genes 18S rRNA, 28S rRNA, ACTB and GAPDH, as well as the genes NANOG, POU5F1 (OCT3/4), SOX2, CD34, NES, ERCC1, DHFR, cFOS, cJUN, cMET (HGFR) and EGFR. In the second set of experiments, endpoint PCR was used to identify expression of the housekeeping genes 18S rRNA and ACTB and the genes EPCAM, KRT19, PECAM1 (also known as CD31) and CDH2 (N-cadherin).
Expression levels were measured in CD45-negative cells isolated from blood samples from two healthy individuals (Figs. 1, 2). In the samples from one individual, gene expression did not alter significantly from 0 to 96 h, but in the samples from the second individual, qPCR Ct values were significantly different after 96 h, but not after 72 h (p = 0.72), compared with the values at 0 h. In the samples from healthy individuals, no expression of NES was observed. Gene-expression levels in CTCs isolated from blood samples from two patients with cancer did not alter significantly during the 96 h period of time (p = 0.63 and p = 0.76) (Figs. 3, 4).
In endpoint PCR experiments, 18S rRNA and ACTB were expressed in all samples at all time periods. CD45negative cells from healthy individuals expressed PECAM1, but not EPCAM, KRT19 or CDH2 (Table 1). The EPCAM-positive breast-cancer cells and colon-cancer cells expressed EPCAM on all 96 h, whereas they only expressed KRT19 and CDH2 from 0 to 72 h (Table 1).
A blood sample from a healthy individual was tested for expression of CD45 and EPCAM during the 96 h time period ( Table 2). The results indicated no significant difference between samples stored for different time periods (p = 0.50).
Blood samples from healthy individuals spiked with breast-cancer cells or colon-cancer cells were tested for expression of EPCAM and cMET. Moreover, breast-cancer samples were tested for an additional marker, CD227. EPCAM was expressed in both sample types during the 96 h period, and CD227 was expressed in breast-cancer samples, but cMET was not expressed in any of the samples (Table 3). No significant differences in expression were observed (p = 0.79 for breast and p = 0.5 for colon cancer cells, respectively).
Physical properties of the cell samples, determined by flow cytometry, were compared across the 96 h time period (Figs. 5,6). Dot plots of side scatter versus EPCAM expression in colon-cancer cells showed that side-scatter properties of the cells changed with storage time, whereas EPCAM expression remained stable (Fig. 7).
Regarding the microscopically analysis, the phenotype of the cells did not change after 96 h (Fig. 8). As far as Fig. 1 Analysis of gene-expression levels in CD45-negative cells isolated from blood samples taken from a healthy individual and stored for 0-96 h. Gene expression was determined by qPCR, and threshold cycle (Ct) values for entry of reactions into the exponential-amplification phase are shown the immunocytochemistry is concerned, no change was also appeared in expression of specific markers (Figs. 9, 10, 11).
Discussion
Many tests have been validated for routine use in oncology. The transportation of biological material to appropriate laboratories can be problematic and is a source of different opinions between scientists, many of whom claim that the stability of samples alters dramatically over long time periods [6]. Blood is the most widely used biological material, and different transport conditions have been proposed for its maintenance [6]. Appropriate transport conditions can differ depending on the downstream applications. The detection and isolation of CTCs is very important, as they can be used to predict cancer progression and/or resistance to therapy [7]. Therefore, their stability should be evaluated in a time period that represents the transportation time from sampling to analysis. Transportation time and temperature are important Expression of genes was determined electrophoretically by the presence of a product band at the end of PCR. "+" indicates expression, whereas "−" indicates the absence of expression
Storage time (h) Gene expression in colon-cancer cells spiked into blood from healthy individuals
variables for determination of optimal transport conditions. Many researchers claim that samples should not be stored for longer than 6 h at temperatures between 20 and 25 °C [8]. However, according to Hankinson et al., temperature has no significant effect on biochemical markers [9]. Other groups have demonstrated that the preservation of CTCs in blood samples can be promoted by using a sugar-based cell-transportation solution, in which viability can be maintained for > 72 h [10]. Proprietary storage tubes called Cell-Free DNA ™ BCT devices are available; in these tubes, CTCs are stable for at least 4 days at room temperature [11]. CellSave Preservative Tubes are also approved for stabilization of CTCs for up to 96 h at room temperature. EDTA is the most widely used anticoagulant agent, and is suitable for preservation of cells in blood samples, as well as cell-free DNA [12].
The aim of the present study was to demonstrate that CTCs can be detected and isolated after storage for up to 96 h. According to our protocol, blood samples should be stored in tubes containing EDTA, at a temperature between 2 and 8 °C. These conditions should preserve gene expression for up to 72 h, but after 96 h of transportation, changes in expression of specific markers may be observed.
Our experimental data indicated that stored CTCs or CD45-negative cells do not lose expression of specific markers that may be correlated with stemness (such as NANOG and POU5F1) [13], or resistance to chemotherapeutic agents (such as ERCC1 and DHFR) [14,15]. The change of ΔCt parameter was also not affected and the results of gene expression were accurate. Furthermore, the study of genes implicated in key pathways, like the mitogen-activated protein kinase (MAPK) pathway [16], indicated that genes located in the nucleus were not affected. Our results indicated that RNA can be isolated from CTCs for prognostic or diagnostic purposes even 96 h after blood collection.
Our results also suggested that distinctive patterns of gene expression in cancer cell lines spiked into blood samples from healthy individuals were maintained up to 72 h with this storage protocol.
Our flow-cytometric data indicated that protein expression of markers for identification of CTCs, as well as CD45 (an antigen expressed by all leukocytes) [17], did not change significantly over the 96 h time period. Dot plots of forward scatter versus side scatter demonstrated no apparent changes to the physical properties of the cells [18]. The maintenance of biomarker expression indicated that samples stored under these conditions are sufficiently stable for shipping and further analysis, without increasing the possibility of false-positive results.
Taking everything into consideration, it is clear that the transportation of genetic material is very important when CTCs analysis is required. There are different transportation solutions, which should not be ignored depending on the study we need to perform. For example, if cfDNA analysis is needed, specific tubes protecting this type of material, are essential. However, in this situation, there is no ability to protect the mRNA and therefore the proteins for further studies [11]. The solution of using sugar based transportation solution is quite promising, since CTCs are viable after 6-7 days. If the enumeration of CTCs is required, then the last transportation solution is suggested, since no specific transportation conditions are required [10]. However, even in this case, there are no data regarding the stability of gene and protein expression. The use of collection tubes filled with EDTA and the transportation under specific conditions of temperature seems the most promising, since apart from the viability of the CTCs it enables further study at both gene and protein level. In addition, the transportation does not require specialized laboratory consumables as well as specific tubes.
Conclusion
The present study aimed to demonstrate that CTCs can be detected and isolated after storage for at least 72 h. Blood was stored in tubes containing EDTA at temperatures between 2 and 8 °C. With storage for 72 h, gene expression in isolated cells was stable, but at 96 h, changes in expression of specific markers were observed. Fig. 9 Immunocytochemical analysis of CTCs that were isolated from the samples using Pancytokeratin magnetic beads after 0 and 24 h. DAPI was used to stain the cells' nucleus. Anti-CD45-PE antibody (orange) was used to stain PBMCs and anti pancytokeratin antibody to stain CTCs On the basis of these results, future studies could expand this analysis to include more samples (including other types of cancer) and additional biomarkers.
Sample collection
Blood samples (40 ml) from two patients with cancer and two healthy donors were collected in sterile 50 ml Falcon tubes (4440100, Orange Scientific, Braine-l' Alleud, Belgium) containing 7 ml of 0.02 M EDTA (E0511.0250, Duchefa Biochemie B.V., Haarlem, The Netherlands) as an anticoagulant. One patient was a 45-year-old woman with stage II breast cancer, and the other patient was a 54-yearold man with stage III colorectal cancer. The healthy individuals were a 30-year-old man and a 28-year-old woman. The samples were placed on a roller for 30 min, divided to five clean 50 ml Falcon tubes and stored at 2-8 °C. The study was performed from April to September 2016.
Cell lines
A human breast-cancer cell line (MDA-MB 231) and colon-cancer cell line (HCT-116) were obtained from the European Collection of Cell Cultures (ECACC-HPA cultures, Salisbury, UK). Cells were cultured in 75 cm 2 flasks (5520200, Orange Scientific) at 37 °C in a 5% CO 2 atmosphere, in the recommended media supplemented with the appropriate amount of heat-inactivated fetal bovine serum (10106-169, Invitrogen, NY, USA) and 2 mM l-glutamine (G5792, Sigma-Aldrich, Munich, Germany). Approximately, 3,750,000 cells were isolated and divided into five 15 ml Falcon tubes each containing 1.5 ml of blood from healthy individuals, with EDTA anticoagulant (with a total of ~ 750,000 cancer cells in each tube).
Sample preparation
Whole-blood samples were centrifuged for 20 min at 2500×g with 4 ml polysucrose solution (Biocoll separating solution 1077, Biochrom, Berlin, Germany). Mononuclear cells, lymphocytes, platelets and granulocytes were collected after centrifugation and washed with phosphate-buffered saline (PBS) (P3813, Sigma-Aldrich). The cells were incubated in lysis buffer [154 mM NH 4 Cl (31107, Sigma-Aldrich), 10 mM KHCO 3 (4854, Merck, Darmstadt, Germany), and 0.1 mM EDTA in deionized water] for 10 min to lyse the erythrocytes. Samples were then centrifuged as above and washed with PBS. Cells from the healthy donors (non-cancer) were incubated at 4 °C for 30 min with CD45 magnetic beads (39-CD45-250, Gentaur, Kampenhout, Belgium), whereas those from patients with cancer were incubated with pancytokeratin beads (recognizing CK4, CK5, CK6, CK8, CK10, CK13 and CK18) (5c-81714, Gentaur) at 4 °C for 30 min. Following incubation, the samples were placed in a magnetic field to collect microbead-bound cells for pan-cytokeratin and negative selection was performed for CD45 cells, which were washed with PBS. Molecular Fig. 10 Immunocytochemical analysis of CTCs that were isolated from the samples using Pancytokeratin magnetic beads after 48 and 72 h. DAPI was used to stain the cells' nucleus. Anti-CD45-PE antibody (orange) was used to stain PBMCs and anti pancytokeratin antibody to stain CTCs Fig. 11 Immunocytochemical analysis of CTCs that were isolated from the samples using pancytokeratin magnetic beads after 96 h. DAPI was used to stain the cells' nucleus. Anti-CD45-PE antibody (orange) was used to stain PBMCs and anti pancytokeratin antibody to stain CTCs analysis was performed on the isolated CD45-negative cells (non-cancerous) and the pan-cytokeratin-positive cells (cancerous).
Molecular analysis
Total RNA from cultured cells was extracted using an RNeasy Mini Kit (74105, Qiagen, Hilden, Germany). Total RNA samples were evaluated spectrophotometrically, and 1 µg of each RNA sample was used as a template for cDNA synthesis using a PrimeScript RT Reagent Kit (RR037A, Takara, Beijing, China). Real-time qPCR was then performed using KAPA SYBR Fast Master Mix (2 ×) Universal (KK4618, KAPA Biosystems, MA, USA) in a final volume of 20 μl. Specific primers for each marker and for reference genes were designed using Gene Expression 1.1 software (Genamics, New Zealand). Length parameter was set between 20 and 25 bp, %GC 40-60, Tm Range 57-60 °C, 3′ End Stability from − 3 to − 12, 5′ End Stability from − 6 to − 9 and finally without ΔG Dimer and ΔG hairpin. Primer sequences were evaluated by BLAST searching to exclude those that would amplify undesired genes ( Table 4). The PCR program was as follows: initial denaturation at 95 °C for 2 min followed by 45 cycles of denaturation at 95 °C for 10 s and annealing at 59 °C for 30 s. Melting-curve analysis was performed from 65 to 95 °C with 0.5 °C increments for 5 s at each step. The qPCR products were run on agarose gels and visualized, to validate the results. ΔCt value was used for analysis of experiments. The ΔCt calculates the relative expression of a gene of interest in relation to another gene (adequate reference gene).
For the endpoint PCR, cDNA was amplified using GoTaq G2 Flexi DNA Polymerase (M7805, Promega, WI, USA) with the follow PCR program: initial denaturation at 95 °C for 5 min followed by 35 cycles of denaturation at 95 °C for 15 s, annealing at 59 °C for 15 s and extension at 72 °C for 30 s, along with a final extension step at 72 °C for 10 min. The reaction products were separated by electrophoresis on agarose gels and visualized.
The final primer concentration was 400 nM in both qPCR and endpoint PCR. In all sets of reactions, cDNA from Universal Human Reference RNA (740000-41, Agilent, CA, USA) was used as a positive control. Template-free and negative controls were also used in all experiments. All the reactions were performed in triplicate.
Flow-cytometry sample preparation and staining
Aliquots (0.5 ml) from the previously prepared 15 ml Falcon tubes, containing ~ 250,000 cells from a cancer-cell line spiked into normal whole blood, were analyzed by flow cytometry. Red blood cells were lysed using ammonium chloride, and samples were then stained with the antibodies CD45-PC7 (25-0459-42, eBioscience, Wien, Austria) and CD31-RPE (MCA1738PE, Abd Serotec, Segrate, Italy), which were used to exclude CD45 + cells and endothelial cells, respectively, during analysis of the data. The antibody EPCAM-APC (324208, Biolegend,
Gating strategy and data analysis
Data analysis was performed using the FCSExpress software (DeNovo Software V3). Gating was performed with dot plots. To exclude non-target cells, a forward scatter/ side scatter (FS/SS) dot plot was used to exclude debris, an SS/CD45 dot plot was used to exclude all whiteblood-cell populations, and an SS/CD31 plot was used to exclude all CD31 + endothelial cells. To select the EPCAM + and CD227 + populations, gates were drawn on SS/EPCAM and SS/CD227 dot plots to select the positive populations.
Microscopy evaluation
The isolated cells were evaluated microscopically during the 96 h. For each sample, cells were plated in microscopy slides, visualized in an inverted Zeiss Microscope (Primovert) and captured by using the Axiocam and the ZEN software (ZEISS, Germany). The exposure and time frame of the camera were set automatically, according to manufacturer's guide (zoom at 10 ×).
Statistical analysis
The qPCR results were tested for normal distribution with the Kolmogorov-Smirnov test. One-way ANOVA tests were also performed on the qPCR data to test for significant differences between the various samples. A significant p value was defined at < 0.05 level. Statistical analysis was performed with PAST version 2.10 [19].
Authors' contributions
PA carried out the molecular biology assays, drafted the manuscript and performed statistical analysis. DAN performed the protein-based assays, drafted the manuscript. IP supervised the assays and the manuscript. All authors read and approved the final manuscript. | 4,425.4 | 2017-09-25T00:00:00.000 | [
"Biology"
] |
Detecting Land Use Modication and Its Inuence on Resident Communities
The study intended to detecting the land use land cover changes, trends and their magnitude between 1986 and 2019 years by using GIS and remote sensing in Fagita Lekoma District, Amhara region, Ethiopia. Three satellite data set of Landsat Thematic Mapper for 1986, Enhanced Thematic Mapper Plus for 2002 and Operational Land Imager for 2019 were used generate land use and land cover maps of the study area. Post classication comparison changed detection method was employed to identify gains and losses between Land Use Land Cover classes. Socioeconomic survey, key informant interview and eld observation were also used conclude the encouragement of land use /land cover change in the study area. The result shows that cultivated land and wetland similarly decline in the entire study periods. In the 33 years, forest lands expanded by upon 200% of the original forest cover what was existed on the base year. Whereas, a result of the socioeconomic analysis the expansion of Acacia decurrens tree plantations and agricultural land are main causes of land use land cover change in the study area. The impact of this land use land cover change is more signicant on the livelihood condition and status of the study area. The land use system of the study area highly converted cultivation land into forest/tree plantation. Mainly, the expansion of Acacia decurrens tree plantation on farmland is increasing the income of local residence when compare with the previous living condition in the study area.
Background
Global land use and land cover has signi cantly changed from the past decades. Historically, the driving force for the majority of land use land cover change is population growth, although there are several interaction factors involved .The growing population has increased demand for land, trees, and water, which, coupled with tenure insecurity or the absence of clear property rights, has resulted in the overexploitation of these natural resources, and this in turn has threatened the sustainable development of agriculture, forestry, and livestock sectors (Dean, 2003).
There were signi cant global historical changes in LULC between 1700 and 1990, when the area of cropland expanded from about 3.5 million km 2 to some 16.5 million km 2 (Lambin & Geist, 2002). Even though the net loss of the global forest area had reduced signi cantly due to a large scale of afforestation reported in some countries, such as China and Vietnam, tropical deforestation has continued into the 21 st century, the world was the past (2000/2005) experienced about 0.073 million km 2 of net annual deforestation, largely due to agricultural expansion (FAO, 2010). It was experienced in feature of the poorer countries and the average annual deforestation rate, between 1990 and 2005, in low income countries was 0.5% while deforestation is lower (0.2%) in middle income countries (World Bank, 2007).
Of course agricultural activities in sub-Saharan Africa are dependent on variable climate and land use land cover change. In addition to this, farmers in Africa live on small hectare of farmland so it creates a great opportunity to divert one land use system to another. Decimation of farm size accelerates and hence obstacle efforts to increase farm productivity. Therefore, in line with augmenting agricultural productivity, looking for other way out has been put forward as an equally potent strategy for addressing household food security (Briassoulis, 2019). Through time such changes have bene cial while inversely they have had detrimental and adverse impacts on the environment and people's economic activities (Briassoulis, 2019). Studies of rates, patterns, and implications of LULC dynamics at local level can help to design appropriate land management practices, strategies and policies (Daniel, 2008). Gathering information about Land Use Land Cover changes is fundamental for a better understanding of the relationships and interactions between humans and the natural environment. Remote sensing data have been one of the most important data sources for studies of LULC spatial and temporal changes (Mes n, 2016).
Land use change status was happened through time processes. Then this change has impacts on livelihood of the people and environmental condition (Daniel, 2008). In the study area the recent history through the intervention of government and the intentionally of communities started to cover their land by vegetation especially Accacian decurrens and Eucalyptus tree plantation as local people investment at the district level. These expansions of tree plantation taken as the initial issue of land use land cover change and have impacts on the socioeconomic activities of the local people. Therefore, this study analyses historical patterns of land use/cover and consequences that occurred between over a period of three decades in the study area.
Methods And Materials
Geographical location Fagita Lekoma district is one part of the Awi Zone in Amhara region Northwestern of Ethiopia. Fagita Lekoma is bordered on the south by Banja Shekudad, on the west by Guangua, on the north by Dangila, and on the east by the Mirab Gojjam Zone. Towns in Fagita Lekoma include Addis Kidam and Fagita Lekoma woreda is located between10°57'23''-11°11'21''North latitude and 36°40'01''-37°05'21'' longitude, in Awi Zone of Amhara National Regional State. It is situated about 460 km northwest of Addis Ababa and 100 km southwest of Bahir Dar, the capital city of Amhara regional state.
Data acquisition and image classi cation approach
In order to cover the intended period of study, different types of imageries originating from different types of sensors were used. The images were obtained from United States Geological Survey (www.USGS.gov). Images from different time periods were used for the classi cation of LULC of the study area. That is multi temporal raw satellite data has been imported to Erdas Imagine 2010 image processing software. Image processing consists of processes aimed at the geometric and radiometric correction, standardization of imagery to improve our ability to interpret qualitatively and quantitatively image components. Since the United States Geological Survey (USGS) freely offers the Landsat Orthorecti ed data which consists of a global set of high-quality, relatively cloud-free Orthorecti ed TM, ETM+ and OLI imagery therefore there is no need for geometric correction but check the position. After all, land use land cover change maps and land cover statistics were generated to compare the temporal change of the study area for the past three decades by using ArcGIS 10.4 and ERDAS Imagine 2010 integrally.
Development of a Classi cation Scheme
Based on the prior knowledge of the study area and a brief reconnaissance survey a classi cation scheme was develop for the study area as following in the table 2 Based on de nitions of LULC types given above are a modi cation of Anderson's classi cation scheme in 1976.
Cultivated land
Includes annual rain fed and irrigated cultivation. Lands mostly used for cereal production in subsistence farming and mixed with bushes.
Grassland
Predominantly covered by small grasses with a small proportion of shrub and trees.
Tree plantation
Includes densely growth trees forming closed canopy. The predominant species are Bahirzaf (Eucalyptus tree), Chigegn (Acacia decurrens) and Kerkeha (Arundinaria alpine) or Bamboo forest. Note that, sparsely natural forests which are found in the study area including under tree plantations. Because which haven't a known ratio to differentiate from one others.
Wetland
Represents most plains areas with frequent ooding event during the rainy season and water table is at, near, or above the land surface for a signi cant part of most years.
settlement Area occupied by small town and settlement including market places, roads, and institution
Methods of Supervised Classi cation and Data Analysis
To perform the classi cation the maximum likelihood supervised classi er was employed. Training areas for all spectral classes were developed composing each information class to be identi ed by the classi er. Since there was more than one spectrally different signature found for each information class. The researcher used more than thirty training samples for each classi cation. A recode function was used to merge spectrally different classes to generate nal information classes.
The post classi cation approach was used for mapping detailed LULC determination. This approach is generally considered the most obvious approach to change detection. It requires the comparison of independently classi ed images of the same study area acquired over two different time periods. One advantage of using post-classi cation change detection is that data normalization for atmospheric and sensor differences between two date is not required, since images acquired for two dates are classi ed separately (Singh, 1989). Change statistics were computed by comparing values of area of one data set with the corresponding value of the second data set in each period. Percentage change to determine the trend of change can then be calculated by dividing observed change by sum of changes multiplied by 100 (Puyravaud, 2003).
Results And Discussion
Cultivated land, grassland, wetland, forest and urban area are the major LULC classes for the study area and periods.
The classi cation result of the 1986 image revealed that cultivated land constituted the largest proportion of land in the woreda with a value of (31037.4 ha) 45.82 % followed by grassland which accounts for (28104.12 ha) 41.49%. Tree plantation/forest and wetland were constituted 9.1% and 3.5% respectively. And there was smallest settlement area during these period which covers 85 ha (0.13%). As shown in Figure 27.2% and settlement area also increased in the past over fteen years which accounts about (1491 ha) 2.4%.
Land Use Land Cover Change Map
According to Table 3 conversion matrix for the year 1986 -2019, the change in the land use land cover in the study area was by enlarge attributed to expansion of tree plantation. This class has expanded at the expense of cultivated land (7486.02 ha) about 11.1%, grass land (7285.59 ha) 10.75% was shifted into tree plantation classes. There was also signi cant change of grassland to cultivated land in this period in 8815.5 ha (13%) and 8588.43 ha (12.7%) of cultivated land was also changed to grassland. Averagely, tree plantation covers showed increment in the past over three decade's period which mainly attributed to the change of cultivated land and grassland into tree plantation and settlement area respectively.
Therefore compare this result in 1986-2019 satellite image analysis, the trend of change was forest land, and settlement patterns were increased on the other hand cultivated land, grassland and wetland land use classes were also decreased. However, still now cultivated land has cover the dominant range of land in the study area. Whereas, survey data indicated that forest resources are highly increased. , 2014). This information reveals both the changes (additions and reductions) and classes that are relatively stable overtime. This information will also serve as a vital tool in management decisions. Table 4 shows that the pattern of changes in LULC between 1986 and 2019. Land used for cultivated land decreased by 21% compared with the previous amount of cover. Grassland and wetland showed similar patterns of change, with decreases of 20% and 63% respectively in the last three decades. In contrast, tree plantation cover showed a reverse trend, increasing by 200% during the same period of time. Settlement area was showed a similar pattern of change and increased by 1654% in this period. In contrast, more and more cultivated land and grassland were became decreasing their cover in compared to the previous time and replaced into tree plantation and settlement area. 1957-1982, while 2% decrease in 1982-1998. Aggregately cultivated land about 6396.2 ha (20.6%) was losing into forest and settlement area. However, still now a day cultivated land is the cover the largest parts and the basic socioeconomic activities in the study area.
Grassland
In the three decade years, this category was expanded by over 41.4% (28063.4 ha) of the original grassland cover which existed at the base year. Between 1986 and 2019, almost 31% of grassland cover was obtained from cultivated lands. This indicates that when the farmers fallow their farm land they use it as a grazing for their cattle. It is easy to convert a given land between these class categories even within year. During the overall increasing period (1986-2019), the conversion of wetland into grassland accounted for 39.5% of the total gained grassland. In the same period, this category also gained 27.1 % of its areal extent from the removal of trees. This nding disagrees with that of Gete and Hurni (2001) and Solomon (2005) forest and settlement respectively in all periods considered. While wetlands may be the most productive of ecosystems on earth, they are also the most threatened. Wetland destruction and alteration has been and is still seen as an advanced mode of development, even at the government level (Abebe and Gaheb, 2003). The area is the only part that represents the wetland class currently and its areal extent reduced when compared to the previous image results.
Tree plantation (Forests)
The average rate of change tree plantation cover was showed relatively constant or no more change between 1986 and 2002 ( rst period) which accounts about 5.1 ha/yr. (0.08%) added on the previous, and highly increased between 2002 and 2019 (second period) which accounts about 817.7 ha/year (13.1%) and 1986 and 2017 (the overall study period) was similarly to the rst and second study period which was increasing about 398 ha/year (6.4%). In 1986 the area under tree plantation cover was 6181.56 ha (9.1%) of the study area which increased into 6262.7 ha (9.24%) in 2002.
In the total tree plantation cover in 1986, about 36.7% was converted into cultivated land, 6.9% into wetland and 5.1% into grassland in 2002. Greater than half about 51.3% was the original area remained in the same category to forest. The transformation of forest land throughout the study period was due to an increasing demand for cultivated land and grassland.
That is why a large proportion of the transformed land, i.e. around 43.3% of the tree plantation cover was changed into the two classes. According to the local elders the little existing remnants of natural forests on the rst study period were protected by local
Settlement Area
Addis Kidam town, almost at the center of the woreda, is part of the study area. Even if there is information from the elders about the existence of the town in 1986 as the smallest village, it has smallest sign of town on the Landsat image. This is because the houses at a time were corrugated steels and huts that make the re ectance similar with other LULC classes. Surprisingly, the size of this town had been changed during the second study period and on the 2018 the Small towns like Fagita, Segila, Gezehera, Wazina, Ashewa afri and Chguali which are located on all direction around of Addiskdam.
In the 1986 study period the settlement area covers about 85 ha (0.13%). And also in the 2002 middle study period area covered by settlement was 280.8 ha (0.41%) and 1491 ha (2.4%) of the study area in 2019. The land use and land cover change between on the rst study period (1986)(1987)(1988)(1989)(1990)(1991)(1992)(1993)(1994)(1995)(1996)(1997)(1998)(1999)(2000)(2001)(2002) settlement area were the class which have been expanded on cultivated land and grassland respectively. And 2019 showed that about 893.5 ha of the net gain of urban area was due to the conversion of cultivated land. The remaining 454 ha and 96.7 ha were obtained from grassland and forest respectively. As focus group discussants replied that, around Addis Kidam town basically there was a community grazing land. When the town expands this area was distributed to the community and converted to built-up areas.
Implication of land use change
The socioeconomic activities like crop production, animal husbandry, and charcoal production of an area are playing an important role on the land use type of the area. In other way the changes in land use directly or indirectly affect the socioeconomic of a given society. The changes discussed from the period (1986-2018) remote sensed image classi cation the one which had more effect on the economic condition of the inhabitants is the change in relation to forest cover. According to the rsthand information obtained from the interviewees in FLWARDO, almost 90% of the woreda population is now directly or indirectly relate with the production of Acacia decurrens and its effects. On both rural and urban parts of the study area the communities are producing Acacia decurrens seedling in their compound. These created a job of them without age and sex speci cation. Some of these activities in relation to this are production of seedling, keeping the grown forest from cattle, cutting of it while it reached for purpose and production of charcoal.
Previously, in Fagita Lekoma Woreda many youths were jobless and exposed to participate in illegal activities. But now these groups of population are working by forming three groups. The rst group team covers the loaded charcoal and makes ready for transportation. The second group loads the charcoal to car. Finally the third serves as a broker to meet the buyers and sellers of charcoal. In the rural parts of the study area previously most of the young populations were employed as a servant within and out of the woreda; migrate to Addis Ababa and different parts of the country. Now a day thanks to this golden kind of forest this is becoming a history, they are engaged with starting from the production of seedling to the production and provision of charcoal which enables to lead a better life in their birthplace. Such a kind of activities requires large labor force because they are wide and continues throughout the year.
Females also participate basically on preparation of the inputs to produce seedling, watering the seedling and the transportation of the charcoal product from inaccessible areas. In general Acacia decurrens is serving as a cash crop in the woreda. For example, as indicated from the report of Fagita Lekoma Woreda Revenue O ce in 2019 (2011 E.C. budget year) the woreda obtained more than 11 million ETB from the exported charcoal as royalty tax. Now the woreda is one of the major providers of charcoal to Addis Ababa city's population and other parts of the country.
The government employees and merchants are also the bene ciaries of this land use change. Being a government employee at woreda level and having a car is very di cult really now. But who were government employees could do this by changing their job to trading charcoal. Before the past ten and fteen years almost all of the rural residents of the study area were inhabited in huts and looking steel houses was endemic. Now a day in order to have such asset having land by itself is enough. Those farmers who haven't the capacity to cultivate their land can rented to somebody and utilize it. Now the majority of the rural population leaves in steel houses and they are also constructing additional homes at urban areas. In addition the farmers are learning technical way of using their land like generating two purposes from a given land at the same time. When they need to grow the Acacia decurrens they plow the land and grow with the crop together. After two and three years they use the grass inside for their cattle's as a food source. So, the land which is covered by this type of forest becomes out of such additional advantages not more than four and ve years. When the forest becomes from four to six years old, it will be removed to produce charcoal and replaced by another.
Despite the above mentioned advantages, due to the land use change in general, the expansion of forest in particular the farmers are facing problems like shortage of productive farmland and grazing lands. These are more shown the existence of importing cereals for consumption. There are also farmers who exposed to different social and economic problems by renting almost all of the land they have. The other problems facing the productive cultivated land should automatically replace by forest resource due to this case the next few years the demand of food consumption will be diminishing to serve the study area.
To generalize depend on data obtained from surveyed household; land use change on expansion of forest land has positive impacts on economic, social and environmental perspective. In the perspective of economic, it can increase the income of local society, increasing royal tax of the government, reduce jobless youths and create job opportunity in the study area and socially, acceptable the bene ts of forest resources for charcoal production, construction materials and energy source. And environmentally, it increases soil fertility, productivity and it serves as soil conservation because Acacia decurrens tree is growing any up and down area as suitable land in the study area. On the other hand it has negative impacts that are natural forest degradation, loss of productive land and communal land.
Conclusion And Recommendation
This study has revealed that the recent advancements in spatial technology, namely remote sensing and GIS, could provide powerful tools for evaluating land use and land cover changes at woreda levels. The results of this study revealed the existence of signi cant land use and land cover changes in the last 32 years. Especially the expansion of tree plantations on cultivated land.
Results showed the changes in the last 33 years, forest land shows increase (+200%) while cultivated land have shown a signi cant decrease in area (-20.6%) in the study area. The rest Grass land, settlement area and wetland show decrease to -20.4%, increase to +1654% and decrease to -63.2% respectively for the whole period. Accordingly, the 1986-2002 change analysis showed high change in land use land cover that was cultivated land increased to +13%, wetland increase to +46% forest increase to +1.3% while 2002-2019 changed analysis showed that relatively changes from cultivated land highly decreased by -75% and forest increased by +196%.
Therefore the area of land under tree plantation has increased; there should be an appropriate land use planning and policy with impact studies and scenarios, in order to use a given land with its maximum output. In addition the farmers should be aware of their land use system. Because they are covering their land with Acacia decurrens only by considering the money that will be obtained from charcoal product after four to six years. This activity is shown sometimes even on irrigable lands which can give merit twice a year. So, when farmers need to cover their land by Acacia decurrens the concerned government body should approve as the land is not productive for crop cultivation. Such policy also will be enabling to identify the proper land for speci c purpose so that the marginal lands like wetland will not be put into use. And to make the woreda community more pro table, the charcoal production system should be supported by results of modern technology. Because the communities are practicing traditional way of charcoal production system still now and during this process there are woods that will be changed to ash and the producers are also exposed to health problems. Availability of data and materials The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Competing interests
The author declares that they have no competing interests.
Funding
The author received no nancial support for the research, authorship, and/or publication of this article. | 5,433.4 | 2020-09-14T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
CFTs on Non-Critical Braneworlds
We examine the cosmological evolution equations of de Sitter, flat and anti-de Sitter braneworlds sandwiched in between two n dimensional AdS-Schwarzschild spacetimes. We are careful to use the correct form for the induced Newton's constant on the brane, and show that it would be naive to assume the energy of the bulk spacetime is just given by the sum of the black hole masses. By carefully calculating the energy of the bulk for large mass we show that the induced geometry of the braneworld is just a radiation dominated FRW universe with the radiation coming from a CFT that is dual to the AdS bulk.
Introduction
Recent years have seen the development of two very interesting ideas in Theoretical Physics: holography and braneworlds. For our purposes, a braneworld [1,2] is an n−1 dimensional surface (or brane) marooned in some n dimensional AdS spacetime. In [2], this extra dimension is infinite. Its geometry is warped exponentially and it is this warp factor that ensures that gravity is localised on the brane.
The notion of holography was given substance by the celebrated AdS/CFT correspondance [3,4,5]. This states that a theory of gravity in a bulk AdS spacetime is dual to a conformal field theory (CFT) on its boundary. Witten [6] argued that if we give a finite temperature to the bulk AdS by considering AdS-Schwarzschild then we find that the CFT on the boundary is also at finite temperature. He found that we could associate the mass, temperature and entropy of the black hole with the corresponding quantities in the boundary CFT.
In [7], it was shown that the original Randall-Sundrum braneworlds were equivalent to the AdS/CFT correspondance, with the CFT being cut off in the ultra violet. It is paradoxical to think of a scale invariant theory as having a cut off, so what we actually have is a broken CFT on the brane. As we move the brane towards the boundary we approach the original unbroken version of this duality. Now consider the impact of gravity. When our CFT lives on the boundary, the bulk graviton cannot reach it, and gravity is omitted from the boundary theory. However, for a Randall-Sundrum brane away from the boundary, the bulk graviton can reach the brane and we find that gravity is coupled to the broken CFT.
This model of Randall and Sundrum contains a flat braneworld, where the cosmological constant on the brane is set to zero. We can, however, adjust the brane tension so that we induce a non zero value for the cosmological constant [8]. Recent observations that our universe may have a small positive cosmological constant suggest that it is these braneworlds that are closer to reality. Such inflationary braneworlds are naturally induced by quantum effects of a field theory on the brane [9,10,11] . We shall henceforth refer to flat braneworlds as critical and the dS(AdS) braneworlds as super(sub)critical. As before, we can view these in the context of AdS/CFT [12]. We do indeed find (at least for subcritical walls) that a Karch-Randall compactification is dual to a CFT on the brane, coupled to Einstein gravity.
We wish to examine what happens on these braneworlds when the bulk spacetime is at finite temperature. This occurs naturally for a hot critical braneworld due to the emission of radiation into the bulk [13]. The pure AdS bulk is once again replaced by AdS-Schwarzschild. For a critical braneworld it was shown [14] that the induced geometry on the brane is exactly that of a standard radiation dominated FRW universe. This radiation is represented by a CFT with an AdS dual description. The issue of AdS/CFT on critical braneworlds embedded in a general class of bulk spacetimes with a negative cosmological constant is discussed in [15]. In this paper, we attempt to reinforce the AdS/CFT duality by extending the discussion to noncritical braneworlds. There have been previous attempts to do this [16]. However, they used a version for the braneworld Newton's constant G n−1 in terms of the bulk constant G n , that is only valid for critical walls. We use the correct version [17,18] and find that we need to be more careful in calculating the energy of the bulk AdS. In Appendix A we have shown that we cannot assume the energy is given by the masses of the black holes. By using a procedure similar to that used in [19], we properly determine the energy of the bulk. We find that we can similarly describe the braneworld as a radiation dominated FRW universe with the radiation coming from a CFT with an AdS dual. Our analysis is restricted to κ = 1 closed brane universes, although it would also be interesting to consider κ = 0, −1. Note that the quantum cosmology of braneworlds of arbitrary tension in a pure AdS bulk (as apposed to the AdS-Schwarzschild bulk being investigated here) has been studied from a holographic viewpoint for all values of κ [20,21].
The rest of this paper is organised as follows: in section 2 we will review the derivation of the equations of motion for the brane embedded in AdS-Schwarzschild. In section 3 we will consider how these equations of motion can be regarded as Friedmann equations for the braneworld. We will properly derive the energy of the bulk spacetime and use this to derive the energy density of the dual CFT. Finally, section 4 contains some concluding remarks.
The Equations of Motion of the Bulk and the Brane
Let us consider two identical n dimensional spacetimes with negative cosmological constant Λ. These are glued together by an n − 1 dimensional braneworld of tension σ. This is described by the following action: where g ab is the bulk metric and h µν is the induced metric on the brane. K µν is the extrinsic curvature of the brane. It appears with a factor of two in the action because we have two copies of the bulk glued together at the brane. The bulk equation of motion is just given by Einstein's equations with a negative cosmological constant: This admits the following solution [22,18,23]: where and k 2 n is related to the bulk cosmological constant by Λ = − 1 2 (n − 1)(n − 2)k 2 n . The integration constant c distinguishes between the pure AdS solution (c = 0) and the AdS-Schwarzschild solution (c > 0). Meanwhile, dΩ n−2 is the metric on an n − 2 sphere.
We now parametrise the brane using the parameter τ . The brane is given by the section (x µ , t(τ ), Z(τ )) of the general bulk metric. The brane equation of motion is given by the Israel equations for the jump in extrinsic curvature across the brane. We have Z 2 symmetry across the brane so these equations just take the following form: The extrinsic curvature is defined in terms of the unit normal, n a , to the brane, by the relation K µν = h a µ h b ν ∇ (a n b) . Using the fact that: we arrive at the equation of motion for the brane: where a = σ 2 n − k 2 n and σ n = 4πGnσ n−2 . This analysis has also been presented in more detail in [24].
The Cosmology of Non-Critical Braneworlds
We shall now examine in more detail what is happening on our braneworld when it is sandwiched in between two AdS-Schwarzschild spacetimes. The induced metric is given by the following: Notice that the size of our braneworld is given by the radial distance Z(τ ) from the centre of the black hole. Given this structure we can interpret equations (7) and (8) as giving rise to the Friedmann equations of the braneworld. If we define the Hubble paramater in the usual way (H =Ż Z ), we arrive at the following equations for the cosmological evolution of the brane: The cosmological constant term in equation (11) is given by a. For a = 0 we have the critical wall with vanishing cosmological constant. For a > 0/a < 0 we have super/subcritical walls that correspond to de Sitter/anti-de Sitter spacetimes. As was discussed in [14,11], the brane crosses the black hole horizon for all values of a. For n > 3, critical walls and subcritical walls have a maximum value of Z only. For supercritical walls there are three possibilities: (i) Z runs from zero to infinity (or vice versa), (ii) Z runs from infinity down to a strictly positive minimum and then up to infinity again or (iii) Z runs from zero up to a maximum and then down to zero again 1 . We will concentrate on the third possibility in this paper, as this is intuitively what one expects from a κ = 1 universe.
The interesting part of equations (11) and (12) lies in the c-term. This is a bulk quantity that should have some interpretation on the brane. The natural interpretation would of course be that it corresponds to the energy density and pressure of a dual CFT. We will consider c to be large so that the contribution of this "holographic" term is dominant. We will also restrict ourselves to the region in which our braneworld is near its maximum size. This way we avoid the problems one might encounter near the Big Bang and the Big Crunch, aswell as allowing us to make use of Euclidean quantum gravity, as we shall see.
Calculating the energy density of the dual CFT
In order to evaluate the energy density of the dual CFT we first need to evaluate the energy of the bulk spacetime. We could naively assume that the energy of bulk is just twice 2 the mass M of the black holes, where the mass is given by the standard formula [25]: and Ω n−2 is the volume of the unit n − 2 sphere. However, the derivation [6,25] of equation (13) includes contributions from the AdS-Schwarzschild spacetime all the way up to the AdS boundary. In our case, we have a brane that has cut off our bulk spacetime before it was able to reach the boundary. We should not therefore include contributions from "beyond" the brane and must go back to first principles in order to calculate the energy of the bulk. See Appendix A to see what happens if we choose the bulk energy to be 2M. We will need to Wick rotate to Euclidean signature: This is valid provided we restrict ourselves to the region nearŻ = 0, where we have maximal expansion of the braneworld. By considering c to be large we can guarantee when Z starts out small.
2 "twice" because we have two copies of AdS-Schwarzschild.
that our Euclidean analysis does not stray away from this region. Our bulk metric is now given by: We wish to avoid a conical singularity at the horizon, Z = Z H where h(Z H ) = 0. In order to do this we cut the spacetime off at the horizon and associate t E with t E + β where β = 4π h ′ (Z H ) . The brane is now given by the section (x µ , t E (τ E ), Z(τ E )) of the Euclidean bulk. The new equations of motion of the brane are the following: It is not difficult to see that for both critical and non critical walls, Z(τ E ) has a minimum value. In contrast to Lorentzian signature, in Euclidean signature these branes do not cross the black hole horizon. The supercritical wall have a maximum value of Z, whilst the critical and subcritical walls may stretch to the AdS boundary. This will not be a problem because the integrand in our overall action will remain finite, as we shall see.
In calculating the energy we could go ahead and evaluate the Euclidean action of this solution and then differentiate with respect to β. We must however, remember to take off the contribution from a reference spacetime [26]. In this context, the most natural choice of the reference spacetime would be pure AdS cut off at a surface, Σ whose geometry is the same as our braneworld.
The bulk metric of pure AdS is given by the following: where As we said earlier, the cut off surface should have the same geometry as our braneworld. The induced metric on this surface is therefore: To achieve this, we must regard our cut off surface as a section (x µ , T(τ E ), Z(τ E )), where: Let us now evaluate the difference ∆I between the Euclidean action of our AdS-Schwarzschild bulk, I BH and that of our reference background, I AdS .
where K 0 is the trace of the extrinsic curvature of the cut off surface. Now from equations (2) and (5), we can immediately obtain: The unit normal to the cut off surface, Σ is given by n a = (0, − dZ dτ E , dT dτ E ). We use this to find: We will also need the correct form of the measures and the limits in each case. If we say that − β 2 ≤ t E ≤ β 2 , then we obtain the following (see Appendix B for a detailed derivation): The factor of two in equations (27) and (28) just comes from the fact that we have two copies of the bulk spacetime in each case. Notice that the expressions for the integrals over the brane and the cut off surface Σ are the same. This is a consequence of the two surfaces having the same geometry. Also using equations (17) and (21), we put everything together and arrive at the following expression for the difference in the Euclidean action: To proceed further, we are going to have to make things a little bit simpler. As we stated earlier, this analysis is only valid when c is large, and so our bulk is at a very high temperature. By considering this regime we guarantee that we focus on the "holographic" energy density, and can ignore contributions from matter on the brane. We have not included any such contributions in our analysis so it is appropriate for us to assume that we are indeed working at large c. To leading order: For supercritical and critical walls we can assume Z(τ E ) ≫ c 1 n−1 . For subcritical walls this is true provided |a| ≪ 1 (see Appendix C). Given this scenario, we now evaluate ∆I to leading order in c: The entire leading order contribution comes from the bulk rather than the brane, which is consistent with [6]. We can now determine the energy of our bulk spacetime: Notice that in this large c limit, E ≈ 2M k 2 n σ 2 n , so for critical walls the choice E = 2M would indeed have worked. Our aim was to calculate the energy of the dual CFT, rather than the bulk AdS-Schwarzschild. We must therefore scale E, byṫ, so that it is measured with respect to the CFT time τ . Recall that we are considering a regime near the maximal expansion of the braneworld. If Z is large,ṫ ≈ σn k 2 n Z and the energy of the CFT is given by: In order to calculate the energy density we must first evaluate the spatial volume of the CFT: We are now ready to give an expression for the energy density of our CFT:
The Cosmological Evolution Equations
Now that we have determined the energy density, ρ CF T of our CFT, we can use the following equation [14] to determine the pressure, p CF T : This yields an expression that is consistent with the CFT corresponding to radiation: If we are to make sense of the cosmological evolution equations of the braneworld we will need to know the Newton's constant, G n−1 in n − 1 dimensions. In [22] we proposed that: This is confirmed by the analysis of [2,17,15,27]. Using equation (41) in our expression for ρ CF T gives the more useful expression: We are now ready to insert this and equation (40) into equations (11) and (12) to derive the cosmological evolution equations for our braneworld: These are the standard FRW equations in (n − 1) dimensions. The braneworld observer therefore sees the normal cosmological expansion driven by the energy density and pressure of the CFT dual to the AdS-Schwarzschild bulk. We have shown this to be true even for non-critical braneworlds, given that we are in near a region of maximal expansion. The conformal field theory behaves like radiation.
Conclusions
In this paper we have examined the cosmological evolution equations for a braneworld sandwiched in between two AdS black holes with large masses. We focused on a region near the maximal expansion of the braneworld and found that the contribution of the black hole energy could be exactly associated with the energy density of a CFT living on the brane. One can regard the evolution of the scale factor as being driven by radiation that is represented by a CFT with an AdS dual.
The remarkable thing about this analysis was that it was done in full generality, allowing for all de Sitter, flat, and a large proportion of anti-de Sitter braneworlds. The work of [14] concentrated only on flat braneworlds. Recent observations that we may live in a universe with a small positive cosmological constant suggest that it is important that we extend the discussion at least to de Sitter braneworlds. These have been considered in our paper along with anti-de Sitter braneworlds satisfying |a| ≪ 1.
Given the mounting evidence for holography in the literature, we are not really surprised by our result. What is interesting is the way in which we were forced to prove it. The proof offered by [16] is unacceptable because it relies on the assumption that: This is true for critical walls, but one should replace k n in the above expression with σ n when one considers non critical walls [17,18]. We also see in Appendix A that if we had applied the approach of [14] to non critical walls, a factor of k 2 n σ 2 n would have appeared in front of the CFT terms in equations (43) and (44). This comes from assuming that the bulk energy is just given by the sum of the black hole masses. As we stated in section 3, this involves an overcounting because it includes energy contributons from "beyond" the brane. The correct calculation of the bulk energy given in this paper ensures that the undesirable factor of k 2 n σ 2 n does not appear. We need not be restricted to considering the energy density and pressure of the CFT. We could also investigate its other thermodynamic properties. This has been discussed for flat walls in [14,19]. One expects that the corresponding results for non critical walls will add even more evidence to the holographic principle. This analysis will be left for future study.
A Assuming the Bulk Energy is 2M
Here we shall assume that the energy of bulk spacetime is given by: In order to calculate the energy of the CFT, we should scale E by dt dτ E so that it is measured with respect to the CFT time τ . However, for large Z we have from equation (9): The energy of the CFT is then: Since the spatial volume of the CFT is just V CF T = Ω n−2 Z n−2 , we have the following expression for the energy density of the CFT: where we have used equation (13). We now introduce the correct formula for the Newton's constant in n − 1 dimensions given by equation (41). This gives: which when inserted back into equation (11) does not in general give the standard form of the Friedmann equation in n − 1 dimensions: We note that for critical walls the factor of k 2 n σ 2 n disappears and we do indeed recover the Friedmann equation, although this is not the case for non critical walls.
B Limits and Measures for the Action Integrals
Let us consider in more detail each contribution to the action integrals given in equations (22) and (23). We will start by looking at the bulk integral for the black hole action: From equation (24), we see that R − 2Λ is constant and so does not cause us any problems. Given that the AdS-Schwarzschild bulk is cut off at the brane, Z(τ E ), and the horizon, Z H , we find that: (53) which is just equation (27). The factor of two comes in because we have two copies of AdS-Schwarzschild. The factor of Ω n−2 just comes from integrating out dΩ n−2 . We now turn our attention to the bulk integral for the reference action: Again, R − 2Λ is constant and does not worry us. This time the AdS bulk is cut off at Σ (given by Z = Z(τ E )), and at Z = 0. The periodicity of the T coordinate is β ′ rather than β. The bulk integral for the reference action is then: (55) β ′ is fixed by the condition that the geometry of Σ and the brane should be the same. This just amounts to saying that T −1 (± β ′ 2 ) = ±τ max = t −1 E (± β 2 ) where −τ max ≤ τ E ≤ τ max on both Σ and the brane. As illustrated below by changing coordinates to τ E and then t E , we arrive at equation (28): Consider now the brane integral: We will use the coordinate τ E to begin with and then change to t E , thus arriving at equation (29): The procedure for arriving at equation (30) is exactly the same, owing to the fact that Σ and the brane have the same geometry.
C Justifying Z(τ E ) ≫ c 1 n−1 in large c limit Let us consider the claim made in section 3.1 that for most brane solutions, Z(τ E ) ≫ c 1 n−1 in the large c limit. The governing equation for the branes in Euclidean AdS-Schwarzschild is given by equation (15): Now in each case, Z ≥ Z min where Z min is the minimum value of Z on the brane. It is sufficient to show that Z min ≫ c 1 n−1 . At Z = Z min , dZ dτ E = 0. For a = 0, we have: For a > 0, we have: We see that our claim holds for supercritical and critical walls. For subcritical walls with a < 0 we need to be more careful. Z min satisfies: We see, therefore that the claim made in section 3.1 was indeed valid: Z(τ E ) ≫ c 1 n−1 for subcritical walls with |a| ≪ 1 and all supercritical and critical walls. | 5,392.8 | 2001-11-27T00:00:00.000 | [
"Physics"
] |
Sufficient conditions of stochastic dominance for general transformations and its application in option strategy
A counterexample is presented to show that the sufficient condition for one transformation dominating another by the second degree stochastic dominance, proposed by Theorem 5 of Levy (Stochastic dominance and expected utility: Survey and analysis, 1992), does not hold. Then, by restricting the monotone property of the dominating transformation, a revised exact sufficient condition for one transformation dominating another is given. Next, the stochastic dominance criteria, proposed by Meyer (Stochastic dominance and transformations of random variables, 1989) and developed by Levy (1992), are extended to the most general transformations. Moreover, such criteria are further generalized to transformations on discrete random variables. Finally, the authors employ this method to analyze the transformations resulting from holding a stock with the corresponding call option. JEL C51 D81
Introduction
Stochastic dominance (SD) has been proved to be a powerful tool for ranking random variables and is employed in various fields such as finance, decision analysis, economics and statistics etc. (cf., Levy, 1992Levy, , 2006Chakravarty and Zoli, 2012;Jouini et al., 2013;Tsetlin et al., 2015;Post et al., 2015 andPost, 2016;Gao and Zhao, 2017). The SD rules indicate when one random variable is to be ranked higher than another by specifying a condition which the difference between their cumulative distribution functions (CDFs) must satisfy. However, economic and financial activities usually induce transformations of an initial risk, and the classical SD rules are inefficient in ranking such transformations. Transformations of random variables have been discussed in the early stochastic dominance literature, especially in the risk analysis portion. For example, Sandom (1971) has used a particular linear, risk altering, transformation in discussing the comparative statics of risk. Hadar andRussell (1971, 1974) have dealt with special cases of the transformation question, emphasizing its use in dealing with portfolios of random variables. Cheng, Magill, and Shafer (1987) have used the transformation approach to address the comparative statics of first degree stochastic dominance shifts in a random variable within a general decision model context. Meyer (1989) has proposed the first and second stochastic dominance (FSD and SSD) criteria for the increasing, continuous, and piecewise differentiable transformations on continuous random variables. Meyer goes on to analyze the transformation resulting from coinsurance, the transformation resulting from holding a stock with the corresponding call option, or even holding call and put options simultaneously. Gao and Zhao (2017) have developed FSD and SSD criteria for monotonic transformations on discrete random variables, and they apply these results in ranking transformations resulting from pension funds. These applications indicate that the transformation approach is useful in discussing comparatives statics of random variable changes and financial issues.
For the general transformations, Levy (1992) has given several sufficient conditions under which one transformation dominates another by FSD and SSD. Hereafter, some authors discuss the transformations of different random variables (cf., Trannoy, 2007, 2012;Denuit et al., 2013). To the best of our knowledge, Theorem 5 of Levy (1992) is the only result on the stochastic dominance for general transformations. However, we have found that its dominance condition for SSD is not sufficient and its dominance condition for FSD can be relaxed. Then, by restricting attention to the monotone property of the dominating transformation, we present a revised exact sufficient condition for one transformation dominating another. Next, we further extend the stochastic dominance criteria to the most general transformations.
3 Moreover, we further generalize these stochastic dominance criteria for transformations on continuous random variables to the discrete case. Finally, we employ the SD approach to analyze the transformations resulting from holding a stock with the corresponding call option.
The paper is organized as follows. Section 2 presents a counterexample to show that Levy's theorem about SSD does not hold. By discussing the monotone property of the dominating transformation, Section 3 derives the exact sufficient condition for one transformation dominating another by SSD. Section 4 deduces the stochastic dominance criteria for the most general transformations, which further perfect and improve Levy's result and extend Meyer's result to more general case. Section 5 further provides the stochastic dominance criteria for transformations on discrete random variables. Section 6 analyzes the transformations resulting from holding a stock with the corresponding call option. Section 7 concludes the paper.
Levy's sufficient conditions and its counterexample
Suppose that X is a continuous random variable with support in the finite interval [ , ] a b . To facilitate the narrative, we will refer to transformed random variables, derived by applying transformation functions to X , as transformations on X , or shortly transformations. Obviously, the classical SD rules rely heavily on CDFs. But in most cases CDFs of transformations are difficult to compute n F x P n X x , and the frequently-used integration by parts is invalid in this case. Thus, the classical SD rules based on the CDFs framework lose their great charm when dealing with the transformations. In order to determine the SD relations between two general transformations, Levy (1992) gives several sufficient conditions under which one transformation dominates the other by FSD and SSD, the main result is shown as follows.
Alleged Theorem 5. (Levy, 1992) Given a random variable X with the density ( ) f x and support in the Similarly, the dominance condition for SSD is given by Note: For simplicity, we assume that the range of the random variable is finite. Actually, the stochastic dominance criteria can easily be extended to the infinite range by mathematical skills (see Hanoch and Levy, 1969).
4
Although Levy's Theorem 5 only proposes the sufficient conditions for FSD and SSD relations between two transformed random variables, its really meaningful contribution is that it tries to represent the SD rules by the transformation functions and the density function of the original random variable, rather than by CDFs of the transformed random variables. To better illustrate this meaning, Figure 1 to Figure 4 show the relationship between the method of Levy's Theorem 5 and that under the framework of CDFs for the uniformly distributed random variable.
•The comparative diagrams for FSD (2) (2) in Alleged Theorem 5. But for the increasing and concave utility n X by SSD. By carefully analyzing Theorem 5 of Levy (1992), we find that the monotone property of the dominating transformation is inevitable for stochastic dominance of transformations.
Actually, we have proved the following conclusion.
A revised sufficient condition for SSD
In this part, we will revise Theorem 5 of Levy (1992) and derive the exact sufficient condition for one transformation dominating another by SSD.
Proof. See Appendix A. 6 By restricting the monotonicity and differentiability of the dominating transformation, Theorem 1 provides the exact sufficient condition for one transformation dominating another by SSD. Compared with Theorem 5 of Levy (1992), Theorem 1 gives a revised dominance condition concerning SSD, so it can be viewed as an primary improvement of Theorem 5 in Levy (1992).
Furthermore, in the next paragraph we will prove that the FSD condition listed in Alleged Theorem 5 and the SSD condition listed in Theorem 1 can be weakened via complicated mathematics skill.
Stochastic dominance criteria for general transformations
Theorem 5 in Levy(1992) and Theorem 1 of this paper give the dominance condition under which one transformation dominates another by FSD and SSD. In the following, we will prove that these conditions can be relaxed to a more general case. That is, the restrictions to the dominating transformation in Theorem 1 can be further relaxed.
Proof. See Appendix B.
Theorem 2 provides two dominance conditions under which one transformation dominates another by FSD or SSD for the most general transformations. Compared with Theorem 5 of Levy (1992), in Theorem 2(1) points are permitted to violate the dominance condition (1) only if they constitute a set of measure zero. So, Theorem 2(1) reduces the dominance condition for FSD in Theorem 5 of Levy (1992). Compared with Theorem 1, Theorem 2(2) only requires the dominating transformation to be increasing, and the property of differentiability is not necessary.
Moreover, only the increasing property is considered in Theorem 2, and we can derive a similar conclusion if the dominating transformation is decreasing. Obviously, Theorem 2 and Theorem 3 extend Meyer's result to a more general case. Either the dominating or the dominated transformation in Meyer (1989) is assumed to be increasing, continuous, and piecewise differentiable. However, in Theorem 2 and Theorem 3, only the dominating transformation is assumed to be monotonous, and there are no any other restrictions to both the dominating and the dominated transformation.
Apparently, the differentiability is redundant. Furthermore, Theorem 3 considers the decreasing transformation that is absence in Meyer's result.
Remark 2. The concept of increasing risk and increasing n th degree risk, introduced by Rothschild and Stiglitz (1970) and Ekern (1980), play an important role in risk analysis. It requires that all the random variables to be compared have the same mathematical expectations. Given this supposition, we can easily induce the following conclusion from Theorem 2 and Theorem 3.
Corollary. Given a random variable
(2) Supposing that ( ) m x is decreasing in [ , ] a b , ( ) m X has more increasing risk than ( ) n X if From this corollary, we can easily deduce that there exist a kind of risk transformations which lead to the SSD relation which is completely opposite to the conclusion of Theorem 5 in Levy (1992).
Example 2. Assume that the random variable X satisfies standard normal distribution. Define Obviously, ( ) m x and ( ) n x satisfy condition (2). According to Theorem 5 of Levy (1992), it should be concluded that ( ) m X dominates ( ) n X by SSD. But, the truth is on the opposite side. Actually, by Theorem 3, it is easy to prove the fact that ( ) n X dominates ( ) m X by SSD since
Stochastic dominance criteria for general transformations on discrete random variables
By discussing Levy's dominance conditions for one transformation dominating another by FSD or SSD, we obtain several stochastic dominance criteria for transformations, which perfect and improve Levy and Meyer's results. It must be pointed out that all the conclusions, whether Levy and Meyer's results or the stochastic dominance criteria developed in the paper, are concentrating on transformations of continuous random variables. Actually, there exist similar stochastic dominance criteria for transformations on discrete random 8 variables. Gao and Zhao (2017) have discussed the stochastic dominance relationship between two transformations on discrete random variables, and presents several sufficient conditions for ranking transformations on discrete random variables by FSD or SSD. Such conclusions can be summarized in the following theorem.
Theorem 4. Let X be a discrete random variable whose prospects are characterized by The proofs of the first three items in Theorem 4 are in Gao and Zhao (2017), and the proofs of the last two items follow from them and are omitted. Theorem 4 presents several dominance conditions for ranking transformations on discrete random variables by FSD or SSD, and it overcomes the drawbacks of Meyer and Levy's results that cannot deal with transformations on discrete random variables.
Applications in the option strategy
It is well-known that put and call option contracts can modify the value of common stock. These contracts provides the buyer of the option with the right to either buy (call) or sell (put) shares of common stock at a fixed price referred to as the striking price. On the other hand, the seller of such an option contract incurs the obligation to either sell or buy the common stock at the agreed upon striking price if the contract purchaser 9 decides to exercise the option. To model one such option transaction using the transformation notation, let X represent the random value of 100 shares of a given common stock and assume that its support is the interval Of course, experience in choosing option strategies with varying sizes for the striking price indicates that it is unlikely for the option price charged to be smaller with lower striking price. Furthermore, it is typical for the reduction in the option price to be a fraction of the increase in the striking price. Thus it is further assumed Theorem 2 we deduce that ( ) m X dominates ( ) n X by SSD. That is, if the mean value of ( ) m X is at least large as the mean value of ( ) n X , then ( ) m X is a better choice for all risk-averse investors.
10 While this example deals with the selling of a call option, the purchase of a put option contract can also be modeled using a similar transformation. One can also model the simultaneous purchase or sale of put or call contracts with different striking prices, although the transformations involved become cumbersome.
Conclusion
We first present a counterexample to show that Levy's result with respect to SSD does not hold. Then, we give the revised exact dominance condition for one transformation dominating another by SSD. Next, we propose several stochastic dominance criteria for the most general transformations, which can be viewed as a further improvement of Theorem 5 in Levy (1992). Moreover, we further generalize these stochastic dominance criteria for transformations on continuous random variables to the discrete case. Finally, we employ the SD approach to analyze the transformations resulting from holding a stock with the corresponding call option.
Whether on theory or in applications, much can still be done concerning transformations and stochastic dominance. It would be useful to extend such stochastic dominance criteria to transformations on more than one random variables and to consider higher-degree SD rules for transformations. In addition, we will further apply these results to the analysis of transformations resulting from economic and financial issues. derive the conclusion that ( ( ( )) ( ( ( )) 0 E u m X E u n X . □ | 3,255 | 2018-01-02T00:00:00.000 | [
"Mathematics",
"Economics"
] |
Transcriptional and apoptotic responses of THP-1 cells to challenge with toxigenic, and non-toxigenic Bacillus anthracis
Background Bacillus anthracis secretes several virulence factors targeting different host organs and cell types during inhalational anthrax infection. The bacterial expression of a key virulence factor, lethal toxin (LeTx) is closely tied to another factor, edema toxin (EdTx). Both are transcribed on the same virulence plasmid (pXO1) and both have been the subject of much individual study. Their combined effect during virulent anthrax likely modulates both the global transcriptional and the phenotypic response of macrophages and phagocytes. In fact, responses brought about by the toxins may be different than each of their individual effects. Results Here we report the transcriptional and apoptotic responses of the macrophage-like phagocytic cell line THP-1 exposed to B. anthracis Sterne (pXO1+) spores, and B. anthracis Δ Sterne (pXO1-) spores. These cells are resistant to LeTx-induced cytolysis, a phenotype seen in macrophages from several mouse strains which are sensitive to toxigenic anthrax infection. Our results indicate that the pXO1-containing strain induces higher pro-inflammatory transcriptional responses during the first 4 hours of interaction with bacterium, evident in the upregulation of several genes relevant to Nf-κB, phosphatases, prostaglandins, and TNF-α, along with decreases in expression levels of genes for mitochondrial components. Both bacterial strains induce apoptosis, but in the toxigenic strain-challenged cells, apoptosis is delayed. Conclusion This delay in apoptosis occurs despite the much higher level of TNF-α secretion induced by the toxigenic-strain challenge. Interestingly, CFLAR, an important apoptotic inhibitor which blocks apoptosis induced by large amounts of extracellular TNF-α, is upregulated significantly during toxigenic-strain infection, but not at all during non-toxigenic-strain infection, indicating that it may play a role in blocking or delaying TNF-α-mediated apoptosis. The suppression of apoptosis by the toxigenic anthrax strain is consistent with the notion that apoptosis itself may represent a protective host cell response.
Background
Highly pathogenic strains of Bacillus anthracis contain two plasmids, XO1 and XO2 encoding major virulence factors of this Gram-positive bacillus. Lethal toxin (LeTx) and edema toxin (EdTx) genes reside on the pXO1, while the anti-phagocytic capsule is encoded by pXO2. LeTx is necessary for pathogenicity, as deletion of its gene renders the microbe avirulent, while EdTx-knockout strains are only partially attenuated [1]. Capsule substantially contributes to the virulence of the microbe but unencapsulated strains, such as Sterne (34F2), are still capable of causing death in experimental animals [1,2] Therefore, the pXO1 + , pXO2 -Sterne strain serves as a convenient experimental toxigenic model of highly virulent strains.
During inhalational exposure to B. anthracis, the spores may enter alveoli where they become deposited on mucosal surfaces. Within hours of the initial interactions with the host, the spores can be engulfed by phagocytes, such as monocyte-derived macrophages or dendritic cells [3,4]. Both LeTx and EdTx are expressed upon germination within the macrophage phagosome [3] and seem to play important roles in suppressing the bactericidal innate immune mechanisms of the epithelium and the intraphagocytic environment [3,[5][6][7][8][9]. In a currently accepted model of anthrax, some of the phagocytosed spores en route to the mediastinal lymph nodes survive and multiply within the phagolysosome, kill the cell, and become released into the lymphatic system [5]. In the following process of hemorrhagic lymph node destruction, the bacteria gain access to the bloodstream and quickly become systemic by spreading to the spleen and other internal organs [10,11]. According to this mechanism, lung phagocytes such as macrophages and dendritic cells are critically involved in the initiation of the disease, and their response to anthrax spores, among other factors, determines whether the exposure to aerosolized spores results in the infectious process [12].
Macrophages were the first cell type discovered to die after exposure to LeTx [13]. LeTx consists of a heptameric protective antigen (PA) noncovalently associated with lethal factor (LeF). LeTx is a zinc metalloprotease, which cleaves and thus inhibits mitogen-activated protein kinase kinase (MAPKK) family members in vitro and in vivo, resulting in defective host cell signaling [14][15][16], with broad implications for the host innate and adaptive immune responses [17]. However the death of macrophages after exposure to LeTx in vitro does not correlate with the cleavage of the MAPKK substrates by LeTx, and the macrophages sensitive to LeTx reside in the strains of mice tending to be resistant to the lethal effect of LeTx [2,18]. This paradoxical effect did not get a satisfactory mechanistic explanation till it was discovered that LeTx was able to induce the process of programmed, apoptotic death in a murine macrophage cell line RAW 264.7 [19]. It was suggested that for sensitive macrophages, and perhaps other cell types, undergoing apoptosis may serve as an in vivo sensor alerting the immune system and inducing the protective response [20]. Similar correlation between macrophage susceptibility to apoptosis and the outcome of infection in mice takes place, for example, in the case of M. tuberculosis [21][22][23]. Generally, pneumonic macrophages appear to exhibit a broad apoptotic response during less virulent M. tuberculosis infection, while infection with more virulent strains correlates with a near complete loss of the same apoptotic indicators [23].
Several studies have found that resistance of cells to LeTx may vary depending on the conditions of growth or stimulation. Human peripheral blood monocytes with the LeTx-resistant phenotype become sensitive to LeTx upon growth media deprivation leading to a cellular stress [6]. This finding was further elaborated, when it was shown that the stress factors in vitro sensitizing murine macrophages to apoptosis include a range of the bacterial Tolllike receptor (TLR) agonists of different nature, including the pore-forming hemolysins, peptidoglycan or endotoxin [24]. Human U-937, HL-60 and THP-1 cell lines of monocyte origin are resistant to LeTx but become sensitized upon phorbol myristate acetate (PMA)-induced differentiation into macrophages in culture [25], although relevance of this stimulation to the infection process in vivo is not clear. Additionally, PMA-induction may yield significant changes in the transcriptome which do not occur in vivo [26].
In contrast to the LeTx-exposed cells, human peripheral blood monocytes and monocyte-like undifferentiated THP-1 cells readily become apoptotic after exposure to B. anthracis spores [6]. These observations suggest that the process of infection may involve sensitization of cells to LeTx through the activity of unknown factors, such as the bacterial TLR agonists, similar to the effects of these agonists in vitro [24]. Another possibility is the existence of apoptotic factor(s) independent of LeTx or working in concert with it. In order to explore these hypotheses we decided to obtain insight into the transcriptional and apoptotic responses of THP-1 cells challenged with B. anthracis spores. To identify host signaling associated with the pathogenic activity of pXO1-encoded bacterial factors we compared the effects of the toxigenic (pXO1 + , pXO2 -) Sterne strain and the non-toxigenic (pXO1 -, pXO2 -) Δ Sterne strain. In experimental animals the toxigenic strain, even in the absence of capsule-encoding pXO2, is able to cause a lethal systemic infection, while a non-toxigenic strain is non-lethal and causes no clinical symptoms of disease. We therefore suggested that the use of the matched pair of isogenic strains would provide a means to distinguish between the pathogenic and the protective host responses.
We demonstrate that the presence of toxin plasmid XO1 in the challenge strain results in higher pro-inflammatory responses. Both B. anthracis strains induce apoptosis, albeit by different mechanisms. Our data supports a hypothesis that in the presence of pXO1, apoptosis may proceed through the mitochondria-dependent pathway, while in the non-toxigenic infection it seems likely to be activated mainly through an extrinsic TNF-α receptormediated caspase-8 pathway. The onset of apoptosis induced by a toxigenic strain is markedly delayed, and key apoptotic inhibitors are clearly induced, indicating that the suppression of early apoptosis in phagocytic cells may be beneficial for the virulent microbe. The fact that pXO1 is not required for the induction of apoptosis suggests that in addition to LeTx, B. anthracis possesses chromosomally-encoded pro-apoptotic factor(s).
Growth characteristics indicate increased bacterial replication in the presence of THP-1 during early growth
A Bacillus anthracis Δ Sterne strain was selected which was negative for the toxigenic XO1 plasmid, identical in genomic variable sequences checked, and which had identical growth characteristics to its parent Sterne strain (Additional files 1 and 2). THP-1 cells were challenged with spores of the Sterne strain and Δ Sterne strains, the bacterial growth monitored, and THP-1 mRNA was extracted at early time points to be used for microarray analysis. Figure 1 shows the bacterial growth represented by simple absorbance at 2 to 8 h post-coincubation with THP-1. In the non-toxigenic challenge, there is no statistically significant difference between challenge with or without THP-1 other than the MOI-dependent difference. This is in contrast to the Sterne-challenge, which indicates more bacterial replication in the presence of THP-1 at both MOIs.
Principal component analysis indicates a suppressed response to toxigenic strains during early (2 h) infection
THP-1 mRNA was collected at 0, 2, and 4 h time points, and the transcriptional response determined by microarrays. Principal component analysis of the microarray results shows time-and dose-dependent responses by THP-1 cells during challenge by both strains of B. anthracis ( Fig. 2A). Variation attributed to each component is also shown (Fig 2B). Data points clustered in the PCA and confirmed that the principal components of the data set are the experimental variables such as MOI, post exposure time and the nature of the challenge strain. Therefore the mRNA expression levels reflect changes depending on the differences in treatment conditions rather than experimental artifacts. As expected, PCA also closely grouped the results with respect to chip replicates indicating their consistency, and allowed us to evaluate the overall dynamics of cell responses (Fig. 2). The results of the 2hour challenges by toxigenic Sterne strain spores at both MOIs cluster in the 3-dimensional parameter space close
A B
to the control samples within the same quadrant of our PCA, in contrast to the cell responses to 2-hour challenges by Δ Sterne spores. This demonstrates that THP-1 cells respond to the non-toxigenic infection faster, compared to the toxigenic one, in agreement with the hypothesis that the initial general response of macrophages to the virulent spores is suppressed due to the presence of the LeTxexpressing plasmid [17]. At the 4-hour time point, there are notable and discernable differences between Sterne and Δ Sterne strain-challenged cells indicating a clear distinction in cellular responses between the strains with regard to the presence of the toxigenic plasmid. The strainspecific separation at higher MOIs and time points is so distinct that it may be useful in delineating toxigenic vs. non-toxigenic strains. Further confidence in our gene chip data was obtained using Real-Time PCR (Fig. 3), which demonstrates the results overall correlating to the microarray data within the experimental variations of both data sets. Fig. 4 shows clustered heatmaps of important biological processes, molecular functions, and cellular component modules. In general, our results for both strains appear consistent with the macrophage and innate immune activation programs described in response to other bacterial pathogens [27,28]. Activation of innate and other immune defenses and inflammatory responses are typically observed for both spore strains, but in general the activation is greater in the case of the more virulent Sterne strain spore-challenge model ( Fig. 4a) in spite of the overall initial delay noticed in the 2-hour challenges. Several of the NF-κB-controlled programs are upregulated similarly between both strains, indicating that they are pXO1independent, while the others could be considered as characteristics of virulence. The most striking differences seen between strain challenges in our clustering analyses consist of a series of modules that are activated at the 4hour time points in the Sterne spore treatments, while no activation is seen in any of the Δ Sterne spore treatments. Among these modules, there are important signaling pathway processes, including at least 10 phosphatase modules, DNA modification modules, and transcriptional regulation modules. In addition, the pathways involved in eicosanoid and prostaglandin-type metabolism are similarly affected, which agrees with reports on drastic changes in prostaglandin levels during toxigenicanthrax pathogenesis in mice [29]. Other cellular component functions encompassing catabolic and metabolic components do not show much variation between strains, similar to nucleotide/nucleic acid and ATP metabolism and catabolic processes (Fig 4b, c). Interestingly though, the cytochrome oxidase and oxidoreductase modules appear highly activated early in the avirulent Δ Sterne strain challenge, in comparison to the Sterne strain, mark-ing the suppression of the transcriptional-level mitochondrial activation as an important virulence feature. Previous reports have indeed implicated LeTx in the dysfunction of mitochondria in murine macrophages and human peripheral blood monocytes [6,30].
Anti-apoptotic and stress-responses are favored during toxigenic challenge, while cell cycle/differentiation responses are favored during non-toxigenic challenge
To further examine specific differences in host response to each strain, an Analysis of Variance (ANOVA) was performed and genes found to be significant in the cell survival, stress response, and apoptosis were compared between different strain challenges under identical conditions (Tables 1, 2, 3). The use of ANOVA here allowed us to evaluate the differential expression of a particular human gene while varying 3 conditions: time, MOI, and the presence or absence of pXO1 in the infecting B. anthracis strain. In our analysis, positive numbers indicate that the expression levels are higher in the non-toxigenic pXO1 (-) spore treatment, while negative numbers reflect a shift in expression level favoring the more virulent toxigenic pXO1 (+) spore treatment. A complete list of genes evaluated in the ANOVA can be found in Additional file 3 of the Additional Materials.
The ANOVA analysis shows a marked upregulation for several stress and inflammatory response genes relevant to the TNF-α biological module in the case of toxigenic Sterne spore challenge relative to the non-toxigenic Δ Sterne one (Table 1, 3). Contrasted against the apoptosisrelated genes are the cell cycle genes, which seem to be affected in response to the non-toxigenic challenge ( Table 2), indicating an initial shift towards proliferation and differentiation. CXCL2, the chemokine precursor to macrophage inflammatory protein-2α (MIP-2α) is strongly increased in Sterne spore-challenged cells, pointing to a higher innate response to the toxigenic strain. The apoptosis inhibitor genes for cFLAR (CASP8/FADD-Like Apoptosis Regulator) and GADD45b (Growth Arrest and DNA-Damage-inducible, beta) also appear to be expressed higher in the toxigenic spore treatments (Table 1 and Fig. 5). The level of cFLAR protein determined by Western blot follows the same trend (Fig. 6a). The increased anti-apoptotic responses to the toxigenic strain challenge correlate with the induction of higher transcription of Nf-κB1 (Table 1), a gene which may have some anti-apoptotic effects [31,32]. In addition the Nf-κB inhibitor QNZ reduces cFLAR expression (Fig. 6b) in agreement with its higher level in the toxigenic strain infection (Fig. 6a).
TNF-α-dependent apoptosis during toxigenic infection may be countered by cFLAR-and NF-kB-dependent antiapoptotic responses
The role of TNF-α, the primary mediator of the cellular inflammatory response, has attracted significant attention Real-Time PCR of single gene treatments confirms trends in microarray expression levels [12,20,33] and therefore cannot be considered as a primary cause of death, as it was previously suggested [34,35]. Nevertheless, some data suggest that the TNF-α response may play an important function in the local host antibacterial defense [36,37]. In our gene chip and Real-Time PCR experiments, TNF-α gene expression is significantly induced relative to uninfected cells in a time-and concentration-dependent manner during infection by both strains (Fig. 5), however the induction by the toxigenic strain is approximately 10 times higher, compared to the non-toxigenic one. This induction correlates with the increased protein levels of TNF-α determined by ELISA in the culture medium of cells after spore challenge (Fig. 7). The level of TNF-α is almost 3-fold higher (p < 0.01) in the cells challenged with the toxigenic spores after 4 h.
This large cellular TNF-α response to toxigenic spores compared to a lower one in the nontoxigenic spore-chal-lenge is an interesting observation, suggesting that the issue of cytokine suppression during anthrax is not straightforward. It is well known that pXO1 modulates the expression of many chromosomal proteins in addition to its own, resulting in larger numbers of unique secretome proteins in pXO1 (+) strains compared to pXO1(-) strains [38]. This may contribute to the increased response to the plasmid (+) strain seen in our study. However LeTx, a well-known protease of the mitogen-activated protein kinase kinases (MEKs), would be expected to inhibit expression of cytokines, and in THP-1 cells, it has been shown to inhibit the release of TNF-α following induction by an agonist [39]. In our experiments, the expression of toxins takes place approximately 2 to 4 hours post-spore challenge, allowing time for a general innate response before cytokine suppression. Our study also agrees with a previous report from Pickering et. al., who found no inhibition of cytokine response by a toxigenic strain, and an even lower response from a non-toxigenic strain [12], while other studies, such as those by more expected suppression of TNF-α and cytokine responses upon toxigenic B. anthracis challenge [40,41]. These discrepancies may in part be explained by differences in the timing mentioned above, but also by the experimental conditions. Each of these studies attempts to remove extracellular bacteria and/or excreted proteases/ factors from their models in some way, therefore excluding their effect. In our conditions, we were interested in leaving these factors to allow for the evaluation of these extracellular components in the host cell response of this LeTx-cytolysis-resistant cell line.
Elevated TNF-α inflammatory response of the THP-1 cells to the pXO1 gene products is expected to enhance the protective, bactericidal capacity of macrophages and sensitize them to apoptosis [42,43]. It is therefore possible that B. anthracis has developed a mechanism to counter these macrophage defenses during the initial infectious activity. In sensitive macrophages TNF-α is expected to substantially contribute to the apoptotic death program through a TNF Receptor 1 (TNFR1)-dependent pathway including activation of caspase-8. An interesting feature of the TNF signaling network is the existence of extensive cross talk between the apoptosis and NF-κB signaling pathways that emanate from TNFR1. While in the absence of NF-κB activity the cellular susceptibility to TNF-α -induced apoptosis increases, activation of NF-κB protects against apoptosis [44,45]. Therefore, the increased TNF-α and NF-κB expression induced by the toxigenic strain may result either in the anti-apoptotic effect or in the delay of apoptosis onset depending on the balance between the above stimuli in the TNF-related apoptotic pathway. In agreement with this hypothesis, our data show that apoptosis detected by TUNEL assay is delayed after 2 h post exposure to the toxigenic strain while after 6 h it takes place to a similar extent in the case of both strains (Fig. 8).
Discussion
Our results indicate that the THP-1 monocytes present significant innate immune responses evident in activation of several biological modules during both toxigenic, and non-toxigenic challenge. In general, the early response is suppressed by the toxigenic strain but overall this strain demonstrates stronger activation for several stress, inflammatory and apoptosis-related genes relevant to the TNF-α biological module. Observed genotypic responses specific for the pXO1-encoded pathogenic factors are consistent with mitochondrial damage, prostaglandin level disruption, and phosphatase induction likely inhibiting phosphoprotein cell signaling. These results agree with the transcriptional analysis of the murine macrophage RAW 264.7 cells in response to B. anthracis Sterne spore challenge [46]. The authors reported transcriptional activation of immune modules, induction of Nf-κB, along with the apoptotic inhibitor GADD45b. However in the case of unstimulated macrophages treated with LeTx there were no significant changes observed. Analyzing the effect of LeTx on Salmonella lipopolysaccharide-stimulated RAW 264.7 cells, Tucker et al. detected suppression of the host immune response [47] however the antigen (LPS) used for stimulation is irrelevant to anthrax infection. According to our results, the plasmid-negative strain provides the background information, which is important for precise evaluation of the pXO1 contribution to the host cell response, such as silencing of protective signaling in response to infection and/or changes in signaling mechanisms.
We present evidence that both toxigenic and non-toxigenic B. anthracis strains induce apoptosis, which can be considered as a form of innate immunity against some pathogens. Our data indicate that in the absence of toxigenic pXO1, the infection with the avirulent strain pro- ceeds without a substantial intrinsic insult to the cell. Therefore in the absence of cFLAR induction, and in conjunction with the observed TNF-α production, we hypothesize that infection in the absence of pXO1 probably induces apoptosis through a classical extrinsic TNF-α receptor-mediated caspase-8 pathway. In contrast, the pXO1-encoded complex of secreted toxins acquired by B. anthracis as an instrument of intracellular immune evasion causes a strong intracellular insult. The large induction of CFLAR likely plays some role in inhibiting TNFRmediated apoptosis, and may be involved in its observed phenotypical delay, ultimately pushing the system toward the intrinsic, mitochondria-dependent pathway of apoptosis some time later [48]. This pathway proceeds through mitochondrial dysfunction, and is a primary differential observation of our gene ontology modules between the strains. (Fig. 4), as indicated by the perturbation of modules containing mitochondrial components in the Sterneinfected cells. Future experimentation such as the TNFR1dependent processing of caspase-8 vs. caspase-9, mitochondrial membrane polarization/permeability, and the formation of apoptotic signaling complexes during challenge by each of these strains will help to determine if this is indeed the case. Interestingly, the onset of apoptosis induced by a toxigenic strain is markedly delayed, and bacterial growth is significantly increased, indicating existence of a mechanism modulating apoptosis in favor of the microbe through the increased production of anti-apoptotic cFLAR and Nf-κB. This delay of apoptosis during anthrax infection could allow for more successful bacterial survival and dissemination, resulting in systemic disease. This supports a hypothesis that in anthrax the suppression of apoptosis in macrophage-like cells may contribute to disease virulence during the initial stages of infection, and that apoptosis itself may represent a protective host cell response until it reaches a pathologic proportion.
Fold change of TNF-α and cFLAR expression in THP-1 cells as measured by Real-Time PCR
Several lines of evidence in the literature support this hypothesis. Previous analyses of anthrax-infected RAW 264.7 cells [46] identified overexpression of ornithine decarboxylase (ODC), a normal biosynthetic enzyme involved in the conversion of putrescine to the polyamines spermine and spermidine. Since the overproduction of polyamines by ODC has been implicated in preventing apoptosis [49], this led the authors to speculate that ODC overexpression was involved in suppression of macrophage apoptosis. In the case of other bacterial infections, apoptosis has been generally considered as a protective host response [50]. There is additional evidence that the delay of apoptosis may play roles in some viral and fungal infections as well [51][52][53].
Macrophages infected with viable virulent strains of Mycobacterium tuberculosis undergo apoptosis with far less frequency than macrophages infected with attenuated strains [21,22]. Strain virulence in mice correlates directly to the extent to which apoptosis is inhibited [23]. Thus virulent strains of M. tuberculosis apparently suppress apoptosis in host cells. Conversely, macrophage apoptosis appears to provide the host with a means for killing intracellular microorganisms, with host cells likely initiating apoptosis following initial infection [54,55]. Consistent with this, Mycobacterium Avium, a less virulent pathogen that is less adapted to the intra-macrophage growth species compared to M. tuberculosis does not suppress macrophage apoptosis [56].
Mycobacterium infections have been indicated to subvert pro-apoptotic TNF-α signaling by either reduction of TNFα expression or downregulation of its receptor [57], as well as differential NF-κB activation [58]. It was found that both virulent and avirulent M. tuberculosis strains activated NF-κB after 4 h in THP-1 cells however after 48 h only the virulent strain maintained NF-κB activation, which lead to up-regulation of a bcl-2 family anti-apoptotic member, bfl-1/A1. These results indicate that NF-κB activation may be a determinant factor for the success of virulent mycobacteria within macrophages. L. pneumophila induces a robust activation of caspase-3 in alveolar macrophages however the apoptotic cell death is not executed until late stages of the infection, concomitant with the termination of intracellular replication [59].
Recent studies of hepatitis C infection directly implicate the sustained upregulation of the cFLAR gene product in preventing TNF-α-mediated apoptosis and contributing to disease initiation by the virus core protein [52]. The induction of cFLAR during toxigenic anthrax infection likely contributes to prevent apoptosis in a similar fashion, particularly when combined with the suppression of TNF-α production, as it has been shown that LeTx inhibited the TNF-α production in both endotoxin-stimulated macrophages [60,61] and anthrax bacterial cell wall-stimulated peripheral blood mononuclear cells [6]. A mechanism consistent with our observations has been suggested for THP-1 cells stimulated through TLR-4 with the poreforming hemolysin, anthrolysin O (AnlO) [54]. In this report the NF-κB-dependent activation of Bfl-1/A1 gene by AnlO contributed to anti-apoptotic signaling, although overall the TLR-4 stimulation resulted in apoptosis. However the role of AnlO in vivo remains to be elucidated. Another possible pathway of apoptosis inhibition in mac- rophages includes EdTx-mediated activation of the antiapoptotic Platelet Activator Inhibitor type 2 gene (PAI-2), although THP-1 cells carry an inactivating mutation in this gene [46].
CFLAR protein induction in THP-1 cells
The previous findings that anthrax LeTx inhibits the p38 signaling pathway by cleavage of MAPKKs in macrophages and dendritic cells [60,62] are consistent with the protective role of this stress-activated pathway in elimination of intracellular pathogens and activation of host immunity [63]. From this standpoint, the proteolytic activity of LeTx on the MAPK activation pathway may contribute to establishing infection by preventing cell activation in response to infection and thereby increasing the bacterial survival within phagocytes [19,60].
Conclusion
Using a matched pair of the pXO1 (+) and pXO1 (-) B. anthracis strains, we demonstrate that the monocyte-type THP cells in vitro generate a distinct set of transcriptional responses to infection characteristic of the virulence factors contributed by the XO1 plasmid. Among these responses, the delay of apoptosis associated with the elevated expression of anti-apoptotic cFLAR represents a likely pathogenic strategy directed toward bacterial survival in the conditions of the increased host inflammatory response. This may be particularly relevant for hosts carrying mononuclear cells which, like THP-1, are resistant to LeTx-induced cytolysis. In the absence of pXO1, the nontoxigenic strain induces early apoptosis consistent with its protective role in several infectious diseases as a form of the host innate immune response. This finding implies that in addition to LeTx, virulent B. anthracis possesses a chromosome-encoded factor(s) triggering a pro-apoptotic host response, as well as the plasmid factors counterbalancing apoptosis.
Bacterial strains and THP-1 cell cultures
Bacillus anthracis strain 34F2 Sterne (pXO1 + , pXO2 -) was obtained from Colorado Serum Company (CO). Δ Sterne strain, from the Collection of the National Center for Biodefense and Infectious Diseases (George Mason University, Manassas, VA), is a plasmidless derivative of Sterne strain generated by curing of the pXO1 plasmid by growth for 6 days in LB media at 42.5°C [64]. In our experiments, the absence of pXO1 in the cured strain was confirmed by PCR with the protective antigen-specific primers. B. anthracis is one of the most monophyletic bacterial strains known, however the introduction of genomic changes by the curing process cannot be discounted. Despite the uniformity of its genome, B. anthracis strains do contain sequences which exhibit variability in response to selective pressures and allow differentiation. As it is not possible to test all genomic sequence without considerable time and expense, we therefore applied a high-resolution multi locus sequence typing using the technique and primers of Helgeson [65] to ensure that both strains were at least identical for all variable sequences tested (Additional file 2) Growth curves for both strains were also identical in 1% fetal bovine serum (FBS) cell culture medium, with 4 h corresponding to the mid point of the exponential phase of growth (Additional file). Spores were generated by growth for 4 days on LB agar plates followed by re-suspension in water and pelleting by centrifugation two to four times, then storage in water at 4°C until use. tion of THP-1 monocytes by both strains for 4 h in the conditions of our experiments resulted in a viability of greater than 90% relative to untreated cells.
Microarrays and ANOVA analysis
RNA was collected and purfied immediately at the appopriate timepoints by Trizol (Invitrogen Corporation, Carlsbad, CA), and quality determined by checking on a Lab-on-a-Chip Bioanalyzer 2100 (Agilent Technologies, Santa Clara, CA). Samples were similarly quality checked after both probe labelling and hybridizations steps. The microarray platform was developed, printed, hybridized and analyzed at Avalon Pharmaceuticals, Inc.(Germantown, MD). Probe design and printing were all done using automated, proprietary processes (Avalon Pharmacueticals, Germaintown, Md) and contained quality control steps after each. Arrays consist of proprietary 80-mer long, spotted oligonucleotides representing 2,000 human genes. Each chip contains four replicates of each gene (8,000 total/chip) with 64 perfect match and 64 mismatch controls. Arrays were hybridized using standard Cy3/Cy5 sample/reference labeling and analyzed using a ScanArray reader (Perkin-Elmer, Wellesly, MA). Several sets of genes related to apoptosis, stress response, and cell differentiation were chosen for further evaluation. To determine whether these genes were differentially expressed between two experimental states, an ANOVA was performed on the log-transformed expression ratios as described in [66]. The following model was used for this analysis: y ijkl = E i + M j + T k + (EMT) ijkl + ε ijkl where y ijkl denotes the log 2 expression ratio measured for experimental state i, MOI j, time state k, replicate l, with 1 ≤ i ≤ 3, 1 ≤ j ≤ 2, 1 ≤ k ≤ 2, and 1 ≤ l ≤ 6. The term E i measures the effect of the infection; M j measures the effect of the MOI; T k measures the effect of the time state, and the term (EMT) ijkl measures the interaction effect of all three variables. An ANOVA was performed on each gene using the linear model above. Twelve contrasts were based on pairwise differences between each experimental condition and the control state, as well as the experimental state which differed in the MOIs only. The R package limma was used for ANOVA methods [67]. A multiple testing correction [68] controlled the false discovery rate, which is the expected proportion of false positives in the rejected hypotheses. Genes with adjusted F-statistic p-values < 0.05 were collected for further inspection.
Representation of biological response modules
Expression values of genes from each of three replicate chips for each treatment were averaged, and genes corresponding to specific gene ontology (GO) categorizations (modules) http://www.godatabase.org. were clustered using 'Cluster' http://rana.lbl.gov/EisenSoft ware.htm [34,70]. Clustering was performed without additional filtering or adjustment, using the uncentered correlation coefficient, followed by complete linkage clustering.
Western Blot Analysis and ELISA
For Western blot analysis, spore-treated THP-1 cells were lysed in a buffer containing 25 mM Tris-HCl, pH 7.6, 150 mM NaCl, 1% NP-40, 1% sodium deoxycholate and 0.1% SDS (RIPA buffer, Pierce) and cell debris was removed by centrifugation. Resulting proteins were separated in a 4-20% gradient gel and electrophoretically transferred to a PVDF membrane. The membrane was probed with mouse monoclonal antibodies against cFLAR (Abnova), and actin (Chemicon), and then the horseradish peroxidisaseconjugated anti-mouse secondary antibody. To quantify TNF-α proteins secreted into culture medium, human TNF-α ELISA was performed according to the manufacturer's recommendation (BD Biosciences). The culture supernatants were collected by centrifugation of THP-1 cells cultured in a serum-free medium at the indicated time point post spore challenge. To investigate whether NF-kB is involved in cFLAR expression, 10 μM NF-kB inhibitor 6-amino-4-(4-phenoxyphenylethylamino) quinazoline, a cell-permeable quinazoline (QNZ, Sigma), was added to THP-1 cell culture 30 min prior to treatment with spores (MOI 1). After incubation for 8 h, cells were lysed in RIPA buffer and analyzed by Western blotting as described above.
Apoptosis Detection
For detection of apoptosis, terminal dUTP nick-end labeling (TUNEL) assay was performed according to manufacturer's recommendation (Roche Applied Science). Briefly, the spore-challenged THP-1 cells were washed 3 times with phosphate-buffered saline (PBS), and fixed with 2% paraformaldehyde for 60 min at room temperature. The cells were washed with PBS and permeabilized with 0.1% Triton X-100 in 0.1% sodium citrate for 2 min. TUNEL reaction was carried out for 60 min at 37°C in a humidified atmosphere by adding 50 μL of reaction mixture containing terminal deoxynucleotidyl transferase. The TUNEL-positive cells were analyzed under a fluorescence microscope. The number of objects with the intensities above background on the images was determined using the Elements software (NikonNicon). | 7,519.4 | 2008-11-13T00:00:00.000 | [
"Biology"
] |
A motivic local Cauchy-Crofton formula
In this note, we establish a version of the local Cauchy-Crofton formula for definable sets in Henselian discretely valued fields of characteristic zero. It allows to compute the motivic local density of a set from the densities of its projections integrated over the Grassmannian.
Introduction
The aim of this note is to establish a motivic analogue of the local Cauchy-Crofton formula. The classical Cauchy-Crofton formula is a geometric measure theory result stating that the volume of a set X of dimension d can be recovered by integrating over the Grassmannian the number of points of intersection of X with affine spaces of codimension d, see for example [9]. It has been used by Lion [11] to show the existence of the local density of semi-Pfaffian sets. Comte [5], [6] has established a local version of the formula for sets X ⊆ R n definable in an o-minimal structure. The formula states that the local density of such a set X can be recovered by integrating over a Grassmannian the density of the projection of X on subspaces. This allows him to show the continuity of the real local density along Verdier's strata in [6].
The local Cauchy-Crofton formula appears as a first step toward comparing the local Lipschitz-Killing curvature invariants and the polar invariants of a germ of a definable set X ⊆ R n . It is shown by Comte and Merle in [8] that one can recover one set of invariants by linear combination of the other, see also [7].
A notion of local density for definable sets in Henselian valued field of characteristic zero has been develloped by the autor in [10]. Our formula is a new step toward developing a theory of higher local curvature invariants in non-archimedean geometry.
A p-adic analogue has been developed by Cluckers, Comte and Loeser in [2, Section 6]. We will follow closely their approach. Our precise result appears as Theorem 4.1 at the end of Section 4.
Acknoledgements. Many thanks to Georges Comte for encouraging me to work on this project and useful discussions. I also thanks Michel Raibaut for interesting comments.
Motivic integration and local density
We assume the reader is familiar with the notion of motivic local density developed by the autor in [10] and in particular with Cluckers and Loeser's theory of motivic integration [3], [4]. See [10, Section 2.7] for a short summary of the theory.
Key words and phrases. Motivic integration, Henselian valued fields, local density, metric invariants.
We adopt the notations and conventions of [10]. In particular, we fix T , a tame or mixed-tame theory of valued fields in the sense of [4]. Such a T always admits a Henselian discretely valued field of characteristic zero as a model. Definable means definable without parameters in T and K is (the underlining valued field of) a model of T with discrete value group and residue field k enough saturated.
For example, we can take for T the theory of a discretely valued field of characteristic zero in the Denef-Pas language ; if the residue field is of characteristic p > 0, one need to add the higher angular components.
For each definable set X, Cluckers and Loeser define a ring of constructible motivic functions C(X), which include the characteristic functions of any definable set Y ⊆ X.
For ϕ ∈ C(X) of support of dimension d and that is integrable, they define If d = dim(X), we drop the d from the notation. If ϕ is the characteristic function of some definable set Y ⊆ X, we denote the integral µ d (Y ). If the residue field k is algebraically closed, the target ring of motivic integration is equal to where B(x, n) is the ball of center x and valuative radius n.
There is some e ∈ N * such that for each i, the subsequence (θ ke+i ) k∈N converges to some d i ∈ C({ * }). Here C({ * }) has a topology induced by the degree in L. We define the motivic local density of X at x to be It is shown in [10] that one can compute the motivic local density on the tangent cone. Fix some Λ ∈ D, where D = {Λ n,m | n, m ∈ N * }, The Λ-tangent cone with multiplicities is a definable function CM Λ x (X) ∈ C(K m ), of support C Λ x (X), well defined up to a set of dimension < d. For example, if X ⊆ K m is of dimension m, there is no muliplicity to take into account and CM Λ x (X) is the characteristic function of C Λ x (X). Theorem 3.25 and 5.12 from [10] state that there is a Λ such that for all Λ ′ ⊆ Λ,
Local constructible functions
Consider a definable function π : X → Y between definable sets X and Y of dimension n. Recall form [4] the notation π ! (ϕ) ∈ C(Y ) for any motivic constructible function ϕ ∈ C(X).
If X is a definable subset of K n and x ∈ K n , define the ring of germs of constructible motivic functions at x by C(X) x : Consider now a linear projection π : It does not depend on the large enough r chosen. Indeed, as π : being the projection on Y . Hence (up to taking a finite definable partition of X), for some r 0 , for any r ≥ r 0 ,
Grassmannians
Fix a point x ∈ K n and view K n as a K-vector space with origin 0. Then denote by G(n, d) K the Grassmannian of dimension d subvector spaces of K n . The canonical volume form on G(n, d) K invariant under GL n (O K )-transformations induce a constuctible function ω n,d on G(n, d) invariant under GL n (O K ) transformations, see [3, Section 15] for details. Since G(n, d) k is smooth and proper, the motivic volume of G(n, d) K is equal to the class [G(n, d) k ] of G(n, d) k is the (localized) Grothendieck group of varieties over the residue field k. Denoting F q the finite field with q elements, it is known, see for example [1], that |G(n, d)(F q )| = (q n − 1)(q n − q) · · · (q n − q r−1 ) (q r − 1)(q r − q) · · · (q r − q r−1 ) .
The proof rely on the fact that Since the analog of this formula holds in K(Var k ), the same proof shows that Note that even if the right hand side can be written without denominator, hence as an element of K(Var k ), one has to work in K(Var k ) loc to show the equality with this method. In particular, this shows that [G(n, d) k ] is invertible in K(Var k ) loc . The motivic volume of G(n, r) K is then invertible. Hence we can normalize ω n,r such that 1 = V ∈G(n,r) ω n,r (V ).
For V ∈ G(n, n − d), define p V : K n → K n /V the canonical projection. We identify K n /V to K d as follows. There is some g ∈ GL n (O K ) such that g(K n−d × {0} d ) = V . We identify K n /V to g({0} n−d × K n−d ). The particular choice of g does not matter thanks to the change of variable formula. If X is a dimension d definable subset of K n then there is an dense definable subset Ω = Ω(X, x) of G(n, n−d) such that for every V ∈ Ω, π V satisfies condition ( * ). Indeed the tangent K × -cone of X is of dimension at most d and it suffices to set which is indeed dense in G(n, n − d) In particular, for any V ∈ Ω, p !,x (ϕ) is well defined for any ϕ ∈ C(X) x . With these notations, we can now state our motivic local Cauchy-Crofton formula. Recall that Θ d (X, x) ∈ C( * ) ⊗ Q is the motivic local density of X at x, Theorem 4.1 (local Cauchy-Crofton). Let X ⊆ K n a definable set of dimension d and x ∈ K n . Then By [10, Proposition 3.8], we may assume X = X. We can also assume x = 0 and 0 ∈ X. Indeed, if 0 / ∈ X, then both sides of the formula are 0.
Tangential Crofton formula
We start by proving the theorem in the particular case where X is a Λ-cone.
Lemma 5.1. Let X be a definable Λ-cone with origin 0 contained in some Π ∈ G(n, d). Then Proof. Assume Λ = Λ e,r . Fix some V ∈ G(n, n − d) such that π V : Π → K d is bijective. As X is a Λ-cone and π V is linear, π V (X) is also a Λ-cone. From the definition of local density, see also [10,Remark 3.11], we have Then since X is a Λ-cone and π V is bijective, we have a disjoint union The set C j is indeed a disjoint union since it is the image of X ∩ S(0, i) by the application Indeed the function ϕ V restricted to X ∩ S(0, i) is a definable bijection of image D V i since π V is linear and bijective on Π.
By the change of variable formula, we have L −v(Jac(ϕV (x))) .
Since ω n,n−d is invariant under GL n (O K )-transformations and ϕ V (x) = ϕ gV (x ′ ), by the change of variable formula we get that C i (x) is equal to V ∈G(n,n−d) Moreover, it is independent of i by linearity of π V , hence we denote it by C and we have We also have Combining Equations 2, 6 and 4, we get V ∈G(n,n−d) This last expression is equal to = CΘ d (X, 0). We find C = 1 by computing both sides of the previous equality with X = Π. This following lemma is the motivic analogous of the classical spherical Crofton formula, see for example [9,Theorem 3.2.48]. See also [2, Remark 6.2.4] for a reformulation in the p-adic case.
Lemma 5.2. Let X be a definable Λ-cone with origin 0. Then Proof. We only have to modify slightly the proof of Lemma 5.1. Indeed, we use now the function restricted to the smooth part of X. It is now longer injective on X ∩S(0, i), however the motivic volume of the fibers is taken into account in p V !,0 (X). Hence we get similarly V ∈G(n,n−d) which is equal to CΘ d (X, 0). Once again, we find C = 1 by computing both sides for X a vector space of dimension d.
General case
Before proving Theorem 4.1, we need a technical lemma. Lemma 6.1. Let X ⊆ K n be a definable set of dimension d and V ∈ G(n, n − d) such that the projection p V : C Λ 0 (X) → K d is finite-to-one. Then there is a definable k-partition of X such that for each k-part X ξ , there is a ξ-definable set C ξ of dimension less than d such that p V is injective on C Λ 0 (X ξ )\C ξ . Proof. We can assume Λ = K × . As the projection p V : C Λ 0 (X) → K d is finiteto-one, by finite b-minimality one can find a k-partition of C Λ 0 (X) such that p V is injective on each k-part of C Λ 0 (X). For a k-part C Λ 0 (X) ξ , define B ξ to be the ξ-definable subset of K n defined as the union of lines ℓ passing through 0 such that the distance between ℓ ∩ S(0, 0) and C Λ 0 (X) ξ ∩ S(0, 0) is strictly smaller than the distance between ℓ∩S(0, 0) and C Λ 0 (X) ξ ′ ∩S(0, 0) for every ξ ′ = ξ. Set X ξ = X ∩B ξ . Then setting Y = X\ ∪ ξ X ξ , we have C Λ 0 (Y ) empty and C Λ 0 (X ξ ) ⊆ C Λ 0 (X) ξ . Hence we set C ξ = C Λ 0 (X) ξ \C Λ 0 (X) ξ and we have X ξ and C ξ as required. Proof of Theorem 4.1. From [10, Theorem 3.25], there is a Λ ∈ D such that Θ d (X, 0) = Θ d (CM Λ 0 (X), 0). As in the proof of [10, Theorem 3.25], by [10, Proposition 2.14 and Lemma 3.9], we can assume X is the graph of a 1-Lipschitz function defined on some definable set U ⊂ K d . In this case, CM Λ 0 (X) = 1 C Λ 0 (X) . From Lemma 5.2, we have Θ d (C Λ 0 (X), 0) = V ∈G(n,n−d) Θ d (p V !,0 (C Λ 0 (X)), 0)ω n,n−d (V ).
Hence we need to show that for every V in a dense subset of G(n, n − d), Θ d (p V !,0 (C Λ 0 (X)), 0) = Θ d (p V !,0 (X), 0). We can find a k-partition of X such that p V is injective on the k-parts. Replace X by one of the k-parts and suppose then that p V is injective on X. | 3,225.6 | 2017-11-10T00:00:00.000 | [
"Mathematics"
] |
Virtual Reality-Based Instructional Media through Enriched Virtual Classroom in Microteaching
Current technological advances challenge educational institutions to create innovative learning. Therefore, one way to create innovative learning can be done by developing technology-based learning media models. This research aims to develop innovative learning in higher education using virtual reality-based instructional media through enriched virtual classroom in microteaching courses. The Integrative Learning Design Framework (ILDF) development model is used in this study. Data was collected by distributing questionnaires to media and material experts from educational technology study program to conduct product trials and in-depth interviews with five students and a microteaching lecture at Universitas Negeri Jakarta. The result shows that the virtual reality model can be the ease and achievement of microteaching learning objectives. This research has implications for microteaching learning models for pre-service teachers so that they can provide teaching experience with minimal risk and encourage learning innovation through educational technology
Introduction
Current technological advances challenge educational institutions to create innovative learning (Nguyen et al., 2020).Therefore, one way to create innovative learning can be done by developing technology-based learning media models (Gustiana et al., 2023;Cahyati et al., 2022;Bin-Hady & Al-Tamimi, 2021).By using technology, educational institutions will be able to improve the quality of learning for students.Innovation in learning activities is a model for educational reform that should be carried out by educational institutions, especially educational institutions that contribute to preparing prospective educators.This institution is known as Lembaga Pendidikan Tenaga Kependidikan as known as Institute of Teachers' Education (LPTK).LPTK have challenges in various learning contexts (Hidayah, 2013).Bearing in mind, technological developments and globalization influence learning activities become challenges for LPTK in increasing innovation to prepare superior educators.
LPTK is one of the institutions that produces educational staff such as teachers from kindergarten to upper secondary level (Zahara et al., 2023).Thus, the educational design for pre-service teachers that must be developed by emphasizes content-based and content-specific pedagogy (Albayrak et al., 2023).Both of these designs can direct pre-service teachers to have good teaching skills.Therefore, the need for developing technology-based instructional media is needed for pre-service teachers at LPTK.Besides that, courses in LPTK must reflect the idea that learning to teach involves what is called practice in practice, meaning that the theory in lectures should be tried to be put into practice in the field and the theoretical basis of the practice that occurs in the field has been studied so that a mutually reinforcing relationship occurs.This combination of lectures and fieldwork provides the opportunity to link theory and practice.One of the courses at LPTK that provides knowledge and skills in applying learning activities is called microteaching.
Microteaching is one of courses yang mendorong para calon guru untuk bisa menjadi guru yang terampil dan professional.One way is to facilitate pre-service teachers to train their abilities to carry out learning simulations in a small scope (laboratory).This training is needed before prospective teachers hone their teaching skills in a wider scope, namely at school.Several previous studies have shown that pre-service teachers in the current technological era must have digital literacy skills and good technology use skills (Satriana et al., 2022;Van Allen & Zygouris-Coe, 2019;Lohnes Watulak, 2016).In addition, pre-service teachers must also have a variety of learning innovations that can motivate students in learning (Anderson & Justice, 2015).So, at LPTK there are microteaching activities for pre-service teachers.
In other words, universities which are LPTK use microteaching as a facility for students to gain knowledge and skills in applying learning activities (Ilhami et al., 2023).In microteaching, a group of pre-service teachers train to master basic teaching skills, practice teaching activities, and have discussions to discuss problems found (Arwildayanto et al., 2023).Pre-service teachers exchange roles, one day being a teacher and one day being a student.
From 2020 to 2022, the world of education must face the Covid-19 pandemic, which has significantly changed learning activities change very significantly (Mansor et al., 2021;Abidin et al., 2020) Hover & Wise, 2020).The impact of the pandemic means that microteaching activities cannot be carried out directly.Therefore, microteaching activities are carried out through online learning.This condition makes pre-service teachers face difficulties in carrying out learning activities (Simamora, 2023).This is because the microteaching process should be filled with substantial practicum.The main objective of microteaching is to improve the performance of pre-service teachers in conducting teaching and learning activities through teaching skills training.
In addition, microteaching is used to reconcile teaching theory and practice to preservice teachers.This condition is important for student teacher candidates in practicing in the laboratory.For this reason, the use of enriched virtual classrooms such as virtual reality provides new hope in learning.According to Zhang et al. (2020), virtual reality is known as interaction technology that combines the real world and the virtual world.Thus, many of the scholars' conduct virtual reality trials of learning activities both in schools and tertiary institutions (Rojas-Sánchez et al., 2023;Marougkas et al., 2023;Kuna et al., 2023).The enriched virtual classroom is in the form of virtual reality which is used as an initial introduction to the practice of learning in a classroom-like setting for pre-service teachers and students.
The enriched virtual classroom model is a classification of blended learning that can make it easier for users to carry out the learning process (Pivneva et al., 2020;Dakhi et al., 2020).Therefore, this learning model is used as an alternative to the distance learning process where activities are dominated by technology.The enriched virtual model consists of reading, cross learning, communication and debate via electronic channels.Additional sessions can be provided at schools for supporters.The emergence of virtual reality technology in the classroom which is developing rapidly and amazingly with various subversive advantages into an integrated solution for enriched virtual classrooms, has attracted the attention of both industry and education (Dong, 2016).Virtual reality which can simulate the intravenous catheter insertion procedure in a risk-free artificial environment, allows for repeated exercises, thereby exposing students to perform simulations with various patient conditions, providing immediate feedback quickly and easily (Jenson & Forsyth, 2012).
Virtual reality is implemented in microteaching behavior skills training when dealing with fire.The results show that virtual reality can safely improve fire safety behavior skills (Çakiroğlu & Gökoğlu, 2019).Research of Darojat et al. (2022) also shows that virtual reality technology can help users see a video with a 360° rotating angle and make it easy for them to maximize public speaking techniques.Berger and Cristie (2015) explained that virtual reality is needed because its current use has spread to all aspects of life and is projected to experience significant developments in the future.Then virtual reality is a three-dimensional computerbased interactive environment that simulates reality in the form of observation activities and linkages with the field.Based on several studies above, there has been no research into an enriched virtual classroom in the form of virtual reality as an initial introductory interaction to learning practices in real school settings in an effort to improve general pedagogy and specific pedagogy in the form of observation activities and connection with the field as a novelty.
This study aims to develop innovative learning in higher education using virtual reality-based instructional media to enriched virtual classroom in microteaching courses.This research was carried out using virtual reality, which can be a medium.When the learning process uses the physical environment directly it can be costly and risky.The use of virtual reality is one way to minimize this.The use of virtual reality offers the experience of visiting a place or an object in detail.Virtual reality can also include various types of digital creations, ranging from various forms of multimedia, 3D reconstruction and so on.
Methodology
This study used research and development with Integrative Learning Design Framework (ILDF) model (McKenney & Reeves, 2014).The activity of ILDF model such as the exploration, enactment, and evaluation stages.First, in the exploration stage, researcher collected data and information, formulated learning objectives, microteaching learning analysis, and student characteristics analysis.Second, in the enactment stage, we designed and developed the instructional media by formulating specific instructional objectives, developed an assessment instrument, compiled learning strategies, selected microteaching learning materials, reviewed virtual reality existing products, created flowcharts, and creating a programming storyboard.Third, in the evaluation stage, researcher evaluated the instructional media with one-to-one evaluation by experts, one-to-one evaluation by learners, small group evaluation and field testing (see Figure 1).
Participants in this research were selected through a purposive sampling technique based on certain criteria such as having knowledge and expertise related to virtual reality and microteaching for expert media, expert materials, and lectures.Meanwhile, student participants, namely those taking microteaching courses, were adjusted to research needs.Therefore, this research collaborated with one media expert and one material expert from educational technology study program at Universitas Negeri Malang.Participants in this research also involved five students from the educational technology study program and one microteaching lecturer at Universitas Negeri Jakarta.
This research uses data collection techniques through distributing questionnaires, interviews and documentation.The interview technique is carried out at the exploration and evaluation stage one-to-one by learners and lecture.Meanwhile, documentation techniques are carried out at the exploration and enactment stages.Semi-structured interviews were used by researchers to collect data related to the need and urgency of developing virtual reality as an alternative to microteaching in the laboratory as well as input on the products used.Interview activities with students and lecturers were carried out in two stages.The first stage before instructional media is developed.And the second stage after the product is developed into virtual reality.Interview activities were carried out for 90 minutes for each participant.
The researcher used a tape recorder and small notes to record the interview process.Student participants use the PS A -PS E code to maintain the research code of ethics.The questionnaire used is an open questionnaire, that is, media experts and material experts can provide input openly.Researchers used a Likert scale for alternative answers 1-4 (Strongly Disagree -Strongly Agree).Material expert indicators include suitability of learning objectives, clarity of instructions for use, clarity of performance, and materials.Besides that, indicators for media experts include visuals, software engineering, navigation and learning design.These indicators were developed through research (Kustandi et al., 2020).
Figure 1. Research Design by ILDF Model
The results of filling out the questionnaire are then processed using feasibility percentage data analysis techniques based on Arikunto (2009).The maximum expected value is 100% while the minimum is 0%.Meanwhile, the results of interviews and documentation use data reduction, data presentation and drawing conclusions (Miles & Huberman, 1994)
Exploration
Microteaching is an alternative for pre-service teachers to develop and foster their teaching skills.Generally, this microteaching activity begins with preparing a learning plan which is then submitted to the supervisor before being presented.After that, students who do not appear during microteaching will serve as supervisors, written observers, oral observers, or students in class.Given the need to develop an enriched virtual classroom, researchers conducted a needs analysis related to this urgency.Since the Covid-19 pandemic hit, the microteaching learning process has prevented supervisors from controlling and training preservice teachers directly.This condition makes pre-service teachers can not prepared teaching skills properly.
Microteaching learning via Zoom shows shortcomings because we have difficulty knowing the extent of our teaching skills (PS A, May, 2023).As a pre-service teacher, practical microteaching activities are very important to do.Moreover, I have never taught before in class.So, these activities need to be developed systematically and objectively (PS B, May, 2023).Based on the results of these interviews, it shows that not all courses can use media such as Zoom or other video conferences for learning.Microteaching is one of the practical subjects that students can feel and understand if they do it directly.
Therefore, the need for innovative instructional media in microteaching courses is important to develop.Pre-service teachers were directed to understand and deepen the concept of basic teaching skills.The research findings show that students have difficulty applying their abilities in practicing various approaches, strategies, and learning methods.Apart from that, students experience obstacles in the microteaching process at LPTK, including: a lack of time and opportunities to carry out simulations in practicing learning
Exploration
Need analysis, survey literature, theory develop, and audience characterization
Enactment
Research/system design, articulated prototipe, and detailed design
Evaluation
Evaluation by experts, evaluation by learners, small group evaluation and field testing approaches, strategies, and methods due to the high burden of learning outcomes.Students understand the material only during learning process.After the lecture is finished, the material on the steps, strategies, and learning methods that have been studied is simply forgotten by students.
Besides that, the use of virtual reality in microteaching is still very minimal.Because this model has never been used for learning.Through virtual reality technology, it will certainly optimize student potential in carrying out learning simulations.This condition creates a challenge for researchers to develop virtual reality-based instructional media that makes it easier for students to learn microteaching.And then, in practical teaching activities at school, students have not used good media, and have not utilized the internet.The use of technology in learning is limited to using PowerPoint slides to convey material.
Enactment
Based on the exploration results, researchers carried out an enactment process.There are two concepts underlying this research, namely microteaching and virtual reality.In other words, the two concepts have a relationship with each other.Microteaching consists of process material input and output.Meanwhile, learning should be fun.Thus, researchers designed relevant virtual reality-based instructional media and focused on the enriched virtual classroom.Figure 2. is the result of a prototype that researchers have tried to develop and can be used via smartphone.This is done to make it easier for developers to create application models for later testing by experts.Figure 3 shows the classroom model developed through virtual reality.
Evaluation
The evaluation process in this research was carried out in two stages.This is because researchers need input from experts, lecturers and learners before carrying out large-scale trials.Therefore, product evaluation activities use the one-to-one expert stage.Table 1 shows that the virtual reality-based instructional media design by researchers from the results of trials by material experts is feasible to use.The material expert test also reminded us that the sentences prepared to be checked again were following the EYD (Ejaan Yang Disempurnakan/Bahasa Improved Spelling).Apart from that, material experts wrote that the entire material was clear, aroused student interest and was in accordance with student characteristics, but added a description of the latest concepts from Microteaching to enrich the substance.In other words, in terms of learning materials, virtual reality-based microteaching is feasible to use.Apart from that, researchers also conducted trials with media experts.Table 2 shows that virtual reality-based instructional media is feasible for testing on students from a media perspective The media experts also explained that the media being developed is interesting and has a high level of novelty.Therefore, it can be helped pre-service teachers before carrying out microteaching practices.However, further development is expected to use real humans to communicate two-way with pre-service teachers.Apart from that, researchers are expected to carry out checks using existing devices on aspects of hardware, tablet, and Android compatibility to communicate the material well.However, media experts generally assess that the product being developed follows the display elements.Thus, virtual reality-based microteaching through an enriched virtual classroom is feasible to use.
After conducting expert trials, the researchers conducted a one-to-one learner evaluation.This evaluation is carried out through interviews.The interview results show that there is ease and achievement of microteaching learning objectives through the developed virtual reality.This is because the material presented is interesting and easy to access anywhere.Moreover, using virtual reality is supported by a focus on enriched virtual classrooms.So they can understand the situation in class even though it is virtually.Besides that, microteaching material is suitable for virtual reality.In other words, the results of this interview show that the product being developed is feasible to use.
The results of this study answer the challenge of Nguyen et al. ( 2020) regarding technological advances that universities must face.Apart from that, this research also illustrates that apart from technological advances, pandemic events also pose challenges in carrying out learning activities.This is because the use of technology needs to be adapted to the needs of learners.Therefore, researchers use exploration as the first step to analyze the needs of learners in microteaching learning.In contrast to Bin-Hady et al. (2021), this research offers the use of virtual reality technology for microteaching learning.So, innovation in developing instructional media is important.
This study shows that developing a virtual reality-based instructional media can supported LPTK to face challenges related to the learning context (Hidayah, 2013).This context exists in virtual reality through an enriched virtual classroom.As pre-service teachers, students can carry out learning independently and can be repeated without going to the laboratory.They can practice microteaching activities at home with the developed virtual reality.Thus, the results of this study support Zahara et al. (2023), where LPTK as an educational institution, is required to produce quality educational staff.Not only quality, but the development of virtual reality can help pre-service teachers encourage them to use technology-based learning.This is because pre-service teachers are required to have good digital literacy skills.
In contrast to Van Allen and Zygorius (2019), this research seeks to prepare pre-service teachers to have digital literacy competencies using virtual reality.In virtual reality, researchers added a guide that they could read.Apart from that, they can also practice internet inquiry skills in virtual reality developed by researchers.Therefore, the presence of virtual reality makes microteaching learning possible in its real form.Only the scale is reduced, and uses virtual glasses as aids (Çakiroğlu & Gökoğlu, 2019).
The demands on pre-service teachers after the Covid-19 pandemic are greater because they must have various learning innovations.Therefore, this research supports Anderson and Justice (2015) study that for pre-service teachers to have learning innovation and motivate students to learn, they must also receive the necessary teaching training.Thus, virtual reality helps them to have basic teaching skills in the classroom (Dong, 2016).For this reason, this research answers Simamora (2023) concerns about pre-service teachers in facing the pandemic when microteaching activities cannot be carried out directly in the laboratory.Unfortunately, this research has not been able to meet the expectations of Arwildayanto et al. (2023) related to microteaching skills that pre-service teachers must have.Bearing in mind the product development carried out has not been able to help pre-service teachers practice discussing with students.
In line with Halimah (2017), the implementation of microteaching, this study applies the cognitive stage and the implementation stage.At the cognitive stage, pre-service teachers were directed to understand basic skills through explanations in the enriched virtual classroom.Furthermore, the user will choose a class according to the learning strategy he wants to do.After entering the class, the prospective teacher will listen again to the explanation of the learning strategy and start learning activities.However, at the reverse stage it is still in the development process.This is because each pre-service teacher can only feel teaching practices as a user.
Moreover, the research offers a virtual reality model with a different concept from Pivneva et al. (2020).Virtual reality through an enriched virtual classroom can allow preservice teachers to carry out microteaching outside the laboratory.Apart from that, this research also shows that virtual reality can be used not only by students in the fields of health, scientific science, and technology but also in the field of teacher education (Darojat et al., 2022;Jenson & Forsyth, 2012).
Conclusion
This research concludes that technology continues to develop, and the Covid-19 pandemic is challenging for educational institutions to carry out learning innovations.Microteaching, which is usually done face-to-face, can now be done through virtual reality media.This virtual reality media was developed through an enriched virtual classroom that pre-service teachers can use via a smartphone.In other words, virtual reality makes it easy for pre-service teachers to carry out microteaching activities by simulating and modeling artificial spaces.Material experts and media experts support the development of this instructional media by providing appropriate assessments for further use and testing by pre-service teachers.Based on the assessment of media experts, this research has limitations at the implementation stage.In the enriched virtual classroom, two-way communication with preservice teachers is not yet possible.One of the factors that become obstacles is the lack of tools used.So, in future research, researchers will work together with programming experts to carry out AI coding so that the interaction process becomes more meaningful.This research has implications for microteaching learning models for pre-service teachers, so that they can provide teaching experience with minimal risk and encourage learning innovation through educational technology. | 4,709.6 | 2023-10-27T00:00:00.000 | [
"Computer Science",
"Education"
] |
Mycothiol biosynthesis is essential for ethionamide susceptibility in Mycobacterium tuberculosis
Spontaneous mutants of Mycobacterium tuberculosis that were resistant to the anti-tuberculosis drugs ethionamide and isoniazid were isolated and found to map to mshA, a gene encoding the first enzyme involved in the biosynthesis of mycothiol, a major low-molecular-weight thiol in M. tuberculosis. Seven independent missense or frameshift mutations within mshA were identified and characterized. Precise null deletion mutations of the mshA gene were generated by specialized transduction in three different strains of M. tuberculosis. The mshA deletion mutants were defective in mycothiol biosynthesis, were only ethionamide-resistant and required catalase to grow. Biochemical studies suggested that the mechanism of ethionamide resistance in mshA mutants was likely due to a defect in ethionamide activation. In vivo, a mycothiol-deficient strain grew normally in immunodeficient mice, but was slightly defective for growth in immunocompetent mice. Mutations in mshA demonstrate the non-essentiality of mycothiol for growth in vitro and in vivo, and provide a novel mechanism of ethionamide resistance in M. tuberculosis.
Introduction
The increase in drug resistance in Mycobacterium tuberculosis clinical isolates has impeded the full success of tuberculosis (TB) control. The WHO estimates that 4.3% of the newly and previously treated TB cases are multidrug-resistant (MDR) meaning that these strains are resistant to at least the two best anti-TB drugs: isoniazid (INH) and rifampicin (Zignol et al., 2006). Alarmingly, there has been an emergence of M. tuberculosis strains resistant to four to seven TB drugs (termed XDR-TB for extensively drug-resistant TB) that have been associated with the rapid death of HIV-infected individuals (Gandhi et al., 2006;Shah et al., 2007). A more effective treatment for both MDR-and XDR-TB strains requires rapid detection and therefore understanding of all the mechanisms leading to drug resistance.
Isoniazid, the cornerstone of front-line TB treatment, shares a common target with the second-line TB drug ethionamide (ETH). Both INH and ETH are pro-drugs that require activation to form adducts with NAD to subsequently inhibit InhA, the NADH-dependent enoyl-ACP reductase Quemard et al., 1995) of the fatty acid biosynthesis type II system (Marrakchi et al., 2000). However, activation of INH and of ETH occurs through different pathways. INH is activated by the katGencoded catalase peroxidase (Zhang et al., 1992;Wilming and Johnsson, 1999) to form the INH-NAD adduct (Rozwarski et al., 1998). ETH, on the other hand, is activated by the ethA-encoded mono-oxygenase (Baulard et al., 2000;DeBarber et al., 2000;Vannelli et al., 2002) to yield the ETH-NAD adduct (Wang et al., 2007). Mutations in either activator confer resistance to INH or ETH respectively (Piatek et al., 2000;Morlock et al., 2003;Ramaswamy et al., 2003;Hazbon et al., 2006). Co-resistance to INH and ETH can be mediated by mutations that alter the InhA target so as to prevent the INH-NAD or the ETH-NAD adduct from binding (Vilcheze et al., 2006), by mutations that cause InhA overexpression Vilcheze et al., 2006) or by mutations in ndh that increase the intracellular NADH concentration, thereby competitively inhibiting the binding of the INH-NAD and ETH-NAD adducts to InhA (Miesel et al., 1998;Vilcheze et al., 2005). While the majority of clinical isolates resistant to INH or ETH have been shown to map to the activator genes (katG, ethA) or the inhA target, current studies still show that up to 22% of the INH-resistant M. tuberculosis clinical isolates have no mutations in the genes known to be involved in INH or ETH resistance . In this study, to identify novel mutations conferring INH and ETH resistance, we isolated spontaneous mutants of M. tuberculosis in vitro and found that they map to mshA, a gene encoding a glycosyltransferase involved in mycothiol biosynthesis, suggesting that mshA was non-essential. Additional genetic and biochemical studies demonstrated that mycothiol biosynthesis is required for ETH susceptibility in M. tuberculosis. Furthermore, in vivo studies showed that mycothiol is not required for growth in mice. (Telenti et al., 1997;Piatek et al., 2000;Ramaswamy et al., 2003;Cardoso et al., 2004;Hazbon et al., 2006). To eliminate the majority of spontaneous mutants of M. tuberculosis that are singly resistant to INH and map to katG, we chose to isolate mutants that were co-resistant to INH and its structural analogue ETH. Samples of three independent M. tuberculosis H37Rv cultures were plated on media containing low concentra-tions of both INH and ETH [Յ 4-fold the minimum inhibitory concentration (MIC)]. Seven mutants were isolated at low frequencies (1-4 ¥ 10 -8 ). DNA sequence analysis of targeted genes in these seven strains revealed the absence of mutations in the genes known to mediate co-resistance to INH and ETH, namely inhA (the gene or its promoter region) and ndh. This analysis provided the evidence that these strains possessed mutations that conferred INH and ETH resistance and had not been previously identified in M. tuberculosis. The mutants were transformed with a cosmid genomic library of the drugsusceptible M. tuberculosis parent. The frequency of transformation was extremely low for most of the mutants (less than 100 transformants per transformation), and only one mutant, mc 2 4936, which had the lowest level of INH resistance, yielded more than 1000 transformants. The cosmid transformants were screened for restoration of INH and ETH susceptibility. One potential complementing cosmid was isolated, sequenced and shown to contain the mshA gene, a gene characterized as mediating the first step in the biosynthesis of mycothiol , a key thiol in the family of Actinomycetes bacteria (Newton et al., 1996). A link between mycothiol biosynthesis and resistance to INH and ETH had been previously established in Mycobacterium smegmatis when transposon mutants in mshA were found to be resistant to INH (more than 25-fold) and ETH (sixfold) (Newton et al., 1999;Rawat et al., 2003). Subsequent sequence analysis of mc 2 4936 and the other mutants showed that all the M. tuberculosis H37Rv mutants had missense, nonsense or frameshift mutations in mshA (Table 1). The mshA mutants had various levels of resistance to INH (2-to 16-fold) and ETH (four-to eightfold) (Table 2). This is the first report that mshA mutations confer co-resistance to INH and ETH in M. tuberculosis.
Identification of a new ethionamide resistance mechanism 1317
The mshA mutants of M. tuberculosis are defective in the synthesis of mycothiol The glycosyltransferase MshA is the first step in mycothiol biosynthesis that leads to the formation of N-acetylglucosamine inositol Newton et al., 2006). The biosynthesis of mycothiol requires five enzymes to form N-acetyl-cysteine glucosamine inositol or mycothiol from inositol-1-phosphate and UDP-N-acetylglucosamine: the glycosyltransferase MshA, the phosphatase MshA2, the deacetylase MshB, the cysteine ligase MshC and the acetyltransferase MshD ( Fig. 1). To analyse the effects of the diverse mutations in mshA on the biosynthesis of mycothiol, the levels of mycothiol were measured in all the mutants using fluorescent highperformance liquid chromatography (HPLC) assay (Newton et al., 2000a). We found a dramatic reduction (83% to undetectable levels) in the concentration of mycothiol compared with wild type ( Fig. 2A). As a control, we also measured the mycothiol level in an INH-and ETHresistant M. tuberculosis inhA mutant, mc 2 4911 (Vilcheze et al., 2006), and found that this mutant had a concentration of mycothiol similar to that in wild type. Complementation of the mutants with pMV361::mshA, an integrative plasmid containing only the mshA gene of M. tuberculosis driven by the hsp60 promoter, restored mycothiol biosynthesis in all the mutants (Fig. 2B). This confirms that the defect in mycothiol biosynthesis was due to the mutations in mshA. Although mycothiol has been suggested to be essential for the growth of M. tuberculosis (Sareen et al., 2003), our data show that M. tuberculosis strains that do not produce mycothiol are viable.
Comparison of the MshA structures of M. tuberculosis and Corynebacterium glutamicum establishes a rationale for the inactivation of MshA in the mutants
Given the sequence identity (45.9%) between M. tuberculosis MshA and Corynebacterium glutamicum MshA (CgMshA) whose structure was recently determined (Vetting et al., 2008), the monomeric homology model of M. tuberculosis MshA was created using CPHmodels 2.0 with the UDP-complexed CgMshA (PDB code 3C4Q) as template (Lund et al., 2002). Superimposition of the model of M. tuberculosis MshA (consisting of Arg46-Ile445) and the chain B from UDP/inositol-phosphatebound CgMshA (PDB code 3C4V) yields an RMSD of 0.65 Å, indicating a high homology between each other (Fig. 3A). Four of the mshA mutants have single amino acid mutation (Table 1). These four amino acids (Arg273, Gly299, Gly356 and Glu361) are conserved in CgMshA (as Arg231, Gly263, Gly319 and Glu324) (Fig. 3C). Each of these amino acids plays an important role in either the substrate binding or the domain interaction (Fig. 3B). The side-chain amines of Arg273 interact with the b-phosphate of UDP via hydrogen bonding. This arginine is also one of the major determinants of the orientation of the inositolphosphate as its side-chain lies against the face of inositol. Gly299 is not in the vicinity of the active site, but should be important for the protein stability as the next residue, Gly300, forms the only interdomain hydrogen bond with Gly61 (Fig. 3A). Although not directly seen in the model, Gly356 was proposed to be involved in the binding of the N-acetyl-glucosamine moiety which shall be transferred from UDP to inositol (Vetting et al., 2008). In mc 2 4936, the Glu361Ala mutation removes the side-chain carboxylate that forms hydrogen bonds with the 2′-and 3′-hydroxyls from the ribose moiety of UDP, which could result in the inactivation of MshA.
The other mshA mutants had either nonsense or frameshift mutations (Fig. 3C). In mc 2 4931 and mc 2 4934, the nonsense mutations caused the loss of active-site elements. In mc 2 4937, the truncation of the protein was close to the C-terminus and the active site was unlikely to be affected. Herein the inactivation of MshA could be explained by the protein's characteristic folding. Based on the homology model of MshA, each monomer is composed of N-terminal and C-terminal domains. Towards the end of C-terminus, a large a-helix spanning Cys409 to Ile445 crosses back to the N-terminal, which is likely to stabilize the overall folding of the protein. Therefore the mutations within this a-helix, such as in mc 2 4937, would detrimentally affect the conformation of MshA leading to its inactivation.
Co-resistance to INH and ETH is not mediated by inhA overexpression nor by increased NADH/NAD + ratios in mshA mutants of M. tuberculosis
Although co-resistance to INH and ETH had been previously identified in M. smegmatis mshA transposon mutants , no mechanism of resistance had been identified to link mycothiol biosynthesis with INH and ETH resistance (Koledin et al., 2002;Rawat et al., 2003). Previously, co-resistance to INH and ETH has been shown to be conferred by three different mechanisms: (i) structural mutations in the inhA target (Banerjee et al., 1994; Vilcheze et al., 2006), (ii) inhA target overexpression Vilcheze et al., 2006), or (iii) increased NADH/NAD + ratios resulting in higher concentration of NADH, which competitively inhibits the binding of the INH-NAD or ETH-NAD adduct to InhA (Miesel et al., 1998;Vilcheze et al., 2005). We reasoned that it was highly unlikely that the gene product of mshA directly interacted with InhA. However, it was possible that mutations in mshA caused overexpression of inhA or altered the NADH/NAD + ratios inside the M. tuberculosis cells. To test these possibilities, we first measured the inhA mRNA levels in three mshA mutants and their complemented strains using a molecular beacon reverse transcription polymerase chain reaction (RT-PCR) assay . In contrast to the P-15 mabA inhA mutation, which has been shown to confer 10-fold overexpression of the inhA mRNA (Vilcheze et al., 2006), all three of these mshA mutants revealed no increase in inhA mRNA levels ( Fig. 4A) and so resistance to ETH and INH was not due to InhA overexpression. We also measured the NAD + and NADH concentrations in each of the mutants and found that the mutants had mostly lower NADH concentrations compared with wild type (Fig. 4B), demonstrating that the co-resistance to INH and ETH was not due to an increase in the NADH/NAD + ratio. The sum of this work suggested that the mshA mutations must mediate a novel mechanism of resistance.
Isolation of precise null deletions of mshA in various M. tuberculosis strains
Introduction of a wild-type copy of mshA restored ETH sensitivity (Table 2) and mycothiol content (Fig. 2B) in all the mshA mutants, but a subset of the mutants (3/7) did not regain INH susceptibility (Table 2). We reasoned that these strains must have acquired secondary mutations to compensate for the loss of mycothiol and that these mutations were also mediating the INH resistance. Although further studies will be needed to identify such mutations, we could hypothesize from these complementation studies that the loss of the mshA gene would mostly confer ETH resistance. To test this possibility, precise null mutants of mshA were generated in three reference strains of M. tuberculosis using specialized transduction (Bardarov et al., 2002) (Fig. S1, Table 1). The H37Rv, CDC1551 and the Erdman DmshA strains were resistant to ETH, but only the CDC1551 DmshA strain showed a twofold increase in INH MIC (Table 2). Previously, deletions in mshB and mshD had been isolated in M. tuberculosis, and only the mshB mutant was shown to be INH-resistant. Our data show that the main drug resistance phenotype of the M. tuberculosis mshA null mutant is resistance to ETH.
All three M. tuberculosis mshA deletion mutants failed to produce any detectable level of mycothiol (Fig. 2C). Complementation of the null mutants with pMV361::mshA restored ETH susceptibility (Table 2) and mycothiol production (Fig. 2D). To rule out the possibility that the lack of mycothiol might cause a compensatory phenotype, we measured total thiol concentrations in the cells and found that the null mutants' thiol concentrations were reduced by 53%, 77% and 82% in M. tuberculosis H37Rv, CDC1551 and Erdman respectively.
A previous study had suggested that the mshA gene was essential in M. tuberculosis . The discovery of frameshift mutations within the A. Three mshA point mutants and their complemented strains, as well as wild type, were either treated or not treated with INH (1 mg l -1 ) for 4 h before the inhA mRNA levels were measured. inhA levels were normalized to sigA expression. The experiment was performed in triplicate. B. NADH and NAD + concentrations in mshA point mutants. The strains were grown to log phase. NADH and NAD + were extracted and measured spectrophotometrically, as described in Experimental procedures. c = pMV361::mshA. mshA gene, followed by our successful construction of null mshA deletions in three independent M. tuberculosis strains, demonstrates that mshA is not an essential gene in M. tuberculosis. Although both studies attempted to generate null mshA mutants in M. tuberculosis using specialized transduction, it would be difficult to extrapolate why one was successful and the other one was not. We can only note that the M. tuberculosis mshA deletion strains were obtained after a very long incubation at 37°C (8 weeks). Furthermore, this study also confirms that mycothiol is the major thiol in M. tuberculosis as it represents more than 50% of the total thiol concentration in M. tuberculosis.
The mshA mutants require catalase to grow As mycothiol has been suggested to be essential for the growth of M. tuberculosis (Sareen et al., 2003), we tested whether the mshA mutants had any growth defect in vitro.
We observed no differences in growth rates in liquid media for the mshA point mutants and the null mutants (Fig. 5A). A previous study showed that an M. tuberculosis DmshD strain producing 1% of mycothiol compared with wild type required OADC (oleic acid-bovine albumin-dextrosecatalase-sodium chloride) to grow on plate . We tested the M. tuberculosis DmshA mutants on Middlebrook 7H10 plates supplemented with glycerol and either OADC or ADS (bovine albumindextrose-sodium chloride). The mutants did not grow on the ADS plates but grew well on OADC plates. We then tested whether the mutants required either oleic acid and/or beef liver catalase to grow, as these are found in OADC supplement but not in ADS. Adding oleic acid to the ADS plate did not allow for growth of the mutants, but the addition of beef liver catalase was sufficient to restore growth on plates (Fig. 5B). The role of catalase in the OADC supplement is to eliminate toxic peroxides in the media. As mycothiol is involved in the detoxification of electrophiles, alkylating agents, antibiotics and oxidants (Rawat et al., 2007), it is not surprising that M. tuberculosis mutants producing no mycothiol require catalase for protection against toxic reactive oxygen intermediates.
Mycolic acid biosynthesis is not inhibited by ETH treatment of mshA mutants
The death of the tubercle bacillus following treatment with INH correlates with inhibition of the biosynthesis of the long-chain a-alkyl b-hydroxy fatty acids (up to 90 carbons in length) called mycolic acids, which are a major constituent of the mycobacterial cell wall (Winder and Collins, 1970;Takayama et al., 1972). ETH, based on its similarity to INH, has also been predicted and shown to inhibit mycolic acid biosynthesis (Winder et al., 1971;Quemard et al., 1992;Baulard et al., 2000). As mycothiol is not known to be involved in the FASII pathway, the resistance mediated by mshA could suggest that the lethal event occurs in some redox function. If so, it may be possible that INH and ETH treatment of mshA mutants does not confer resistance to mycolic acid inhibition by ETH or INH. Fatty acids were extracted from the wild-type M. tuberculosis strains, the DmshA mutants and the DmshA-complemented strains following INH or ETH treatment, and derivatized to their methyl esters. Analysis by thin-layer chromatography (TLC) allowed for the separation between the short-chain fatty acid (up to 26 carbons in length) methyl esters (FAMEs) and the long-chain mycolic acid methyl esters (MAMEs) (Fig. 6). Treatment of the wild-type M. tuberculosis strains and the DmshAcomplemented strains with INH or ETH resulted in inhibition of mycolic acid biosynthesis as shown by the absence of MAMEs on TLC. In contrast, the DmshA mutants were resistant to mycolic acid inhibition upon treatment with ETH, but not with INH (Fig. 6).
The mshA mutants support the premise that ETH inhibits mycolic acid biosynthesis as the mshA mutants were resistant to mycolic acid biosynthesis inhibition upon ETH treatment. Previous studies have shown four different mechanisms of ETH resistance in tubercle bacilli, including: (i) target modification (Banerjee et al., 1994;Vilcheze et al., 2006), (ii) target overexpression , (iii) intracellular NADH/NAD + ratio alteration (Miesel et al., 1998;Vilcheze et al., 2005) and (iv) ETH activator inactivation (Baulard et al., 2000;DeBarber et al., 2000). All four of these phenomena are consistent with ETH being a pro-drug that is activated to form an adduct with NAD and this ETH-NAD adduct inhibits InhA, which results in mycolic acid biosynthesis inhibition (Vilcheze and Jacobs, 2007b;Wang et al., 2007).
To address the mechanism by which mutations in mshA confer ETH resistance, we can now rule out a number of these known mechanisms. Quantitative PCR analysis demonstrated that the inhA mRNA was not upregulated thereby suggesting InhA was not overexpressed. Moreover, we measured NADH/NAD + ratios and found no increase in NADH concentration in the mshA mutants. All of these data, coupled with the lack of resistance to INH and the high resistance to ETH, allow us to hypothesize that mycothiol plays a role either in the activation step of ETH or in the formation of the ETH-NAD adduct.
Mycothiol promotes ETH activation by the ethA-encoded mono-oxygenase
As the null mutants showed low (twofold the MIC) to no resistance to INH but showed a high level of resistance to ETH (Ն 6-fold the MIC), we therefore postulated that mycothiol could be involved in either ETH activation or ETH-NAD adduct formation in M. tuberculosis. ETH is activated by the NADPH-specific FAD-containing monooxygenase EthA (Baulard et al., 2000;DeBarber et al., 2000;Vannelli et al., 2002). We tested the NADPHdependent mono-oxygenation of ETH by EthA in the presence of mycothiol, and observed an increase in the rate of reaction directly proportional to the increase in mycothiol concentration, suggesting that mycothiol plays a role in Fig. 6. Fatty acid methyl ester (FAME) and mycolic acid methyl ester (MAME) analyses of DmshA mutants. M. tuberculosis wild-type strains, DmshA and complemented strains were treated with INH (0.5 mg l -1 ) or ETH (15 mg l -1 ) for 4 h, and then labelled with [1-14 C]-acetate for 20 h. Fatty acids and mycolic acids were saponified, methylated, extracted and separated by thin-layer chromatography. 14 C-labelled FAMEs and MAMEs were detected by autoradioragraphy after 36 h exposure at -80°C. c = pMV361::mshA. the activation steps rather than in the formation of the ETH-NAD adduct (Table 3). Furthermore, replacing mycothiol by a different thiol, such as reduced glutathione, had no effect on the oxidation rate of NADPH (data not shown). This suggests that the increase in EthA activity upon the addition of mycothiol is specific to mycothiol, and does not occur in the presence of another thiol. To test if mycothiol was also required for the formation of the ETH-NAD adduct, the rate of inhibition of InhA by ETH in the presence of NAD + , NADPH, EthA and mycothiol was also measured. No formation of the ETH-NAD adduct was observed in these conditions (data not shown), which suggests that mycothiol is not involved in the formation of the ETH-NAD adduct. The mycothiol-dependent increase in the rate of NADPH conversion during the activation of ETH by EthA suggests that mycothiol promotes the activation of ETH by EthA. Two other anti-TB drugs, isoxyl and thiacetazone, are also activated by EthA . We therefore tested if the mshA mutants (null and point mutants) were also resistant to isoxyl and thiacetazone and found that they were fully sensitive to both drugs (data not shown). This implies that mycothiol is solely involved in the activation of ETH. We could hypothesize that mycothiol either stabilizes the intermediates formed upon activation of ETH or forms a complex with the active form of ETH, which allows for the formation of the ETH-NAD adduct. More in-depth studies are necessary to fully understand which role mycothiol plays in the activation step.
Mycothiol is not required for M. tuberculosis growth in vivo
Mycothiol has been postulated to be essential for M. tuberculosis growth in vivo. The mshA mutant mc 2 4936, which does not synthesize mycothiol, was chosen to study the survival of immunocompetent C57Bl/6 mice and immunocompromised SCID mice following aerosol infection. No significant difference in survival was observed between mice infected with wild-type M. tuberculosis and the mycothiol-deficient mshA mutant (Fig. 7A). Interestingly, the SCID mice infected with the complemented strain (complementation was done with a replicative plasmid expressing mshA) survived 30 days longer than the parent strain. In vivo growth of the mshA mutant mc 2 4936 in the lungs of immunocompromised and immunocompetent mice was also measured. In SCID mice, the mshA mutant and the wild-type M. tuberculosis strains grew at the same rate, while the complemented strain grew slightly more slowly, which might explain the differences in survival rates (Fig. 7B). In C57Bl/6 mice, the mshA mutant growth was slightly defective after 3 weeks but at week 8 of infection the mycobacterial burden in the lung was comparable between the mutant and the wild-type strain (Fig. 7B).
Our demonstration that M. tuberculosis DmshA strain requires catalase to grow in vitro, but not in mice suggests that either the host is not an oxidatively stressed environment or growth in mice induces alternative thiols that may compensate for the absence of mycothiol. Further studies will be required to solve this paradox.
Concluding remarks
A novel mechanism of ETH resistance has been discovered in M. tuberculosis which demonstrates that mycothiol plays a role in the pro-drug activation by the ethA-encoded mono-oxygenase. As mono-oxygenases are known to mediate detoxifying reactions, it is reasonable to assume that mycothiol plays a role in other, as yet unidentified, detoxifying reactions. This resistance is a loss of function and is consistent with mycothiol playing a role in the ETH activation process. The requirement for mycothiol in the inhibition of mycolic acid biosynthesis by ETH supports the model that ETH, upon activation with EthA, forms an adduct with NAD, which subsequently inhibits InhA. We hypothesize that novel drugs that bypass this activation step and target InhA directly should be developed as they could lead to the killing of M. tuberculosis cells.
Interestingly, the first study by Dubos and Middlebrook (1947) of a medium to grow tubercle bacilli (now referred as Middlebrook 7H9 with OADC supplement) did not add beef liver catalase. A few years later, the discovery by Middlebrook that INH-resistant mutants of M. tuberculosis were catalase-negative (Middlebrook, 1954;Middlebrook et al., 1954) initiated the need to add catalase to the media to isolate and grow M. tuberculosis INH-resistant strains. Our study shows that there exists at least one class of INH-or ETH-resistant M. tuberculosis mutants that would not readily grow on media without catalase and may provide an explanation why certain clinical isolates are difficult to grow. Furthermore, the finding that three mshA mutants, when complemented with a wild-type copy of mshA, still possessed a low-level resistance to INH but were fully sensitive to ETH suggests that these mutants must have acquired a secondary mutation. Therefore, there may exist addi-
Bacterial strains, plasmids, phages and media
The M. tuberculosis strains (H37Rv, CDC1551 and Erdman) were obtained from laboratory stocks. The strains were grown in Middlebrook 7H9 medium (Difco) supplemented with 10% (v/v) OADC enrichment (Difco), 0.2% (v/v) glycerol and 0.05% (v/v) tyloxapol. The solid medium used was the same as described above, with the addition of 1.5% (w/v) agar. The plasmids pMV261 and pMV361 were obtained from laboratory stocks. Hygromycin was used at concentrations of 50 mg l -1 for mycobacteria and 150 mg l -1 for Escherichia coli. Kanamycin was used at concentrations of 20 mg l -1 for mycobacteria and 40 mg l -1 for E. coli.
Isolation of INH-and ETH-resistant spontaneous mutants
Mycobacterium tuberculosis H37Rv mutants were isolated from non-mutagenized cultures grown in the media described above. The cultures were incubated by shaking at 37°C to late log phase. Ten-fold serial dilutions were then plated on agar plates (media described above) containing INH (0.2 mg ml -1 ) and ETH (5 or 10 mg ml -1 ). The plates were then incubated at 37°C for 6 weeks.
Determination of NADH and NAD + cellular concentrations
Mycobacterium tuberculosis strains were grown to log phase. The cultures (12 ml) were spun, and the cell pellets were re-suspended in 0.2 M HCl (1 ml, NAD + extraction) or 0.2 M NaOH (1 ml, NADH extraction). After 10 min at 55°C, the suspensions were cooled to 0°C, and neutralized by adding 0.1 M NaOH (1 ml, NAD + extraction) or 0.1 M HCl (1 ml, NADH extraction). After centrifugation, the supernatants were collected, filter-sterilized and frozen. The concentration of NAD + (or NADH) was obtained by measuring spectrophotometrically the rate of 3-[4,5-dimethylthiazol-2-yl]-2,5diphenyltetrazolium bromide reduction by the yeast type II alcohol dehydrogenase in the presence of phenazine ethosulphate at 570 nm, which is proportional to the concentration of nucleotide (Leonardo et al., 1996;San et al., 2002).
Quantification of the inhA expression levels
Total RNA extractions, cDNA synthesis and quantitative PCR with molecular beacons were performed in triplicate, as described previously . inhA levels were normalized to sigA expression.
Quantification of mycothiol contents
The M. tuberculosis strains (45 ml) were grown to stationary phase for 2 weeks. Samples (9 ml) were transferred into conical tubes and centrifuged. The cell pellets were re-suspended in either 0.5 ml of mBBr reagent (20 mM HEPES pH 8 + 2 mM monobromobimane in acetonitrile/ water 1/1, v/v) or 0.5 ml of NEM reagent (20 mM HEPES pH 8 + 5 mM N-ethylmaleimide in acetonitrile/water 1/1, v/v)). The suspensions were heated at 60°C for 15 min and spun down. The supernatants (0.5 ml) were treated with 5 M methane sulphonic acid (2 ml) and frozen. The samples were subjected to HPLC analysis as described earlier (Newton et al., 2000b).
Quantification of the total thiol concentration
Mycobacterium tuberculosis strains were grown to stationary phase and spun down. The cell pellets were washed with PBS and then re-suspended in 1 ml of PBS. Glass beads were added (0.2 ml), the suspensions were lysed using the Thermo Scientific FastPrep machine (45 s, speed 6, three times) and spun down, and the supernatants were filtersterilized. The total thiol concentration was obtained using Ellman's reagent by measuring spectrophotometrically, at 412 nm, a 1 ml solution containing 50 mM Tris (pH 8.0), 5 mM 5,5′-dithiobis(2-nitrobenzoic acid) (10 ml), and the lysate to quantify (e412 nm 2-nitro-5-thiobenzoate anion is 14 150 M -1 cm -1 ).
Construction of the DmshA strains
Mycobacterium tuberculosis mshA was replaced by a hygromycin cassette using the specialized transduction system previously described (Bardarov et al., 2002). Briefly, a 1 kb region flanking the left and right sides of mshA was PCRamplified from M. tuberculosis genomic DNA using the following primers (the cloning sites are underlined): LL TTTTTTTTCCATAAATTGGGGGCCGCGCTGACCTCAC TG, LR TTTTTTTTCCATTTCTTGGGACGGCGCTGGGCGATCA AC, RL TTTTTTTTCCATAGATTGGCCTGGTAGCGGTGGGCAA GC, RR TTTTTTTTCCATCTTTTGGGCGGGCCGATCGCGACC TTG.
The PCR fragments were cut with Van91I and cloned into p004S. The resulting cosmid was sequenced before digesting with PacI. The linearized cosmid was ligated to the PacIcut shuttle phasmid phAE159 and the resulting phasmid, phAE222, was packaged in vitro (Gigapack II, Stratagene). High-titre phage lysates were used to transduce M. tuberculosis H37Rv, CDC1551 and Erdman as described previously (Vilcheze et al., 2006). The plates were incubated at 37°C for 8 weeks. The transductants were checked for the deletion of mshA by Southern analysis (the genomic DNA of the transductants was cut with BglII and probed with the right flank of mshA) (Fig. S1).
Complementation of the mshA mutants
The wild-type M. tuberculosis mshA gene was amplified from M. tuberculosis chromosomal DNA using the following primers: mshAF CGGCAGCTGTTCGGTTCCTGCAAGGAT GG (PvuII site underlined), mshAR GCGGAATTCTCGGC AAGGAGGAAGTCACG (EcoRI site underlined). The PCR product was digested with PvuII and EcoRI and ligated to the replicative E. coli mycobacterial shuttle vector pMV261 (Stover et al., 1991) (Stover et al., 1991) restricted by PvuII and EcoRI.
The mshA mutant strains were then transformed with the plasmid pMV361::mshA or pMV261::mshA, using the following protocol. The strains (20 ml of cultures) were grown at 37°C to an OD600 ª 0.8, washed twice with a 10% aqueous glycerol solution and re-suspended in 0.4 ml of a 10% aqueous glycerol solution. The cell suspensions (0.175 ml) were added to the plasmid (2 ml) and electroporated (2.5 kV, 25 mFd, 1000 W). Medium (1 ml) was added, and the suspension was incubated at 37°C for 24 h and plated on Middlebrook plates containing kanamycin (20 mg l -1 ). The plates were incubated at 37°C for 6 weeks.
Analysis of FAMEs and MAMEs
Mycobacterium tuberculosis strains were grown to log phase, diluted to an OD600 ª 0.3, treated with INH (0.5 mg l -1 ) or ETH (15 mg l -1 ) or no drug for 4 h, and then labelled with [1-14 C]acetate (10 mCi) for 20 h at 37°C. The cultures were spun down and washed once with water. The cell pellets were saponified, methylated, extracted and analysed by TLC using hexane/ethyl acetate 95/5 as the elution system (three elutions were performed) (Vilcheze and Jacobs, 2007a).
EthA enzymatic activity assay
The his-tagged EthA was produced, as previously described . The activity of EthA was determined by monitoring the absorbance decrease of NADPH at 340 nm (e340 nm = 6.22 mM -1 cm -1 ). All the reactions were catalysed bỹ 1 mM EthA and performed in 50 mM Tris/HCl, pH 7.5. Double reciprocal plots were used to determine the kcat of the oxidation of NADPH. For measuring the effect of mycothiol, reaction mixtures contained 200 mM NADPH and varying mycothiol concentrations. | 7,399 | 2008-07-21T00:00:00.000 | [
"Medicine",
"Biology"
] |
Transcription Factor GFI1B in Health and Disease
Many human diseases arise through dysregulation of genes that control key cell fate pathways. Transcription factors (TFs) are major cell fate regulators frequently involved in cancer, particularly in leukemia. The GFI1B gene, coding a TF, was identified by sequence homology with the oncogene growth factor independence 1 (GFI1). Both GFI1 and GFI1B have six C-terminal C2H2 zinc fingers and an N-terminal SNAG (SNAIL/GFI1) transcriptional repression domain. Gfi1 is essential for neutrophil differentiation in mice. In humans, GFI1 mutations are associated with severe congenital neutropenia. Gfi1 is also required for B and T lymphopoiesis. However, knockout mice have demonstrated that Gfi1b is required for development of both erythroid and megakaryocytic lineages. Consistent with this, human mutations of GFI1B produce bleeding disorders with low platelet count and abnormal function. Loss of Gfi1b in adult mice increases the absolute numbers of hematopoietic stem cells (HSCs) that are less quiescent than wild-type HSCs. In keeping with this key role in cell fate, GFI1B is emerging as a gene involved in cancer, which also includes solid tumors. In fact, abnormal activation of GFI1B and GFI1 has been related to human medulloblastoma and is also likely to be relevant in blood malignancies. Several pieces of evidence supporting this statement will be detailed in this mini review.
The C-terminal domain is formed by a highly conserved region with six C2H2 zinc fingers. Fingers 1, 2, and 6 are required for protein interaction, whereas fingers 3-5 are necessary to bind DNA at an AATC containing motif [TAAATCAC(T/A) GC(A/T)] (10,11).
Between both domains, there is a less-characterized region that completely differs in both proteins, the function of which is still unknown.
This region is responsible for the size difference in both proteins: GFI1 has 422 amino acids (55 kDa), while GFI1B consists of 330 amino acids (37 kDa, CCDS6957) ( Figure 1B). There is also a short 284 amino acid GFI1B isoform (CCDS48049) (Figure 1C) that lacks the first two zinc-finger domains as a result of an alternative splicing, skipping exon 5 (ENST00000372123.4).
Although GFI1 and GFI1B are similar in structure and share functional mechanisms, they show distinct cell expression patterns. Both GFI1 and GFI1B have an important role in the endothelial cell to hematopoiesis transition, the process by which endothelial cells become blood cells during the third wave of blood development. This generates the first hematopoietic stem cells (HSCs) in the intraembryonic aorta-gonad-mesonephros region, silencing the endothelial program. Interestingly, the expression pattern of both genes is different: Gfi1 is specifically expressed within the dorsal aorta in endothelial cells and cells within emerging intra-aortic hematopoietic clusters, whereas Gfi1b expression is more associated with the fully formed intraaortic hematopoietic clusters (12). This suggests that although both proteins can apparently compensate for the loss of one gene by the other, they play unique differential roles in vivo.
Knockout (KO) mice have shown that Gfi1 is essential for neutrophil differentiation (13,14); consistently, in humans, severe congenital neutropenia is associated with GFI1 mutations (15). Gfi1 is also required for B and T lymphopoiesis (Figure 2). Besides being expressed in the hematopoietic system, Gfi1 is also expressed in precursors of sensory neurons, the retina, specific lung cells, and in the central nervous system (16).
GFI1B is expressed in HSCs, common myeloid and megakaryocyte/erythroid progenitors, and erythroid and megakaryocytic lineages. Moderate levels of expression are also found in immature B-cells, a subset of early T-cell precursors (17,18) and peripheral blood granulocytes and monocytes. GFI1B is very low or absent in lymphoid-primed multipotent, common lymphoid, early thymocyte, and granulocyte-monocyte progenitors (19) (Figure 2).
GFI1B role in erythropoiesis is crucial for expansion and differentiation of erythroid progenitors. Gfi1b deficiency in mice results in embryonic lethality by day E15 (20). Gfi1b null embryos display a delay maturation of erythrocytes and die because of the lack of enucleated erythrocytes. Gfi1b KO mice also fail to develop megakaryocytes but have arrested erythroid and megakaryocytic precursors in the fetal liver. Loss of Gfi1b in adult mice increases the absolute numbers of HSCs that are less quiescent than wildtype ones (21), ablates erythroid development at an early progenitor stage, and blocks terminal megakaryocytic differentiation in the polyploid promegakaryocytes that fail to produce platelets (22). Lineage-specific KOs have shown that the role of Gfi1b in megakaryocyte polyploidization and motility can be achieved by inhibiting p21 activated kinases, and the effect on proplatelet formation, by controlling α-tubulin expression, which is highly decreased in the KO cells (23).
The short GFI1B variant seems important for erythroid development and to show a stronger repressor activity than the long one (24). Instead, hyperexpression and knockdown experiments in human primary cells have shown that the long GFI1B is required for megakaryopoiesis, but not the short form (25), which may have an inhibitory effect on platelet production (26) (Figure 2).
It has also been observed that the absence of Gfi1 and Gfi1b expression produces a severe block in B-cell development. On the contrary, in vitro overexpression of Gfi1b inhibits myeloid differentiation of a cultured myelomonocytic cell line (4).
However, GFI1 and GFI1B are not just different in terms of cell specificity, as demonstrated by sequence interchange (27). Consistent with this, Gfi1 hyperexpression can rescue erythroid and early megakaryocytic differentiation from adult mouse Gfi1b KO, but terminal megakaryocyte maturation defect cannot be compensated by Gfi1 or Gfi1b hybrid containing the Gfi1 N-terminal portion (22). These differences are more patent in the inner ear than in hematopoietic cells (27).
Besides its repressive function, GFI1B may directly or indirectly activate gene expression; for example, MEF2C upregulation in T-cells by GFI1B binding to this gene promoter and MLLT3, whose expression correlates with GFI1B hyperexpression or functional block (34).
A mutation described by Stevenson et al. (43), which consists of a single-nucleotide insertion in GFI1B exon 7 (c.880_881insC) that produces a frameshift and protein premature termination (His294fsTer307, g.135866324dupC in GRCh37/hg19 assembly) ( Figure 1B: 7), disrupts the integrity of the fifth zinc finger and eliminates the coding sequence for the sixth zinc-finger domain. The mutated protein cannot bind to DNA and loses its repressor activity on target genes. This mutation was described in patients with an inherited dominant bleeding disorder with moderate macrothrombocytopenia and anisopoikilocytosis. Platelets of affected patients had substantial reductions in the α-granule components, P-selectin, and Fg, and somewhat less glycoproteins GPIba and GPIIIa.
Next, Monteferrario et al. (44) detected a nonsense hereditary mutation (c.859C>T) in the amino acid 287 (Gln287Ter, g.135866303C>T in GRCh37/hg19) (Figure 1B: 6) in a family with similar clinical features. This mutation also produces a stop codon and a truncated protein that lacks 44 amino acids at the carboxyl terminus. Normal levels of mRNA are expressed in the mutant allele, but the truncated protein is inactive as a repressor.
The condition produced by both mutations was considered a type of gray platelet syndrome, as platelets look gray under optical microscopy owing to their lack of alpha granules. However, the variable and, in general, less severe α-granule deficiency and the red cell phenotype differ from the classic gray platelet syndrome (45). This has led to classify these conditions as bleeding disorder platelet-type 17 (OMIM #187900).
Recently, another GFI1B mutation Gly272fsTer274 (c.814+ 1G>C, g.135865294G>C, GRCh37/hg19) (Figure 1B: 5), generating a truncated protein, has been associated with congenital macrothrombocytopenia linked to α-granule deficiency. This mutation affects 58 amino acids of the C-terminal, which results in complete deletion of zinc finger 5. Platelets in these patients present an increased level of CD34 expression and decreased levels of thrombospondin-1 (46).
GFI1B mutations have also been found in two patients from unrelated families with a combined alpha-delta storage pool deficiency, with reduction of α and dense (δ) granules. Both cases had thrombocytopenia. One of them had also anemia and a granulocytic left shift that corrected itself spontaneously in a few months. One patient also had urogenital and heart abnormalities and developed seizures. The other patient had persistent ductus arteriosus. A whole-exon sequencing of the first patient demonstrated a de novo heterozygous GFI1B nonsense mutation-Lys265Ter (c.793A>T, g. 135865273A>T in GRCh37/ hg19) (Figure 1B: 4). Targeted GFI1B sequencing in the second case revealed a homozygous mutation at this gene-Leu308Pro (c.923T>C, rs775963992, g.135866367T>C in GRCh37/hg19) (Figure 1B: 8). These mutations too were located at zinc-finger domains 4 and 6, respectively. It is still unclear if the non-hematological congenital abnormalities observed in these patients were related to the GFI1B mutations (47).
A GFI1B sequence study of 529 patients with atypical platelet phenotypes also allowed for the identification of seven cases with non-synonymous single-nucleotide polymorphism affecting this locus, which was absent in 11,216 unaffected individuals. Four of them were located at zinc fingers 1 and 2, highlighting the importance of these domains. One of these variants was a homozygous Cys168Phe (c.503G>T, rs527297896, g.135863848G>T in GRCh37/hg19) (Figure 1B: 1) that was associated with abnormal function and reduced platelets in an individual of Asian Indian ancestry; however, the variant was not found in 321 Indian Asian genomes (25).
All these mutations demonstrate the fundamental role of GFI1B in the biogenesis of human platelets.
GFi1B AnD MALiGnAnCY
Mutations that block differentiation and those that promote cell survival or proliferation have been considered necessary for developing acute leukemia (48). Therefore, the major role of GFI1B in hematopoiesis makes it a good candidate to be involved in blood cancers (49). Besides its role in cell differentiation, GFI1B has been reported to possess proapoptotic activity when expressed in human CD34+ cells (50); disruption of this function may also contribute to leukemogenesis.
In keeping with this, GFI1B expression has been found in high levels in some primary CD34+ human acute myeloid leukemias (AMLs) and leukemic cell lines. GFI1B silencing in these cell lines decreased proliferation and increased apoptosis (51). In chronic myeloid leukemia (CML), other myeloproliferative neoplasms (MPNs), AML, and B-lymphoblastic leukemias, GFI1B expression has also been observed to increase. Remarkably, the short GFI1B isoform is highly expressed in the leukemic cells. However, both isoforms were higher in CML after treating with tyrosine kinase inhibitors (52). Simultaneous silencing of BCR-ABL1 and GFI1B in CML cells showed a cooperative antiproliferative and proapoptotic effect in the K562 CML cell line (53). In this context, the short form may be acting as a repressor over the long species. The caveat of these experiments is the low number of patients and controls analyzed.
JAK2 V617F mutation is frequent in MPNs but has also been found in the general population (0.14-0.2%). Consistent with the importance of the GFI1B downstream sequence in its regulation and the role of this gene in blood cancer, a genome-wide association study identified a C>G variation in this region (rs621940, g.135870130C>G in GRCh37/hg19), which is associated with MPN patients and normal carriers of JAK2 V617F, but not with normal unmutated individuals (p = 1.9 × 10 −7 ) (54). Similarly, we reported GFI1B promoter mutations in human leukemias. However, no clear link has yet been established between these mutations and hematopoietic neoplasms (55).
Gfi1b repression of oncogene Meis1 also suggests that GFI1B is involved in leukemia when its repressor function is abolished (7).
In light of these evidences, we described a dominant-negative GFI1B mutation, Asp262Asn (c.784G>A, g. 135865264G>A in GRCh37/hg19) (Figure 1B: 3), associated with transition to AML from antecedent myelodysplastic syndrome (MDS). This mutation promotes the survival of normal and MDS human bone marrow CD34+ cells and skews lineage output of these normal adult primary cells and human cord blood common myeloid progenitors toward myeloid cells. This mutant works mainly through master hematopoietic regulator SPI1 (PU.1) (34). In agreement with this, SPI1 is upregulated in JAK2 V617F-positive MPNs (56).
Similar to GFI1 (57, 58), GFI1B has been linked to lymphomagenesis. TF TCF3 (E2A) is involved in T-cell human leukemias, and Tcf3 KO develops T-cell lymphoma. Ectopic expression of Tcf3 in this context induces growth arrest and apoptosis, together with direct Gfi1b upregulation. Gfi1b-increased expression in Tcf3−/− cells produces similar consequences. Therefore, consistent with the importance of GFI1B block in myeloid leukemias, TCF3 inhibition in T-cell malignancies may work through GFI1B downregulation (33). Another piece of evidence of the implication of GFI1B reduction in lymphoma comes from its relation with B-cell lymphoma 6 (BCL6), a gene frequently expressed in T-and B-cell lymphomas. BCL6 chromosomal rearrangements and/or mutations are associated with human lymphomas, up to 73% in diffuse large B-cell lymphoma (59). Gfi1b has been identified as a retrovirus integration site in diffuse large B-cell lymphomas of mice containing the human BCL6 transgene, but this was not the case in retroviral injected non-transgenic control lymphomas. Again, in this context, Gfi1b expression was decreased in the first lymphomas compared with the latest.
Unlike blood malignancies, GFI1B and GFI1 activation has been associated with solid tumors, in particular medulloblastoma. In most cases, GFI1B/GFI1 mutually exclusive abnormal expression was produced by structural variants, showing that abnormal expression of GFI1B is definitely linked with human cancer (61)(62)(63). Further investigation may establish a wider role in blood or solid malignancy.
Other genes have been related to malignancy both when upregulated and functionally inactivated, including key regulators of hematopoiesis CEBPA (64) and SPI1 (PU.1) (56,65,66). This may be the case with GFI1B too. However, more data will be needed to get a full insight into the mechanisms involved in GFI1B's role in cancer, particularly in the blood setting and the importance of its two isoforms.
AUTHOR COnTRiBUTiOnS
All authors listed have made substantial, direct, and intellectual contribution to the work and approved it for publication.
FUnDinG
This work was partially supported by the "Hay Esperanza" foundation. | 3,134.6 | 2017-03-28T00:00:00.000 | [
"Biology",
"Medicine"
] |
Moving Horizon Planning for Human-Robot Interaction
The collaboration and interaction between humans and robots intensify with ongoing research and industry needs. Robots require a motion planner that contributes to a safe environment for humans. This paper provides the online trajectory planner Moving Horizon Planning for Human-Robot Interaction (MHP4HRI), customizable for various robots, considering obstacles and humans in their environment. The planner generates motion commands in a moving horizon manner, similar to Model Predictive Control. This enables robots to react to dynamic changes in the environment in real-time. Descriptions of the planner and the underlying algorithms are given, as well as details about the provided framework regarding the benefits and usage for the community. Furthermore, we aim to provide a growing framework with new features in the future regarding the optimization and interaction with the environment, especially humans. The code, implemented mainly in C++ for the Robot Operating System (ROS), is available at GitHub: https://github.com/rst-tu-dortmund/mhp4hri.
probabilistic optimization framework that samples noisy trajectories and optimizes them, CHOMP is a gradient-based optimization method that optimizes an initial trajectory by gradient descent.Both approaches generate a global trajectory for the complete planning horizon until reaching the goal.Therefore, the required computation time is high, and a reaction to dynamic changes in the environment is not possible.On the other hand, local approaches optimize a robot trajectory for a shorter planning horizon and repeat this optimization frequently [29,31].It is required to react to changes from obstacles and humans moving in the robot's workspace in time.Therefore, the local optimization must be executed in realtime in these environments.We defne 10 Hz to be real-time for this application.A drawback of local planners is that they may converge to local extremes that do not lead to the goal and can lead to a standstill in front of an obstacle if no other solution is found.A combination of local and global approaches overcomes this drawback and leverages the benefts of global and local optimizers [12].
The work at hand focuses on a local optimization method that is executed in real-time.We contribute a framework for Moving Horizon Planning (MHP) in dynamic environments with an extended consideration of humans and human motions in the robot environment.Figure 1 shows a visualization of the MHP4HRI and a real-life application with a human interfering with the robot's motion.To allow a wide usage of MHP4HRI, the structure is modular and customizable for various robots and environments, allowing the Human-Robot Interaction (HRI) community a tailored application for their needs.MHP4HRI ofers advantages compared to other open-source frameworks for solving optimization problems like Huge Quadratic Programming [7], planners of the Open Motion Planning Library [26], Trajectory Optimization [25], Augmented Lagrangian Trajectory Optimization [10], Horizon [20] or Fast constrained optimal control problem solver for robot Trajectory Optimization and control [28].Compared to these approaches, our contribution is an extensive consideration of the environment, while most other approaches focus on the robot's motion or the optimization problem.Other approaches that explicitly consider humans in the environment but do not publish their code use trajectory optimization with repulsive forces for collision avoidance [14], human motion predictions and an optimization-based safe interval path planning [27] or intention-aware motion predictions for an optimization-based planner [17].
MHP4HRI provides an open-source planner, environment, and code structure that enables all HRI community members to use it or adapt it to their needs by publishing the complete modular C++ code for ROS with detailed documentation.
PURPOSE
MHP4HRI delivers a local planner utilizing a MHP approach for robot manipulators.The framework includes a separate module for considering dynamic obstacles and humans, including forecasting their motions.Overall, it is possible to use our planner in dynamic environments to plan and execute robot trajectories optimized for various objectives and avoid collisions with obstacles.
The following paragraphs describe key aspects of the framework that help accomplish various problems, and Figure 2 shows a simplifed overview of the framework structure for a human-robot scenario.
Optimization Problem.The frst key aspect of the framework is formulating the optimization problem regarding the robot trajectory planning task.Therefore, the framework formulates the planning problem in a dynamic environment as a Nonlinear Programming (NLP) problem.A solution for a NLP problem optimizes a cost function subject to diferent constraints.The cost function considers various objectives, like the proximity of the robot to the goal state and minimal commanded joint velocities.Furthermore, adaptions of the cost function accomplish a proactive collision avoidance of obstacles inside the workspace.A NLP problem also considers constraints, like joint angle, velocity, and acceleration limits of the robot.In order to reduce the occurrence of collisions, the framework extends the constraints by collision constraints that prohibit robot trajectories from being too close to environmental obstacles.The NLP problem is solved by the Interior Point Optimizer (IPOPT) [30].Furthermore, MHP4HRI includes a hypergraph formulation of the NLP problem to reduce the computation time of the optimization [21,22].
A description, including mathematical formulations of the optimization problem, is given in Section 3.
Human Motion Forecasts and Uncertainties.The second key aspect treats the consideration of humans in the robot's environment.One part of our framework is a motion forecast module providing future poses of the human in the robot workspace.These future poses are directly considered during MHP, so the planner generates a robot trajectory that proactively avoids collisions with humans.Based on current and previous human pose data, a state estimation of a customized human skeleton and a subsequent extrapolation of estimated skeleton joint angles allow a simple but efcient forecast [18].Furthermore, an online uncertainty estimation method determines the current extrapolation uncertainty and allows its consideration in the planning process [19].A machine learning prediction approach is included in the framework as a further forecast option [1].Further details of the methods and integration into the framework are given in Section 3.
CHARACTERISTICS
This section frst describes the mathematical formulations of key aspects of the framework and second introduces the framework's installation and dependencies.
Mathematical formulation
The implementation of the framework utilizes various algorithms and methods introduced in diferent research papers and combines them in one framework to allow the community access to these methods.Therefore, this subsection briefy summarizes essential formulations and references related papers for further details.
In general, MHP4HRI ofers a planning algorithm for robot manipulators in dynamic environments that can be connected to various robots.The main diference to Model Predictive Control (MPC) approaches is that MHP4HRI does not control the robot directly but sends commands for the velocity controllers of the robot joints.
Similar to MPC approaches, MHP4HRI also relies on optimization methods.The discrete optimal control problem is formulated as a NLP problem with a cost function and inequality constraints h: u,x subject to The optimization variables are the control inputs, corresponding to the robot joint velocities, u ∈ R , and the states, corresponding to the robot joint angles, x ∈ R for a robot with degrees of freedom.The time step Δ = +1 − defnes the length between time steps of the discretization grid with 0 and horizon length .The discrete states x with = 0, 1, . . ., and controls u with = 0, 1, . . ., − 1 are the result of a full discretization transcription method [22].
The cost function = G (x) + C (u) + O (x) consists of three parts concerning the quadratic costs G and C for the distance to the goal state and the controls, as well as the weighted costs O for high proximity to collisions with obstacles or self collisions.The proximity costs are defned as: with the robot collision objects R ⊂ ℜ, the obstacle collision objects O ⊂ , the distance function , and an adaptation of the potential function () from Mohri et al. [16].The adaption decouples the dependency of the potential function at = 0 from the threshold value 0 and allows a separate scaling for each obstacle [13].
The inequality constraints h consider the joint limits of the robot, the joint velocity limits, and the acceleration limits.Furthermore, it is possible to activate collision constraints that prohibit the robot from getting too close to obstacles or itself.Note that more constraints increase the computational burden for solving the problem.For further information about the obstacle representations and variations, please refer to [13].
If forecasts of human motions are available, it is possible to consider them in Equation ( 2) or in collision constraints in h.Otherwise, MHP4HRI applies a snapshot method and assumes that every obstacle is static during each planning cycle.To forecast human motions, MHP4HRI ofers diferent opportunities.Based on a neural network approach, it is possible to use a trained network to predict human motions [1].Alternatively, a reduced human skeleton model allows the extrapolation of joint angle motions.An inverse kinematic determines the current joint angles of the skeleton model based on the current human body part poses that can be measured, for example, with a motion capture system.Diferent state estimations for the joint velocity and acceleration fll the state vector for the extrapolation of each skeleton joint [18].Since the extrapolation of human motions exhibits an error compared to the ground truth, an extension of the extrapolation approaches is included.This extension estimates the uncertainty of the extrapolation online with Gaussian Mixture Models [4] and applies this to the representation of the human in the workspace [19].
Installation and Dependencies
In order to use MHP4HRI, several installation options are available, and every user can customize the installation.We provide two main options for installing MHP4HRI, which consists of multiple ROS packages.The frst one is a Docker installation that provides the basic functionality of MHP4HRI and the possibility for customized tests and experiments.The Docker installation is independent of the operating system.
The second option is the manual installation of MHP4HRI.This option is recommended to optimize the performance, customize multiple aspects, or apply it to a real robot.For more details about the installation, please refer to the wiki of the MHP4HRI repository.
Multiple dependencies are required during the installation, which are explained in detail in the wiki.The frst dependency is related to solving the NLP problem.MHP4HRI uses IPOPT as a solver for the NLP problem [30].IPOPT relies on diferent linear solvers like the Harwell Subroutine Library (e.g., Multifrontal Approach 27 [6] or 57 [5]) or MUMPS [2].Other dependencies include external libraries for linear algebra, like Eigen [9] or Armadillo [23] or the prediction approach for human motions [1].
Requirements for the hardware are not specifed in detail since MHP4HRI runs on various operating systems and hardware.Nevertheless, the performance of the optimization solution is highly dependent on the hardware and varies.The wiki provides information about the hardware used during the development and constitutes hardware combinations solving the problem in time.Depending on the number of obstacles and background processes, our test systems also temporarily exceed the solving time for real-time capability.
CODE
This section provides an overview of the code structure and maintenance plan.Please note that MHP4HRI is licensed under the GNU General Public License v3.0 and is free for everyone [8].If you use MHP4HRI in your work, please cite the paper at hand.
Code structure
The GitHub repository of MHP4HRI (https://github.com/rsttu-dortmund/mhp4hri) is extensive and consists of multiple packages with various purposes and a wiki, recommended for every user to check for more details.
Signifcant parts of the code are written in C++ and integrated into ROS to provide easy usage for diferent robots and simulations.Small parts, primarily related to the machine learning part of human motion prediction [1], are written in Python but still integrated into ROS.MHP4HRI mainly comprises three ROS packages, each required for diferent parts of the solution: mhp_planner.This package provides functions related to the optimization problem and its solution strategy.Knowledge about the robot, or in general, the plant, the task, and the environment is required to defne and solve the optimization problem.Therefore, this package defnes interface classes for these elements that allow for implementing inheritance classes for problem variations.The interface classes thereby defne required functions that are equal for all derived classes or declare virtual functions that have to be implemented by derived classes.Furthermore, mhp_planner includes the hypergraph strategy and the integration of IPOPT as a solver for the NLP problem.An adaptable planning frequency of 10 Hz is applied to the local planner to reach real-time capability.mhp_robot.This package provides functions related to the robot and the environment.Like the mhp_planner package, interface classes determine functionalities for derived classes regarding robot kinematics, control, dynamics, and collision objects in the robot workspace.Further elements in this class treat the environment and adaptions like human motion forecasting and uncertainty estimation of extrapolation errors.A workspace-monitor updates the environment defnition of the robot iteratively.Therefore, the workspace-monitor executes various processes related to the environment in a fxed sequence.For example, the processes implement functions like updating obstacles and human positions based on measurements or estimating and extrapolating human motions.To ensure an actual environment in each planning iteration of the local planner, the workspace-monitor updates with a higher frequency of 25 Hz.mhp_robot_ur10_example.This package provides an example application of the mhp_robot and mhp_planner packages for the Universal Robots UR10 (UR10).mhp_robot_ur10_example is a ROS metapackage and groups diferent packages related to the UR10 and the planning problem.In particular, packages related to the gazebo simulation, description of the UR10, and drivers for the real robots are integrated into this metapackage [3,15].To utilize the functions of the planner and the environment packages, derived classes for the interface classes of the mhp_planner and mhp_robot packages are implemented.
Maintenance
The maintenance plan for MHP4HRI intends to provide a stable trajectory planner that increases functionality and efciency over time.Therefore, the code is hosted and maintained on the GitHub account of our institute to guarantee accessibility in the future, independent of single researchers.Due to a running research project, the code is continuously extended and improved and will also take issues and feature recommendations of the community into account.
USAGE
This section provides information about the usage and possible adaptions of MHP4HRI.Furthermore, exemplary experiments and results are presented to provide an insight into the capabilities.
Simulation and Reality
The simulation is ready to use out-of-the-box for the UR10 simulation and allows frst insights and tests with the planner and its possibilities.As usual for ROS, the user starts a launch fle that starts the simulation and the planner.The simulation allows different modes to test diferent scenarios and environments without risks of damaging the robot and the environment or harming humans.Therefore, two demonstrations extend the repository to show the planner's capabilities and environment module.The demonstrations allow testing the planner functions with a human inside the workspace and with two diferent workspace modes.One option is the simulation of human motions by defning the start and end states of the skeleton angles.Nevertheless, simulated human motions are unrealistic and do not consider errors due to measurements or sudden motion changes.Therefore, a second option replays a recorded human motion from a motion capture system.An Optitrack motion capture system records a demonstrative motion that is transformed into pose messages in the ROS network.The human poses are fnally recorded and can be replayed in the simulation.For more details of the demonstrations and required commands, please refer to the usage page of the GitHub wiki.
MHP4HRI also provides a launch fle to start the planner with a real UR10 robot and control it.The option to frst check the planner with a simulation and then apply it to a robot allows everyone a safe and easy start and brings HRI into the real world.A live demonstration of the planner with a UR10 is available on the wiki.
Customize MHP4HRI
A modular structure allows users to extract only parts of the framework they require for their work.This structure allows for adapting the framework to other robots, use-cases, and adding diferent obstacles and humans.The mhp_robot_ur10_example package represents an example of the implementation for a specifc robot.These adaptions are also possible for other robots, and the user can customize MHP4HRI to their needs.Furthermore, changing parts of MHP4HRI to adapt it to varying problems is possible.For example, the cost functions or constraints can be changed to consider further aspects like energy consumption or environmental impacts.
Experiments and Results
This section presents experimental results to show the capabilities of MHP4HRI concerning uncertainties of human motions.Therefore, we apply MHP4HRI in the UR10 simulation with a recorded human motion and compare the results with uncertainty consideration against the Safe Human-Robot Coexistence and Collaboration through Reachability Analysis (SaRA) approach [24] that stops the robot by intersections with human occupancy regions.Due to the potential functions and the uncertainty representation, MHP4HRI plans a trajectory around the motion of the human and does not stop the robot.
MHP4HRI leads to a faster motion of the robot and reduces the time to complete the robot task from 11.7 s for the SaRA approach by nearly 25 % to 8.8 s with MHP4HRI.A further criterion, especially in HRI, is safety.The minimum distances between the robot and human body parts (see Fig. 1a) conclude the experiments during the motion.Note that this is not an extensive safety analysis and only provides a brief insight into the performance of MHP4HRI in dynamic environments.MHP4HRI increases the averaged minimum distances between the robot and the human rounded with factor 3. A reason for the high diference besides the proactive collision avoidance is that we use the recorded motion of a human.Even if the robot is close to the human and SaRA already stopped the robot, the human moves further and gets closer.Nevertheless, this brief comparison shows possible use-cases and capabilities of the planner and its environment module for HRI.For videos and details, please refer to the wiki's experiments section of the usage page.
FUTURE RELEASES
We hope the community will contribute and add recommendations for features and issues for code improvement to MHP4HRI.
On the technical side, we aim to improve the performance of the planner and upgrade the code to ROS2.On the theoretical side, we aim to add various features and extensions.These features include further collocation methods, potential functions, and a task space extension.Furthermore, an extension concerning suboptimal solutions of local planners by diferent initializations and adaptive intermediate goals is planned to increase performance in workspaces with humans [12].Other features include new functions concerning the robot environment, like modeling environments and human motion tracking for data gathering.
(a) Visualization of the MHP4HRI (b) Real situation with a human in the workspace
Figure 1 :
Figure 1: Overview of the MHP4HRI in a real-life application.For full videos, please refer to the wiki at GitHub.
Figure 2 :
Figure 2: Overview of the MHP4HRI structure for a humanrobot scenario. | 4,284 | 2024-03-11T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Phosphomevalonate kinase is a cytosolic protein in humans
In the past decade, a predominant peroxisomal localization has been reported for several enzymes functioning in the presqualene segment of the cholesterol/iso-prenoid biosynthesis pathway. More recently, however, conflicting results have been reported raising doubts about the postulated role of peroxisomes in isoprenoid biosynthesis, at least in humans. In this study, we have determined the subcellular localization of human phosphomevalonate kinase using a variety of biochemical and microscopic tech-niques, including conventional subcellular fractionation studies, digitonin permeabilization studies, immunofluorescence, and immunoelectron microscopy. We found an exclusive cytosolic localization of both endogenously expressed human phosphomevalonate kinase (in human fibroblasts, human liver, and HEK293 cells) and overexpressed human phosphomevalonate kinase (in human fibroblasts, HEK293 cells, and CV1 cells). No indication of a peroxisomal localization was obtained. Our results do not support a central role of peroxisomes in isoprenoid bio-synthesis.— Hogenboom, S., J. J. M. Tuyp, M. Espeel, J. R. and H. Phosphomevalonate reveal a distinct granular pattern, reflecting a peroxisomal localization of AGT.
In the past 10 years, several reports have appeared suggesting a central role of peroxisomes in isoprenoid biosynthesis (1). These reports indicated that many if not most of the enzymes involved in the presqualene segment of the isoprenoid biosynthesis pathway may be located partly or even predominantly in peroxisomes, subcellular organelles implicated in a variety of metabolic processes (2)(3)(4)(5)(6). The isoprenoid biosynthesis pathway supplies cells with intermediates for the biosynthesis of a variety of compounds with important functions in cellular processes. These compounds include, among others, the side chains of heme A and ubiquinone, dolichol, isopentenyl tRNA, and farnesyl and geranylgeranyl moieties used for the isoprenylation of proteins that function in intracellular signaling. In addition to these nonsterol isoprenoids, the pathway produces sterol isoprenoids such as cholesterol, a structural component of membranes and precursor for bile acids and steroid hormones (7).
Isoprenoid biosynthesis starts with three molecules of acetyl-CoA, which in a series of six different enzyme reactions are converted to isopentenyl pyrophosphate, the basic C5 isoprene unit used for the synthesis of all isoprenoids (7). Phosphomevalonate kinase (PMK; EC 2.7.4.2.) catalyzes the fifth reaction of the pathway, which is the phosphorylation of phosphomevalonate to produce pyrophosphomevalonate. Several observations have led to the claim that PMK would be located predominantly in peroxisomes. First, selective permeabilization with digitonin of monkey kidney (CV1) cells revealed latency of endogenous PMK activity similar to that of peroxisomal catalase (CAT), suggesting that both enzymes are localized in the same subcellular compartment (5). Second, (immuno)fluorescence microscopy performed with CHO cells overexpressing a 200-amino acid carboxyl-terminal fragment of human PMK fused to the carboxyl terminus of green fluorescent protein (GFP) revealed a colocalization of this fusion protein with peroxisomal CAT (4). Third, human PMK contains a carboxyl-terminal serine-arginine-leucine (SRL), which matches the consensus peroxisomal targeting sequence type 1 (PTS1), suggesting that PMK may be targeted to peroxisomes via the PTS1-mediated protein import pathway (4,8). This suggestion was reinforced by the finding that the above-mentioned GFP-PMK fusion protein remained in the cytosol when expressed in PTS1 pro-tein import-deficient fibroblasts (4). Transient expression of this fusion protein in fibroblasts deficient in the import of PTS2-containing proteins revealed a punctate (peroxisomal) pattern in immunofluorescence experiments (4). Finally, in some livers of patients with Zellweger syndrome (ZS), a markedly deficient PMK activity was found. Because the loss of peroxisomes, which occurs in ZS cells, leads to mislocalization of peroxisomal enzymes to the cytosol often followed by the inactivation and/or degradation of these enzymes, this latter finding has been interpreted as indicative of a peroxisomal localization of PMK (9).
More recent data, however, do not support a peroxisomal localization of PMK. First, selective permeabilization of rat hepatoma H35 cells with digitonin resulted in a 91% release of PMK activity, similar to the release of the cytosolic marker lactate dehydrogenase, whereas peroxisomal CAT activity was completely retained in the cells after permeabilization, suggesting that PMK is cytosolic (10). Second, we recently found completely normal PMK activity and PMK protein levels in fibroblasts and liver homogenates of patients with a peroxisome biogenesis defect and in liver homogenates of PEX5 knockout mice (11,12). Moreover, we demonstrated that the deficient PMK activities reported in some livers of ZS patients are a result of the bad condition and/or preservation of the livers, rather than of the presumed mislocalization of the protein (11). Finally, in conventional subcellular fractionation studies that we performed with rat liver tissue, cultured human fibroblasts, and HepG2 cells, and in digitonin permeabilization experiments with cultured human fibroblasts, we never were able to demonstrate a peroxisomal localization of PMK activity (our unpublished data).
In summary, one has to conclude from the combined data that it remains unclear whether PMK is a true peroxisomal enzyme under physiological conditions. This prompted us to initiate a thorough study to determine the subcellular localization of human PMK. To avoid inconclusive results with tagged and reporter proteins, we generated highly specific antibodies that recognize the authentic human PMK, enabling localization studies in cells under normal conditions as well as in cells overexpressing human PMK. Using a variety of biochemical and microscopic techniques, we found a cytosolic localization of both endogenous and overexpressed human PMK and no indication of a peroxisomal localization.
Cell lines and culture conditions
Primary skin fibroblasts were obtained from a healthy control subject, from a ZS patient who was a homozygote for an insertion mutation in the PEX19 gene (13), and from a patient homozygous for familial hypercholesterolemia (FHC) (GM00701; Corriel cell repositories). The fibroblasts were cultured in HAM F-10 medium (GIBCO) containing 10% FCS and 1% penicillin/streptomycin in a temperature-and humidity-controlled incubator (95% air, 5% CO 2 as the gas phase) at 37 Њ C. Before experiments, the cells were grown to 70-80% confluence, after which the medium was substituted with HAM F-10 medium containing 10% lipo-protein (cholesterol)-depleted FCS. Experiments were performed after 72 h of culturing in lipoprotein (cholesterol)-depleted medium.
For PMK expression studies, the human embryonic kidney (HEK293) Flp-In and CV1 Flp-In cell lines (Invitrogen) were used and cultured in DMEM (GIBCO) containing 10% FCS, 1% penicillin/streptomycin, and 100 g/ml hygromycin in a temperature-and humidity-controlled incubator (95% air, 5% CO 2 as the gas phase) at 37 Њ C. Before experiments, the cells were grown to 70-80% confluence, after which the medium was substituted with DMEM containing 10% lipoprotein (cholesterol)depleted FCS. Experiments were performed after 24 h of culturing in lipoprotein (cholesterol)-depleted medium.
Fig. 1.
A and B: Subcellular fractions of human fibroblasts derived from a control subject (A) or a Zellweger syndrome (ZS) patient (B) were obtained by Nycodenz equilibrium density gradient centrifugation as described in Materials and Methods. Fractions were analyzed for the cytosolic marker phosphoglucoisomerase (PGI; black bars), the peroxisomal marker catalase (CAT; gray bars), and phosphomevalonate kinase (PMK; white bars). Relative activities were expressed as the percentage of total gradient activity present in each fraction. The pattern of distribution of PMK activity and PMK protein as determined by immunoblot analysis with an affinity-purified antibody raised against human PMK is similar to the pattern of PGI activity. C and D: Human fibroblasts derived from a control subject (C) or a ZS patient (D) were incubated with increasing concentrations of digitonin as described in Materials and Methods. Supernatant (open symbols) and pellet (closed symbols) fractions were analyzed for the activities of the cytosolic marker PGI (squares), the peroxisomal marker CAT (triangles), and PMK (circles). Relative activities were expressed as the percentage of total activity (supernatant plus pellet) present in each fraction. The pattern of latency of PMK activity and PMK protein as determined by immunoblot analysis with an affinity-purified antibody raised against human PMK is similar to the pattern of PGI activity. The bars below the immunoblots indicate the relative digitonin concentrations.
Generation of cell lines stably overexpressing human PMK
The open reading frame of control human PMK cDNA was amplified by PCR from cDNA prepared from human skin fibroblast RNA and ligated as a Bam HI-Xho I fragment under transcriptional control of the cytomegalovirus (CMV) promoter in the pcDNA5/FRT vector (Invitrogen). The entire insert was sequenced to ensure the absence of Taq polymerase-introduced errors.
HEK293 Flp-In cells or CV1 Flp-In cells were cultured in DMEM containing 10% FCS and 1% penicillin/streptomycin. Stable PMK-expressing cell lines were generated by cotransfection of CV1 and HEK293 cells with pOG44 and pcDNA5/FRT-PMK using Lipofectamine Plus reagent in growth medium without Zeocin according to the manufacturer's recommendations (Invitrogen). Forty-eight hours after transfection, hygromycin B was added to the medium to a final concentration of 100 g/ml, and the medium was changed every 3-4 days until hygromycinresistant colonies were evident. Control hygromycin-resistant cell lines were generated by cotransfection with pOG44 and the empty pcDNA5/FRT vector. For expression studies, the HEK293 Flp-In cell lines stably expressing human PMK (HEK-PMK), the CV1 Flp-In cell lines stably expressing human PMK (CV1-PMK), and the control cell lines transfected with empty pcDNA5/FRT (HEK Ϫ or CV1 Ϫ ) were cultured in DMEM containing 10% FCS, 1% penicillin/streptomycin, and 100 g/ml hygromycin. The PMK activity in cells overexpressing human PMK was approximately five times higher than that in the control cell lines.
Subcellular fractionation
For subcellular fractionation studies, cells were cultured in 162 cm 2 Falcon flasks, harvested, and washed three times with PBS and two times with fractionation buffer (0.25 M sucrose, 1 mM EDTA, 10 mM HEPES, and 1 mM phenylmethylsulfonyl fluoride, pH 7.4). Next, the cells were homogenized using a ballbearing cell cracker (EMBL), after which the postnuclear supernatant (PNS; 10 min, 500 g ) was layered on top of a continuous Nycodenz gradient (15-35%) with a cushion of 1 ml of 50% Nycodenz in 0.25 mM sucrose, 5 mM MOPS, 1 mM EDTA, and 2 mM KCl, pH 7.3. Gradients were centrifuged for 2.5 h in a ver-tical rotor (MSE; 8 ϫ 35) at 19,000 rpm ( ف 32,000 g ) at 4 Њ C. After centrifugation, 16-19 fractions were collected from the bottom of the gradient.
Cell permeabilization with digitonin
Cell permeabilization experiments were performed with cells attached to plates essentially as described by Biardi and Krisans (5) with a few modifications. HEK293 and CV1 cells were seeded on 60 mm plates at a density of 3.0 ϫ 10 5 cells/plate and fibroblast cells were seeded at a density of 2.0 ϫ 10 5 cells/plate. After culturing for 1 or 3 days in DMEM or HAM F-10 medium, respectively, containing 10% lipoprotein (cholesterol)-depleted FCS, cells were washed twice with ice-cold KH buffer (50 mM HEPES and 110 mM KOAc, pH 7.2). The plates were then transferred to ice and incubated in KHM buffer (20 mM HEPES, 110 mM KOAc, and 2 mM MgOAc, pH 7.2) containing various concentrations of digitonin (0, 20, 50, 150, 500, or 1,000 g/ml) or, as a control, 0.1% (w/v) Triton X-100. After 5 min, the buffer was collected as "supernatant" fractions and kept on ice. Subsequently, cells were incubated in KH buffer containing 1,000 g/ml digitonin, which results in total permeabilization. After 30 min, the buffer was collected and kept on ice. These latter fractions were referred to as "pellet" fractions. Enzyme measurements were performed immediately in both fractions.
Enzyme assays
PMK activity was measured by a radiochemical assay as described previously (11). Phosphoglucoisomerase (PGI) (14) and CAT (9) activities were measured by spectrophotometric assays as described.
Immunoblot analysis
Proteins were separated by SDS-PAGE and transferred onto nitrocellulose by semidry blotting (15). The highly specific affinitypurified antibody directed against human PMK (11) was used at a 1:500 dilution. Antigen-antibody complexes were visualized with goat anti-rabbit IgG-alkaline phosphatase conjugate and CDP-star (Roche Chemicals). As a control for the transfer of protein, each blot was reversibly stained with Ponceau S before incubation with antibodies.
Immunofluorescence
Cells were seeded on coverslips in six-well plates and cultured as indicated above. Immunofluorescence was performed as described (16). Cells were double labeled with polyclonal antibodies directed against human PMK (11) and monoclonal antibodies directed against the peroxisomal marker CAT (17) or the cytosolic marker metallo-matrix protein 7 (MMP7) (Ab-1, clone 1D2; Labvision). PMK antibodies were visualized using biotinylated donkey anti-rabbit Ig (Amersham) and streptavidin-labeled fluorescein isothiocyanate (DAKO). CAT and MMP7 were visualized using goat anti-mouse-labeled Alexa 568 (Molecular Probes). Photographs were taken using a confocal laser scanning microscope (Leica).
Immunocytochemistry of liver samples
Human liver biopsies were fixed in 4% formaldehyde in 0.1 M sodium cacodylate buffer, pH 7.3, containing 1% calcium chloride and processed for Unicryl embedding or for cryostat sectioning as described (18,19). Ultrathin sections of Unicryl embedded samples were incubated with polyclonal antibodies against PMK (11) or the peroxisomal alanine/glyoxylate aminotransferase (AGT) (20), decorated with colloidal gold, and examined by electron microscopy as previously described (18). Cryo-stat sections (7 m) were immunostained with colloidal gold against PMK or AGT followed by silver enhancement and examined by light microscopy as previously described (19). Negative controls were incubated with normal rabbit serum.
Localization of the GFP-PMK fusion protein
Using the same strategy as Olivier et al. (4), a 600 bp Pst I-Apa I fragment of human PMK cDNA was subcloned in frame with the coding sequence of GFP by means of insertion into the Pst I-Apa I sites of pEGFP-C3 (Clontech). The GFP-PMK expression plasmid was transfected into cultured fibroblasts [control (Ctrl) and ZS] and CV1 cells using Lipofectamine Plus (Gibco BRL). The transfection medium consisted of 1 ml of DMEM, 1 g of plasmid DNA, and 6 l of Lipofectamine Plus reagent in which the cells were incubated for 2 h at 37 Њ C. After this 2 h incubation period, 2 ml of DMEM containing 20% FBS and 1% penicillin/streptomycin was added to the culture medium. After 24 h, cells were prepared for immunofluorescence as described above using monoclonal antibodies against CAT or against a peroxisomal membrane protein, deficient in X-linked adrenoleukodystrophy (ALDP), and goat anti-mouse Alexa 568 (Molecular Probes). The localization of the GFP-PMK fusion protein was determined by examining the intrinsic fluorescence of GFP. Fig. 3. A-C: Subcellular fractions of human fibroblasts derived from a familial hypercholesterolemia (FHC) patient (A) or HEK293 cells (B) and CV1 cells (C) overexpressing full-length human PMK were obtained by Nycodenz equilibrium density gradient centrifugation as described in Materials and Methods. Fractions were analyzed for the cytosolic marker PGI (black bars), the peroxisomal marker CAT (gray bars), and PMK (white bars). Relative activities were expressed as the percentage of total gradient activity present in each fraction. The pattern of distribution of PMK activity and PMK protein as determined by immunoblot analysis with an affinity-purified antibody raised against human PMK is similar to the pattern of PGI activity. D-F: Human fibroblasts derived from an FHC patient (D) or HEK293 cells (E) and CV1 cells (F) overexpressing full-length human PMK were incubated with increasing concentrations of digitonin as described in Materials and Methods. Supernatant (open symbols) and pellet (closed symbols) fractions were analyzed for the activities of the cytosolic marker PGI (squares), the peroxisomal marker CAT (triangles), and PMK (circles). Relative activities were expressed as a percentage of total activity (supernatant plus pellet) present in each fraction. The pattern of latency of PMK activity and PMK protein as determined by immunoblot analysis with an affinity-purified antibody raised against human PMK is similar to the pattern of PGI activity. The bars below the immunoblots indicate the relative digitonin concentrations.
Subcellular fractionation of PMK in human fibroblasts
To determine whether in human cells PMK is localized in the cytosol, the peroxisomes, or both, we first performed subcellular fractionation studies with human skin fibroblasts. As a control, we included fibroblasts from a ZS patient lacking any peroxisomal remnants (13). After growth of the cells in lipoprotein-depleted medium to ensure a good induction of the isoprenoid biosynthetic pathway, we prepared a PNS, which was further fractionated by Nycodenz equilibrium density gradient centrifugation. In the normal fibroblasts, this resulted in a clear separation of peroxisomes and cytosol, as reflected by the distribution of the peroxisomal marker enzyme CAT and the cytosolic marker enzyme PGI ( Fig. 1A ). In the ZS fibroblasts, both marker enzymes colocalize, as expected from the absence of peroxisomes that leads to the cytosolic localization of peroxisomal enzymes (Fig. 1B). When PMK activity was measured in the gradient fractions, the activity showed the same distribution as that of the cytosolic marker PGI in both the normal fibroblasts (Fig. 1A) and the ZS fibroblasts (Fig. 1B). Immunoblot analysis of the fractions from the same density gradients using affinity-purified antiserum against human PMK revealed a similar distribution pattern for PMK protein as for PMK activity (Fig. 1A, B).
Digitonin permeabilization studies in human fibroblasts
As an alternative approach to study the subcellular localization of PMK in human fibroblasts, we exposed the cells to increasing concentrations of digitonin. When we measured the enzyme activities of CAT and PGI in the supernatant and pellet fractions of normal fibroblasts, we found a clearly different enzyme-release profile for CAT compared with PGI (Fig. 1C). This indicates that the plasma membrane was disrupted at a lower concentration of digitonin, resulting in the release of cytosolic PGI, whereas the peroxisomal membranes were permeabilized only at higher concentrations of digitonin, resulting in the release of the peroxisomal matrix content, including CAT. As expected, in the ZS fibroblasts lacking peroxisomes, no difference was observed in the release of PGI and CAT by digitonin (Fig. 1D). When we measured PMK activity in all pellet and supernatant fractions, we found that the release of PMK from the normal fibroblasts into the supernatant fractions occurs at the same concentration of digitonin as that of cytosolic PGI (Fig. 1C). In the ZS fibroblasts, PGI, CAT, and PMK were released from the cells at the same digitonin concentration (Fig. 1D). Immunoblot analysis of the various fractions using the antiserum against human PMK revealed a similar distribution pattern for PMK protein as for its activity (Fig. 1C, D). Thus, also in digitonin permeabilization studies, human PMK behaves similar to cytosolic PGI and clearly different from peroxisomal CAT.
Immunofluorescence studies in fibroblasts
We also studied the subcellular localization of human PMK by immunofluorescence microscopy. To this end, we performed double labeling of fibroblasts cultured in lipo- protein-depleted medium using the polyclonal anti-PMK antiserum and monoclonal antibodies directed against human peroxisomal CAT or against human cytosolic MMP7, a cytosolic marker ( Fig. 2 ). When we compared the immunolabeling of PMK in normal fibroblasts and ZS fibroblasts, we observed a similar cytosolic distribution pattern of the fluorescent signal in both cell lines, indicating that the presence or absence of peroxisomes does not affect the localization of PMK. Moreover, there was no colocalization of PMK and CAT in the normal fibroblasts, whereas in the ZS fibroblasts, the distribution pattern of CAT is superimposable on that of PMK, indicating colocalization of CAT and PMK in the cytosol. Also, when we compared the fluorescent signals obtained with anti-PMK and anti-MMP7, we found clear colocalization in the cytosol in both normal fibroblasts and ZS fibroblasts.
Subcellular localization of human PMK in overexpressing cell lines
The results of our various localization studies in human fibroblasts all indicate that endogenous PMK is predominantly, if not exclusively, located in the cytosol and not in peroxisomes. These results are in contrast to the reported peroxisomal localization of the GFP-PMK fusion protein upon overexpression in CHO cells (4). To determine whether this discrepancy in localization might be attribut-able to the overexpression, we also studied the subcellular localization of overexpressed PMK in various cell types. These include CV1 and HEK293 cells stably transfected with human PMK cDNA under the control of the CMV promoter and human FHC fibroblasts.
After fractionation of the various PNS fractions of these cell lines by Nycodenz equilibrium density gradient centrifugation followed by the measurement of PGI, CAT, and PMK activities and PMK protein content in all fractions, we again found a distribution pattern of PMK similar to that of cytosolic PGI and clearly distinct from that peroxisomal CAT in all cell lines ( Fig. 3A-C ). This was the case for endogenously overexpressed human PMK (Fig. 3A, FHC), constitutively overexpressed human PMK (Fig. 3B, HEK ϩ PMK, and Fig. 3C, CV1 ϩ PMK), and endogenously expressed human PMK (HEK Ϫ cells; data not shown) and monkey PMK (CV1 Ϫ cells; data not shown). Also, after selective permeabilization of the cellular membranes using increasing concentrations of digitonin, we found that both endogenously and constitutively overexpressed human PMKs behave similar to cytosolic PGI ( Fig. 3D-F). Moreover, immunofluorescence labeling of the endogenously and constitutively overexpressed human PMK shows a cytosolic localization superimposable on that of cytosolic MMP7 protein and clearly different from the localization of CAT in these cell lines ( Fig. 4 ).
Immunocytochemical studies in human liver
Although our combined data show that, at least in humans, PMK is predominantly a cytosolic protein, they cannot exclude the possibility that a minor amount of PMK is localized in peroxisomes. Therefore, we also performed immunocytochemical studies with ultrathin sections and cryostat sections of human liver, the organ with the highest expression of the enzymes of the presqualene segment of the isoprenoid biosynthesis pathway.
In immunogold labeling experiments using antibodies directed against human PMK, we found only occasional labeling in the cytoplasm of liver parenchymal cells. Although we carefully checked a large number of peroxisomes, we were unable to detect any labeling of PMK in these peroxisomes ( Fig. 5A ). Moreover, even after incubation with higher concentrations of antibodies, as a result of which nonspecific labeling strongly increased, no peroxisomal labeling could be observed (data not shown). As a control, we also performed immunogold labeling experiments on liver sample sections with antibodies against peroxisomal AGT. This revealed a distinct label in the peroxisomal matrix (Fig. 5B), whereas no label was observed in negative controls.
Because cytosolic localization of a nonabundant protein is difficult to demonstrate in ultrathin sections, we also performed immunocytochemistry on cryostat sections of human liver using antibodies against PMK and AGT using the sensitive silver enhancement technique. Although overall staining with the PMK antibodies was rather weak, we observed only a diffuse staining in the cytosol of hepatocytes (Fig. 5C). This pattern is similar to the pattern typically found for the localization of CAT in ZS livers (data not shown). In contrast, a distinct punctate pattern of peroxisomes was obtained when the sections were incubated with antibodies against AGT (Fig. 5D).
Subcellular localization of the GFP-PMK fusion protein
Our results are in marked contrast to those of Olivier et al. (4), who postulated a predominant peroxisomal localization of human PMK based on expression studies with a GFP-PMK fusion protein. We repeated their experiment and expressed the same GFP-PMK fusion protein in different cell lines. Transient expression in human control fibroblasts (control) and CV1 cells revealed a cytosolic localization of the GFP-PMK fusion protein in ف 60% of the GFP-positive cells as determined by GFP fluorescence (data not shown). The other GFP-positive cells displayed a punctate fluorescence suggesting a peroxisomal localization ( Fig. 6A , E ). Surprisingly, however, immunofluorescence labeling of peroxisomes in these cells using monoclonal antibodies directed against peroxisomal CAT (Fig. 6B, F) or the peroxisomal membrane protein ALDP (data not shown) revealed a punctate pattern that clearly differed from the punctate fluorescence of GFP-PMK (Fig. 6A, E). Indeed, in overlays of the different fluorescence patterns, we observed no colocalization of GFP-PMK with CAT or ALDP (data not shown). Moreover, expression of GFP-PMK in human ZS fibroblasts revealed the same fluorescence patterns as in the control and CV1 cells, including the punctate fluorescence of GFP-PMK in ف 40% of the cells (Fig. 6C). These results show that the punctate fluorescence of GFP-PMK is not the result of peroxisomal localization. The punctate fluorescence also does not reflect a lysosomal localization, as determined by subse- quent studies with Lysotracker (Molecular Probes) (data not shown). In addition to the punctate fluorescence, most transfected cells also displayed relatively large fluorescent GFP signals, which might reflect protein aggregates (Fig. 6A, C, E).
DISCUSSION
Compartmentalization of cellular processes into different subcellular compartments is one of the major characteristics of eukaryotic cells. Since their discovery in the 1960s, an increasing number of important metabolic pathways have been attributed to peroxisomes. In the past decade, a predominant peroxisomal localization has also been reported for several enzymes functioning in the presqualene segment of the cholesterol/isoprenoid biosynthesis pathway, including 3-hydroxy-3-methylglutaryl CoA reductase (2), mevalonate kinase (3), PMK (4), mevalonate pyrophosphate decarboxylase (5), isopentenyl pyrophosphate isomerase (6), and farnesyl pyrophosphate synthase (9, 10). However, conflicting results have been reported, raising doubts about the postulated role of peroxisomes in isoprenoid biosynthesis, at least in humans. In this study, we have sought confirmation for the claim that PMK would be predominantly peroxisomal and, as a consequence, that peroxisomes would play a central role in the biosynthesis of isoprenoids, including cholesterol. To this end, we studied the subcellular localization of human PMK using a variety of biochemical and microscopic techniques. In all cases, we found only a cytosolic localization of both endogenously expressed human PMK (in human fibroblasts, human liver, and HEK293 cells) and overexpressed human PMK (in human FHC fibroblasts, HEK293 cells, and CV1 cells). Indeed, no indication of a peroxisomal localization of human PMK was obtained.
Our results are in agreement with our recent finding of normal PMK activity in cells of patients who suffered from ZS (11,12) but are in contrast to those published by Olivier et al. (4), who postulated a predominant peroxisomal localization of human PMK based primarily on expression studies with a GFP-PMK fusion protein. One plausible explanation for the fact that the authentic nonmodified human PMK is localized in the cytosol and the GFP-PMK appeared peroxisomal could be that the fusion of PMK to GFP alters the protein conformation of PMK, thereby exposing its carboxyl-terminal PTS1-like SRL sequence and leading to peroxisomal import. When we tested this possibility by expressing the same GFP-PMK fusion protein in different cell lines, however, we observed a punctate pattern that did not colocalize with the punctate pattern of the peroxisomal CAT or ALDP. The fact that this punctate pattern of GFP-PMK was even observed in peroxisomedeficient ZS cells implies that the punctate pattern is not attributable to a peroxisomal localization of the protein.
We have no explanation for the punctate pattern observed with GFP-PMK, but we found that GFP-PMK is also not localized in the lysosomes. Our combined data imply that one should be very careful in drawing definite conclusions from studies with overexpressed reporter proteins when these are not confirmed by studies with the authentic nonmodified protein under physiological conditions. Another observation arguing against a peroxisomal localization of PMK is the fact that several organisms, including yeast, contain a PMK that has no similarity to mammalian PMKs and, moreover, do not possess a putative PTS signal, although the peroxisomal import machinery is well conserved among yeast and mammals (21). Now that we have shown that, at least in human cells, PMK is not localized in peroxisomes but in the cytosol, one can raise questions regarding the supposed peroxisomal localization of other enzymes functioning in the presqualene segment of the isoprenoid biosynthetic pathway. In fact, Michihara et al. (22,23) recently reported a predominant cytosolic localization of rat and mouse mevalonate pyrophosphate decarboxylase, which also had been postulated to be peroxisomal. Moreover, using an approach similar to that used for human PMK in this study, we found that human mevalonate kinase (24) and mevalonate pyrophosphate decarboxylase (our unpublished observations) are localized in the cytosol and not in peroxisomes. All of these data strongly suggest that peroxisomes in humans are not involved in isoprenoid/cholesterol biosynthesis and corroborate our previous findings that functional peroxisomes are not required for isoprenoid biosynthesis (11,12). | 6,442.8 | 2004-04-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
TiSiCN as Coatings Resistant to Corrosion and Neutron Activation
The aim of the present paper was to evaluate the effect of neutron activation on TiSiCN carbonitrides coatings prepared at different C/N ratios (0.4 for under stoichiometric and 1.6 for over stoichiometric). The coatings were prepared by cathodic arc deposition using one cathode constructed of Ti88 at.%-Si12 at.% (99.99% purity). The coatings were comparatively examined for elemental and phase composition, morphology, and anticorrosive properties in 3.5% NaCl solution. All the coatings exhibited f.c.c. solid solution structures and had a (111) preferred orientation. Under stoichiometric structure, they proved to be resistant to corrosive attack in 3.5% NaCl and of these coatings the TiSiCN was found to have the best corrosion resistance. From all tested coatings, TiSiCN have proven to be the most suitable candidates for operation under severe conditions that are present in nuclear applications (high temperature, corrosion, etc.).
Introduction
In the last few years, ceramics gained more popularity in nuclear fusion applications due to their high insulation properties, heat, and radiation resistance when compared to other commonly used isolators (i.e., plastics). Ceramic materials can be used for the elements of plasma facing (i.e., in diverters and as insulation coating of coils) or for the construction components of tokamaks. In the first case, the materials are permanently exposed to different types of ionizing radiation generated inside the fusion chambers and they should possess certain properties; on the other hand, in the second case, the irradiation will be applied just in case of emergency and the properties should be different than those necessary for constant exposure.
In the present paper, our attention is focused on materials subjected to continuous irradiation. This dangerous phenomenon is associated with the ion component of the internal tokamak plasma due to the deceleration length of ions inside the plasma-facing severe corrosion dissolved steel components by forming precipitates which blocked the system [36]. Moreover, the corrosion leads to a loss of mechanical integrity of materials. Thus, knowledge related to corrosion behavior of materials used in nuclear fusion is very important. For example, Cr based alloys are commonly used for molten salt reactors and information about the effect of Cr amount variation in the alloy and temperature on the corrosion of materials in the chloride salts is not clear. Further systematic studies should be well conducted for the understanding of the chloride salt corrosion mechanism. The challenges about the electrochemical processes in the molten salt reactor are considered to be relatively innovative and few papers on this subject are published; thus, the corrosion of materials is a more challenging topic in the case of molten salt nuclear systems compared to traditional water reactors due to the formation of a passive oxide layer on top of the surface, which is thermodynamically unfavorable in molten salts, leading to a restriction of materials is this area.
The TiSiCN coatings deposited on Ti6Al4V substrate were not so much discussed in the literature, despite their excellent wear and corrosion resistance suitable for many industrial applications, such as navigation, aerospace, engine turbine, and ocean exploration. Wang et al. reported that the C/N ratio in TiSiCN coatings obtained by arc ion plating has a great influence on the tribocorrosion resistance in artificial seawater (standard ASTM D1141-98) [24].
The aim of the present paper is to study the effect of neutron activation on TiSiCN carbonitride coatings prepared by the cathodic arc evaporation method in a reactive gas mixture of C 2 H 2 and N 2 . The coatings were obtained at different C/N ratio (0.4 and 1.6) to study the effect of C/N ratio variation on the neutron activation. The obtained coatings are investigated in terms of elemental and phase composition, crystalline structure, morphology, and corrosion resistance in aggressive saline solution (3.5% NaCl), in order to be considered suitable candidates for operation under severe conditions present in nuclear applications.
Deposition of the Coatings
The cathodic arc deposition system was used for the preparation of TiSiCN on Ti6Al4V alloy. The unit was equipped with one cathode constructed of Ti 88 at.%-Si 12 at.% (99.99% purity). The base pressure prior to coating was of 6 × 10 −4 Pa; the samples were biased at −1000 V for 10 min in an Ar atmosphere at a pressure of 0.2 Pa. For each deposition the Si content was carefully adjusted up to 6 at. % such as the final (C + N)/(metal + Si) ratio to range between 0.6 and 0.8. Depending on the C/N ratio, under stoichiometric coating (0.4) is labeled TiSiCN-1 and over stoichiometric (1.6) as TiSiCN-2. The gas mass flow rates for TiSiCN-1 coating were kept at 25 sccm for CH 4 and at 65 sccm for N 2 ; for TiSiCN-2 the gas flow rates were inversed. Arc current was at 110 A and the substrate bias was −100 V. All parameters were kept constant for maximum 40 min, leading to a deposition temperature of 320 • C and thickness of the coatings of~3 µm. More preparation details could be found in [7,34].
Corrosion Testing and Characterization of the Coatings
The corrosion resistance has been evaluated by the polarization technique in 3.5% NaCl (purchased from Sigma-Aldrich, Hamburg, Germany), using a potentiostat/galvanostat (VersaStat, Princeton Applied Research, Oak Ridge, TN, USA). A typical three-electrode cell was used: sample as working electrode (WE), platinum foil used as counter electrode (CE), and saturated Ag/AgCl-electrode as reference electrode (RE). The open circuit potential (OCP) was monitored until equilibrium occurred. The potentiodynamic curves were plotted from −1 V to +2 V vs. OCP. Corrosion potential (E corr ) and corrosion current density (i corr ) parameters were extracted from the Tafel extrapolations in the range of ±50 mV. Polarization resistance (R p ) was calculated according to the procedure described in ASTM G59-97 standard (reapproved 2014). The corrosion rate (CR) has been calculated by the following formula, assuming that uniform corrosion has taken place over the whole immersed surface: CR mm year = weight loss, g × cm 3 material density, g × 1 exposed area, cm 2 × 10 mm 1 cm × 1 exposed time, h × 8760 h 1 year (1) The anti-corrosive properties of the coatings were estimated by calculating the protective efficiency (P e ) [37]: where i corr,coating and i corr,substrate are the corrosion current densities of coating and substrate, respectively. The total porosity (P) of a coating was assessed using Elsener's empirical equation [37]: where R p,substrate and R p,coating are the polarization resistances of the substrate and coating, respectively, ∆E corr represents the difference between the corrosion potentials of the coatings and substrate, and β a is the anodic slope of the Tafel extrapolation of the substrate. The EIS measurements were performed over 0.5 ÷ 10 3 Hz frequency range by applying a sinusoidal signal of 10 mV RMS vs. EOC. Data were recorded by VersaStudio software and the fitting procedure was performed using ZView software.
The morphology of the surface before and after corrosion tests was investigated using a scanning electron microscope (Hitachi 3030PLUS, Tokyo, Japan). The surface roughness before and after corrosion tests was evaluated using a surface profilometer (Dektak 150, Bruker, Tucson, Arizona) with a stylus diameter of 2.5 µm over a length of 10 mm on two replicates in 10 randomly selected different areas in the centre of each sample. R a (arithmetic average) and S k (profile symmetry relative to the mean line) parameters were used to estimate the roughness of the investigated surfaces.
The elemental composition of both TiSiCN coatings was investigated using an equipped energy dispersive X-ray detector (EDS, Bruker) and a scanning electron microscope (Hitachi 3030PLUS). The crystalline structure and the phase composition were obtained by means of X-ray diffraction (XRD) using a Rigaku SmartLab diffractometer, with Cu Kα radiation (1.5405 nm) set up in θ/2θ geometry range of 30-80 • with a step of 0.02 • /min. Additionally, grazing incidence XRD (GRXRD) was performed by fixing the incident angle at 3 • . The phase identification was aspired by Rietveld profile fitting using the FullProf program and the crystallite sizes were determined from the XRD peak widths using the Scherrer formula.
Fast Neutrons Irradiation and the Neutron Activation Analysis (NAA)
The neutron irradiation of the prepared samples was performed at the IBR-2M pulsed reactor at the Frank Laboratory of Neutron Physics (JINR, Dubna, Russia). The conditions during irradiation were the nominal power of 1450 kW (average power of the reactor during the period cycle) and the cycle time of 288 h. The neutron activation analysis was used for the determination of fast neutrons flux density (by nickel) and fluence on the specimens. The investigated isotope (line in the spectrum) was 58 Co (810.7 keV) and the accepted effective section was 0.092 barn. Laboratory gamma-spectrometer Canberra GC10021 and Lynx multichannel analyzer were used to conduct NAA with high accuracy.
Elemental Composition Surface Morphology (EDS)
In Table 1
X-ray Diffraction Analysis of TiSiCN
For confidence, in order to distinguish reflections belonging to the substrate and the coatings, the substrate without coating was measured alone (Figure 1a). The main substrate peaks were indexed as (100), (002), (101), (102), (110), (103), (112), and (201). Titanium alloy reflection (200) was also present, but with a lower intensity. Most of the reflections were indexed in space group: P63/mmc (α-Ti, hexagonal close packed). The lattice parameters proved to be a = b = 2.93706 Å and c = 4.69061 Å. According to R. Pederson [38], some residual amount of β phase (SG Im3m) should remain and the peak at 2θ = 39.67 • could be indexed as (110) of β-Ti. One can notice that the texture of the substrate is extreme, which is not surprising since Ti alloys tend to exhibit a very strong texture after the hot-rolling manufacturing process. Usually, it is described as {11-20}(0001). In our case, we find (10-10) normal component to be the strongest. The texture was considered as fiber described by FullProf incorporated exponential function P h = G 2 + (1-G 2 )exp(G 2 α 2 ) with texture parameter G 2 > 7 (from FullProf). The texture was very strong with main component , which diverges from (0001) mentioned in the literature. This, however, is perfectly possible, since we are unaware of the direction of cutting from the ingot. In the case of C/N ratio 1.6 (TiSiCN-2), it is expected that TiC will prevail in the coating. TiC crystallizes in the same group Fm3̅ m. XRD of the coating with predominant carbon content was shown in Figure 1c. An effect was observed, which is the slight shift of the coatings peaks towards lowest angles that indicated a content of TiC with the same crystal symmetry, but smaller unit cell parameters than TiN and correspondingly higher diffraction angles. The second difference is a drastic intensity decrease in (111) the reflection in comparison with TiN case and, hence, the decrease in the sharp (111) texture to a negligible one (FullProf texture parameter 0.12470). Lattice parameter was found a = 4.32 Å.
The comparison of XRD spectra before and after corrosion procedure in saline solution did not show any significant changes; however, for TiSiCN-2, we observe considerable decrease in intensities of the diffraction peaks. It could be that some, although little, X-ray intensity is taken away by the oxide layer formed on top of the surface. XRD spectra The scattering density of the substrate, Ti6Al4V, is considerably higher than that of the coating; therefore, the superposition of substrate reflections was expected. Upon inspection of data from the TiSiC-1 sample (Figure 1b), it can be seen that the spectrum displayed complexity with superposition of individual phases of different features. The spectrum Materials 2023, 16, 1835 6 of 16 was characterized by much sharper peaks presumably belonging to Ti6Al4V substrate. Clearly, several substrate peaks emerged, at 2θ = 42.00 • , 60.4 • , and 72 • ; also, a broadened peak (2θ = 36 • ) superimposing (100) Ti6Al4V was observed. These positions coincide roughly with the peaks of the TiN structure found in the literature. The space group was Fm3m (cubic) and the lattice parameter a = 4.31263 Å; however, inside the coating, the lattice appears distorted, with lattice parameter slightly above the literature value of 4.28 Å. One could assume that since TiC possesses the same space group, but higher a TiC phase is present in the coating, but the value for TiC falls out of the range of possible a. Therefore, more probable is the situation where the composition of the coating is not fixed, but C/N ratio has some continuous distribution and, hence, results in continuous dispersion of the cell dimensions (spanning). This novel result is reported for the first time. All these peaks are much broadened, and an attempt was made to estimate the size of coherent scattering blocks using Scherrer equation from (111) peak width. It yields upper limit value for d = 10 nm, which is the evidence for highly dispersed nanostructured coating. The specimen featured very sharp axial texture with overwhelming component (111) in diffraction spectrum; therefore, direction [111] is normal to the surface, with FullProf texture exponential parameter equals 9. The strength of the texture is a very strong texture component.
In the case of C/N ratio 1.6 (TiSiCN-2), it is expected that TiC will prevail in the coating. TiC crystallizes in the same group Fm3m. XRD of the coating with predominant carbon content was shown in Figure 1c. An effect was observed, which is the slight shift of the coatings peaks towards lowest angles that indicated a content of TiC with the same crystal symmetry, but smaller unit cell parameters than TiN and correspondingly higher diffraction angles. The second difference is a drastic intensity decrease in (111) the reflection in comparison with TiN case and, hence, the decrease in the sharp (111) texture to a negligible one (FullProf texture parameter 0.12470). Lattice parameter was found a = 4.32 Å.
The comparison of XRD spectra before and after corrosion procedure in saline solution did not show any significant changes; however, for TiSiCN-2, we observe considerable decrease in intensities of the diffraction peaks. It could be that some, although little, X-ray intensity is taken away by the oxide layer formed on top of the surface. XRD spectra of grazing geometry showed an intriguing phenomenon for the investigated coatings (see Figure 2). In this case, the diffraction power decrease is much stronger than in the Bragg-Brentano geometry case and this time it mainly concerns diffraction from the coatings. The peak erosion is stronger in TiSiCN-2 than in TiSiCN-1, meaning under stoichiometric coating, which is more corrosion-resistant than the over stoichiometric, i.e., nitrogen has a positive influence on the chemical protection of the coating. All peaks are eroded after corrosion, but the effect manifests itself better on the peak at 2θ = 39.6 • . The same is observed on the TiSiCN-2. The same peak exhibits the strongest decrease. We suspect that this reflection, although on the place of β-Ti (110), does not belong to it but to TiO 2 -rutile phase (200) synthesized during the high-temperature coating deposition. Rutile, although not soluble in saline solution, apparently has worse adhesion with the rest of the coating layer and is washed away during the corrosion treatment. a positive influence on the chemical protection of the coating. All peaks are eroded after corrosion, but the effect manifests itself better on the peak at 2θ = 39.6°. The same is observed on the TiSiCN-2. The same peak exhibits the strongest decrease. We suspect that this reflection, although on the place of β-Ti (110), does not belong to it but to TiO2-rutile phase (200) synthesized during the high-temperature coating deposition. Rutile, although not soluble in saline solution, apparently has worse adhesion with the rest of the coating layer and is washed away during the corrosion treatment.
Surface Morphology
The surface morphology images (SEM), alongside the elemental mapping profiles of the TiSiCN coatings on Ti6Al4V alloy, before and after corrosion testing in 3.5% NaCl are presented in Figure 3.
Surface Morphology
The surface morphology images (SEM), alongside the elemental mapping profiles of the TiSiCN coatings on Ti6Al4V alloy, before and after corrosion testing in 3.5% NaCl are presented in Figure 3. Many micro-droplets with different diameters could be seen on the surfaces of the deposited pre-corrosion coatings (Figure 3a,b). The appearance of such micro-sized morphology features is typical not only for the cathodic arc technique but is observed as well for the same coatings that are fabricated using PEMS, CVD, or arc ion plating methods [24,[39][40][41][42]. They are unavoidable and generated during the deposition process and tend to be more pronounced in thicker coatings [40]. From the figures, it could be seen that the micro-droplets differ in size and surface density. On the surface of TiSiCN-2, more and larger droplets are observed, with some reaching up to 20 µm in size. That could be in connection with the difference in the gas mass flow rates (CH 4 and N 2 ) in both samples during deposition, since intensity of defects and growth-induced stress are strongly dependent on the process parameters [43,44].
An assumption could be made regarding the formation process of such microdroplets in the samples under study here. In many papers, it is reported that TiSiCN coating consists predominantly of TiC x N y nanosized crystals embedded in SiC x N y )/a-C amorphous matrix [11,40,41]. It is possible that the lesser amount of nitrogen in the TiSiCN-2 system compared to TiSiCN-1 hinders the amorphous matrix formation, which is reported to play a major role in the nanocrystal grain growth, since it is an interface phase [9,45]. That could lead to more nucleation sites, larger particles to evolve (microdroplets), and also a greater likelihood of occurrence of inter-particle interactions; so, a nitrogen deficiency could lead to more vacant Ti sites available for oxidation and formation of a TiO2 layer on top, as illustrated in [46]. The EDS data for TiSiCN-2 (Table 1) somewhat reinforce such an assumption-lower nitrogen content coincides with increased percentage values for both Ti and O.
It should be noted that similar large surface formations have been shown to occur also on coatings without carbon (TiSiN) [46,47]. Studies on such formations reveal that they consist predominantly of Ti, N, and O [46,47]; thus, it seems that the different carbon content in the investigated TiSiCN samples should not play a role in the difference in both number and density of the microdroplets. On the other hand, it is reported that higher carbon content (as in TiSiCN-2) enhances the amorphous phase amount and favors reducing the size of crystallites [9,44]. On the carbon elemental profile map, some areas with increased concentration could be seen on both coatings (Figure 3a,b). In the titanium and nitrogen mapping profiles, the same areas lack in color, thus unveiling the absence of both elements. Since there is no titanium in these areas, the presence of carbide structures should be excluded and only the accumulation of amorphous carbon should be regarded in these locations. Again, it seems that the difference in carbon content in both coatings does not play a role in the formation of such carbon piles, since they are observed in both TiSiCN coatings.
Despite the presence of microdroplets, both coatings were uniformly deposited, without any major defects, such as pinholes, voids, or cracks. The different C/N ratio in the coatings is easily recognizable from the difference in density on C and N elemental profiles but does not have a major effect on the titanium homogeneity in the coatings. The smoother surface morphology in TiSiCN-1 (SEM images in Figure 3), could be associated with the bigger quantity of nitrogen, since it is essential for the formation of both TiN and Si 3 N 4 amorphous matrix components.
Despite the aggressive corrosion medium, there was no change in the surface morphology. A decrease in the number of micro and macro particles of different sizes was observed only on the surface of the sample. The surface morphology of TiSiCN coatings can be due to carbon content, as previously reported [9]. Usually, these defects constitute preferential diffusion trails of aggressive species and can result in an accelerated coating failure [34]. In addition, Li investigated the surface morphology of TiSiCN coatings at the C/Si 4:0, 3:1, 2:2, 1:3, and 0:4 ratios and concluded the ideal surface morphology is formed in the ratio C:Si = 2:2 when no surface degradation occurs [21]. Table 2 contains the roughness parameters values before and after corrosion testing performed in 3.5% NaCl. R a is the average arithmetic deviation from the mean line linked with the peaks profile height/depth and valleys present on the surface [48]. Both coated surfaces exhibited much higher R a values than the uncoated substrate before and after corrosion investigation, indicating rougher surfaces. Such results were expected because of the observed morphology patterns in SEM images, including microdroplets and pores visible on the surface of both coatings. Again, in good agreement with SEM observations, TiSiCN-2 coating exhibits rougher surfaces than TiSiCN-1 coating before corrosion. It seems that the lower C/N ratio leads to smoother surface of the coating. Although the uncoated substrate showed its smoothest surface before corrosion (R a~3 7 nm), after the corrosion attack R a decreased to 67.9% of its original value, indicating a severe corrosive process. On the other hand, the deviation in the R a values for both coated surfaces of 2.4% and 0.2% for TiSiCN-1 and TiSiCN-2, respectively, indicate that the coatings were almost intact after corrosion testing, signifying a less pronounced corrosion of these surfaces. The difference in the R a values of the two coatings is statistically insignificant; thus, it could be concluded that both are not prone to significant changes upon corrosion attack. It could also be noted that the TiSiCN-2 rougher surface seems to be less susceptible to corrosion, compared to the smoother surface of the TiSiCN-1 sample. Table 2. Roughness parameters of the investigated surfaces before and after corrosion tests (R aarithmetic average deviation from the mean line; S k -skewness factor).
Substrate and Samples
Before Considering the skewness value, it is recognized that a positive value indicates better corrosion resistance [49]. By comparing the S k values before corrosion, one may see that the uncoated substrate has negative value, unlike the two film specimens, indicating that the surface has more valleys than peaks, probably due to the polishing process. After corrosion testing, the S k value is more pronounced, displaying the depression depth increase, meaning that the corrosive solution dug deep in the initial cavities found on the Ti6Al4V surface. The corrosion experiment has a different effect on the two coatings. While for the TiSiCN-1 the S k value lowers, meaning some kind of smoothing, with the filling of available pits and the peaks becoming less pronounced, in the case of the TiSiCN-2 coating an opposite process unfolds ( Table 2). This suggests a possible difference in the chemical composition of the constituents of the surfaces of the two coatings. The S k values were close before and after corrosion of TiSiCN-1; thus, it could be concluded that the corrosive solution did not affect the coating surface to a big extent. On the contrary, in the case of TiSiCN-2, a significant increase in S k value after corrosion could be observed, meaning that a severe corrosive process has taken place on the surface. It is possible that a lower C/N ratio in TiSiCN-1 to make the surface more prone to passivation process and quick development of an oxide layer, consequently increasing its corrosion resistance. Films with higher C/N ratio are more abundant in microdroplets and they are larger. It results in a rougher surface, which leads to a larger interfacial area with the corrosive environment and, therefore, an enhanced rate of the corrosion process is expected [49].
Corrosion Behaviour
The EIS data obtained for TiSiCN-coated TiAlV alloy are presented in Figure 4 as both Nyquist and Bode amplitude plots. As observed in Nyquist impedance plots, slightly higher semicircles were obtained for TiSiCN-1 coating, indicating a higher charge transfer resistance. The main frequency semicircle is modeled as a combination of parallel resistance and capacitance elements in series with electrolyte resistance. The electrical equivalent circuit (EEC) used for fitting the impedance data is also presented as an insert in Figure 4a. Two interfaces were considered: the porous outer oxide layer/coating-electrolyte interface (characterized by CPE coat and R pore parameters) and the inner layer formed at the electrolyte/dense film interface (characterized by CPEdl and Rct parameters). The same EEC was used also for the Ti6Al4V substrate, the model being an indicated circuit used for coated surfaces. According to Pan et al., a bilayer structure of oxide film is formed on the surface of titanium during the immersion in a saline environment: TiO 2 dense inner layer and porous Ti6Al4V alloy outer layer [50]. According to literature reports, the thickness and characteristics of the oxide film depends on the testing solution [50], but this is mainly formed of TiO 2 , Ti 3 O 5 , Ti 2 O 3 , and TiO oxides [51][52][53]. I. Milošev et al. used the same model to fit the impedance data taking into consideration the formation of the mentioned oxides and suboxides formed on the Ti-based alloys immersed in different media, as indicated by XPS analysis [51,54]. Table 3 presents the fitted parameters of impedance data obtained, along with χ 2 parameter as an indication of the goodness of fit. For Ti6Al4V alloy, notice the high value of the Qdl (resistance associated to the current flow through the pores), which shows that the formed oxide layer allows the electrolyte to ingress through the pores. Indeed, the pore resistance indicated the lowest value as compared with the coated surfaces (Rpore = 4 kΩ cm 2 ). Instead, the highest value was showed by TiSiCN-1 coating (Rpore = 3626 kΩ cm 2 ), indicating a more compact structure with beneficial effects on electrochemical behavior when immersed in 3.5% NaCl solution. This coating also showed the highest αcoat parameter (0.77), followed by TiSiCN-2 (0.63), which are values related to non-uniform current distribution along the surface [55]. At the interface with the substrate, there is an almost defect-free surface, since αdl showed high values near 1; thus, the constant phase element (CPE) used in this case could be seen as an ideal capacitor. Instead, at electrolyte-coating interface αcoat showed low values and a CPE is needed to take into consideration possible deviations from the ideal dielectric behavior [56]. As pointed out in literature, there are multiple factors associated with these deviations, which include surface disorder or inhomogeneity, geometric irregularities, working electrode porosity, or ether surface roughness [56]; thus, the surface properties of the electrode under investigation can have a possible contribution in electrochemical results. Considering the values presented in Table 2, one can note that the roughness measured for TiSiCN-1 before corrosion examination proved a smother surface since the Ra parameter showed a value of ~473 nm, whereas for TiSiCN-2 Ra, it was ~545 nm.
Sample
Rs Table 3 presents the fitted parameters of impedance data obtained, along with χ 2 parameter as an indication of the goodness of fit. For Ti6Al4V alloy, notice the high value of the Q dl (resistance associated to the current flow through the pores), which shows that the formed oxide layer allows the electrolyte to ingress through the pores. Indeed, the pore resistance indicated the lowest value as compared with the coated surfaces (R pore = 4 kΩ cm 2 ). Instead, the highest value was showed by TiSiCN-1 coating (R pore = 3626 kΩ cm 2 ), indicating a more compact structure with beneficial effects on electrochemical behavior when immersed in 3.5% NaCl solution. This coating also showed the highest α coat parameter (0.77), followed by TiSiCN-2 (0.63), which are values related to non-uniform current distribution along the surface [55]. At the interface with the substrate, there is an almost defect-free surface, since α dl showed high values near 1; thus, the constant phase element (CPE) used in this case could be seen as an ideal capacitor. Instead, at electrolyte-coating interface α coat showed low values and a CPE is needed to take into consideration possible deviations from the ideal dielectric behavior [56]. As pointed out in literature, there are multiple factors associated with these deviations, which include surface disorder or inhomogeneity, geometric irregularities, working electrode porosity, or ether surface roughness [56]; thus, the surface properties of the electrode under investigation can have a possible contribution in electrochemical results. Considering the values presented in Table 2, one can note that the roughness measured for TiSiCN-1 before corrosion examination proved a smother surface since the Ra parameter showed a value of~473 nm, whereas for TiSiCN-2 R a , it was~545 nm.
The high-frequency parameters Q coat and R pore represent the properties of the reactions at the electrolyte-coating interface. The results show that coating capacitance exhibits higher values for TiSiCN-coated Ti6Al4V, indicating a more active exchange with the NaCl solution, especially in the case of TiSiCN-2. Since there are insufficient data in the low-frequency region in the measured frequency range, it was not possible to determine the R ct parameter. The χ 2 parameter was also presented and the low values have indicated a validation of the fitting procedure. Table 3. The fitting results of EIS data for the investigated samples after 1 h immersion in 3.5% NaCl.
Sample
Rs (Ω cm 2 ) Qcoat (µF s (α−1) cm −2) α coat R pore (kΩ cm 2 ) Q dl (µF s (α−1) cm −2 ) α dl R ct (kΩ cm 2 ) χ 2 The evolution of the open circuit potential (E oc ) during the 1 h immersion is given in Figure 5a. Both coatings exhibited a constant evolution of E oc , indicating a good coating stability in 3.5% NaCl solution during 1 h immersion tests. Under stoichiometric coatings (TiSiCN-1) exhibited more electropositive E oc values compared to the over stoichiometric coatings (TiSiCN-2). The uncoated alloy displayed an unstable surface due to the fluctuations, despite the fact that it exhibited electropositive value of E oc . The evolution of the open circuit potential (Eoc) during the 1 h immersion is given in Figure 5a. Both coatings exhibited a constant evolution of Eoc, indicating a good coating stability in 3.5% NaCl solution during 1 h immersion tests. Under stoichiometric coatings (TiSiCN-1) exhibited more electropositive Eoc values compared to the over stoichiometric coatings (TiSiCN-2). The uncoated alloy displayed an unstable surface due to the fluctuations, despite the fact that it exhibited electropositive value of Eoc. The potentiodynamic polarization curves of the investigated surfaces are shown in Figure 5b, while the electrochemical parameters are presented in Table 4. The resistance to corrosion of the investigated samples could be estimated according to the following principles: (1) Electropositivity signifies good resistance to corrosion: more electropositive corrosion potential value (Ecorr) means that the material is nobler in the used electrolyte. Table 4, note that both coatings exhibit higher Rp value than the uncoated substrate; (4) Porosity of the coatings was also considered: surfaces with low porosity have good anticorrosive properties. TiSiCN-1 coating has low porosity, indicating that the increase in C/N ratio leads to an increase in porosity, meaning the loss of corrosion resistance; (5) The protective efficiency (Pe) was also considered: the highest Pe value was obtained for the TiSiCN-1 coating. The potentiodynamic polarization curves of the investigated surfaces are shown in Figure 5b, while the electrochemical parameters are presented in Table 4. The resistance to corrosion of the investigated samples could be estimated according to the following principles: (1) Electropositivity signifies good resistance to corrosion: more electropositive corrosion potential value (E corr ) means that the material is nobler in the used electrolyte. Table 4, note that both coatings exhibit higher R p value than the uncoated substrate; (4) Porosity of the coatings was also considered: surfaces with low porosity have good anticorrosive properties. TiSiCN-1 coating has low porosity, indicating that the increase in C/N ratio leads to an increase in porosity, meaning the loss of corrosion resistance; (5) The protective efficiency (P e ) was also considered: the highest P e value was obtained for the TiSiCN-1 coating.
To summarize the electrochemical investigations, the increase in C/N ratio leads to a loss of the anticorrosive properties of the TiSiCN coatings in 3.5% NaCl solution. Table 4. Electrochemical parameters of the investigated samples after 1 h immersion in 3.5% NaCl: E corr : corrosion potential; i corr : corrosion current density; R p : polarization resistance; P: porosity; P e : protective efficiency.
Neutron Activation Analysis for Determination of Fast Neutrons Flux Density
The neutron activation analysis (NAA) principle lies in the interplay between neutrons and the nuclei from the investigated material. It generates gamma radiation, and its intensity is being determined. In the present research, we measured the neutrons fluency through the samples, as well as the density of the neutron flux in a certain energetical diapason. The neutron spectrum was taken down via neutron activation satellite sample. Nickel (Ni) was activated to 58 Co and its gamma activity was used for the estimation of the fast neutron flux density. What makes Ni appropriate is its initial capture cross-section from 1 MeV for neutrons. The reactions are: 60 Ni + f ast neutron → triton + 58 Co (4) 58 Ni + f ast neutron → protons + 58 Co (5) In Figure 6 shows the energy cross-sections relations of the reactions. Judging by the upper right graph, only reaction (5) takes place in the energy range that is of importance for us. The reaction cross-section on the plateau part of the diagram equals tenths of a barn. Reaction (4) starts at > 16 MeV with a cross-section of tenths of a millibarn, as illustrated in the lower right graph ( Figure 6).
Despite the general rule that the energy should not surpass 3 MeV in laboratory conditions, registered X-ray and X-ray radiation energies vary between 40 keV to 10 MeV. The Canberra GC10021 spectrometer has integral nonlinearity of 0.025% and that distinctly points to an energy-channel adequacy; also, the FWHM is 1.1 keV for the 122 keV line and 1.8 keV for the 1332 keV line.
For the neutron flux densities estimation, an operative cross-section of σ eff = 92 mb was employed, as well as considering the 60 Ni nuclei to 58 Ni nuclei ratio in the measurement wire and cross-sections integral convolutions of the fulfilled reactions. Figure 7 presents the NAA satellite gamma-spectrum after illumination with specimens.
The location of the total absorption peak at E y = 810.7 keV matches 58 Co (Figure 7). Its activity could be used in the fast neutrons flux density evaluation for each sample. Calculations determined the fast neutrons flux density and the fluency of both samples to be of 1.65 × 10 7 n cm −2 s −1 and of 1.71 × 10 13 n cm −2 , respectively. The gamma dose rate was estimated via dosimeter after the specimen removal from the irradiation unit. Prompt gamma decay, rather than delayed beta decay, would indicate sample activation. The dosimeter background level indications of 0.1-0.2 µSv\h revealed no sample activation. The absence of nucleus with significant cross-section to interact with the fast neutrons in the studied sample predicts this aspect. Bearing in mind the vast fast neutron flux and the fluencies reached, the samples non-activation could indicate their potential use under reactor analogous conditions. That would be so since it is well known that the most severe defect formation process is caused by fast neutrons. Future studies regarding the resulting number of defects in the crystal structure of such are necessary.
In Figure 6 shows the energy cross-sections relations of the reactions. Judging by the upper right graph, only reaction (5) takes place in the energy range that is of importance for us. The reaction cross-section on the plateau part of the diagram equals tenths of a barn. Reaction (4) starts at > 16 MeV with a cross-section of tenths of a millibarn, as illustrated in the lower right graph (Figure 6). Despite the general rule that the energy should not surpass 3 MeV in laboratory conditions, registered X-ray and X-ray radiation energies vary between 40 keV to 10 MeV. Тhe Canberra GC10021 spectrometer has integral nonlinearity of 0.025% and that distinctly points to an energy-channel adequacy; also, the FWHM is 1.1 keV for the 122 keV line and 1.8 keV for the 1332 keV line.
For the neutron flux densities estimation, an operative cross-section of σeff = 92 mb was employed, as well as considering the 60 Ni nuclei to 58 Ni nuclei ratio in the measurement wire and cross-sections integral convolutions of the fulfilled reactions. Figure 7 presents the NAA satellite gamma-spectrum after illumination with specimens.
The location of the total absorption peak at Ey = 810.7 keV matches 58 Co (Figure 7). Its activity could be used in the fast neutrons flux density evaluation for each sample. Calculations determined the fast neutrons flux density and the fluency of both samples to be of 1.65 × 10 7 n cm −2 s −1 and of 1.71 × 10 13 n cm −2 , respectively. The gamma dose rate was estimated via dosimeter after the specimen removal from the irradiation unit. Prompt gamma decay, rather than delayed beta decay, would indicate sample activation. The dosimeter background level indications of 0.1-0.2 μSv\h revealed no sample activation. The absence of nucleus with significant cross-section to interact with the fast neutrons in the studied sample predicts this aspect. Bearing in mind the vast fast neutron flux and the fluencies reached, the samples non-activation could indicate their potential use under reactor analogous conditions. That would be so since it is well known that the most severe defect formation process is caused by fast neutrons. Future studies regarding the resulting number of defects in the crystal structure of such are necessary.
Conclusions
TiSiCN carbonitride coatings prepared at different C/N ratios of 0.4 and 1.6, respectively, were developed by cathodic arc evaporation. The SEM investigation showed the presence of micro-droplets of different sizes present on the surfaces, especially for the high carbon coating. Some shallow oxides might form during the immersion, as indicated by XRD, since there was a decrease in the diffraction lines intensities after the corrosion assessment; however, there was no visible change in the SEM surface morphology. Modification of surface roughness as compared with the pre-tested coatings was observed
Conclusions
TiSiCN carbonitride coatings prepared at different C/N ratios of 0.4 and 1.6, respectively, were developed by cathodic arc evaporation. The SEM investigation showed the presence of micro-droplets of different sizes present on the surfaces, especially for the high carbon coating. Some shallow oxides might form during the immersion, as indicated by XRD, since there was a decrease in the diffraction lines intensities after the corrosion assessment; however, there was no visible change in the SEM surface morphology. Modification of surface roughness as compared with the pre-tested coatings was observed mainly in the case of TiSiCN obtained at 1.6 C/N ratio (TiSiCN-2). According to EIS-fitted parameters, the highest pore resistance value was shown by TiSiCN-1 coating, indicating a more compact structure with beneficial effects on electrochemical behavior; also, a more uniform current distribution along the surface was observed in this case, which could be linked with a smoother surface. During samples immersion, more electropositive E oc values, lower i corr , and a higher polarization resistance were exhibited by the under stoichiometric coatings (TiSiCN-1). On the other hand, the increase in C/N ratio leads to an increase in porosity and, consequently, to surfaces with lower anticorrosive properties. In the case of irradiation with 1.65 × 10 7 n/cm 2 /sec fast neutron flux, short and long-lived isotopes were not observed in the samples, not even after 288 h. These results proved that both TiSiCN coatings can be successfully used in the severe conditions, which are characteristic to nuclear plants. | 9,347.8 | 2023-02-23T00:00:00.000 | [
"Materials Science"
] |
Instrument Development: Chinese Radiometric Benchmark of Reflected Solar Band Based on Space Cryogenic Absolute Radiometer
: Low uncertainty and long-term stability remote data are urgently needed for researching climate and meteorology variability and trends. Meeting these requirements is di ffi cult with in-orbit calibration accuracy due to the lack of radiometric satellite benchmark. The radiometric benchmark on the reflected solar band has been under development since 2015 to overcome the on-board traceability problem of hyperspectral remote sensing satellites. This paper introduces the development progress of the Chinese radiometric benchmark of the reflected solar band based on the Space Cryogenic Absolute Radiometer (SCAR). The goal of the SCAR is to calibrate the Earth–Moon Imaging Spectrometer (EMIS) on-satellite using the benchmark transfer chain (BTC) and to transfer the traceable radiometric scale to other remote sensors via cross-calibration. The SCAR, which is an electrical substitution absolute radiometer and works at 20 K, is used to realize highly accurate radiometry with an uncertainty level that is lower than 0.03%. The EMIS, which is used to measure the spectrum radiance on the reflected solar band, is designed to optimize the signal-to-noise ratio and polarization. The radiometric scale of the SCAR is converted and transferred to the EMIS by the BTC to improve the measurement accuracy and long-term stability. The payload of the radiometric benchmark on the reflected solar band has been under development since 2018. The investigation results provide the theoretical and experimental basis for the development of the reflected solar band benchmark payload. It is important to improve the measurement accuracy and long-term stability of space remote sensing and provide key data for climate change and earth radiation studies.
Introduction
The impact of human activities on the Earth's ecosystem is gradually increasing. The world is facing significant environmental challenges [1,2]. The phenomena of glacier melting, sea level rise, land drought, and more extreme weather show that the climate system is changing [3,4]. Climate change is closely related to human reproduction. In the future, climate change will be more noticeable and even pose various dangers. Solar radiation, which is reflected from the Earth's surface, clouds, etc., back to space, constitutes a powerful and highly variable feature of the climate system through changes in snow cover, sea ice, land use, aerosol, and cloud properties. Systematic and spatially resolved We need to have both high accuracy and high stability data to recognize the long-term changes of the Earth. Wielicki researched the relationship between absolute calibration accuracy and the accuracy of global average decadal climate change trends [47]. The research results suggested the need for a perfect observation system as well as varying levels of instrument calibration uncertainty. The measurement uncertainty dramatically affects both climate trends and the time required to detect trends. The remote sensor should not only ensure comparability between various short-term data, but also the comparability of data across decades or even longer periods.
On-satellite calibration cannot realize traceability at present. Thus, remote sensing data are incomparable between different countries, different series of the same country, and different satellites of the same series. Total solar irradiance (TSI) is a pivotal parameter of climate models. TSI has been continuously measured on satellite platforms since 1978 [48]. The spatial TSI data from different remote sensors and different counties constitute the data chain needed to ensure long-term accuracy. Spatial TSI data have obvious 0.5% deviations between different remote sensors and different counties according to the research of Suter [49]. The World Radiation Center (WRC) established the World Radiometric Reference (WRR) to unify the solar radiometric scale in 1978. The worldwide solar absolute radiometers are traced back to the WRR by the International Pyrheliometer Comparison (IPC), which is held every five years. The TSI payloads are calibrated by the WRR transfer radiometers before being launched. Then, the uniformity of the spatial TSI data is improved from 0.5% to 0.3%. NASA developed the TSI Radiometer Facility (TRF) to calibrate the TSI radiometers using an end-to-end methodology. The TRF used the cryogenic absolute radiometer as the radiometric benchmark. TSI radiometers and cryogenic absolute radiometers measure the same incident light. The TRF provides the calibration means of TSI radiometers in vacuum-like spatial operating conditions and improves the calibration accuracy of TSI instruments [50]. The TSI measurement accuracy was further improved to 0.035%, such as the Active Cavity Radiometer Irradiance Monitor (ACRIM) launched by the ACRIM satellite (ACRIMSAT), the Total Irradiance Monitor (TIM) launched by the Solar Radiation and Climate Experiment satellite (SORCE), the Total Irradiance Monitor (TIM) launched by the U.S. Air Force Space Test Program spacecraft of the Total Solar Irradiance Calibration Transfer Experiment (TCTE), etc. [51]. However, the long-term stability of the spatial TSI data has huge uncertainty due to the lack of TSI calibration after launching. Once the data chain is broken, the absolute accuracy of spatial TSI data is negatively affected.
Spatial radiometric benchmarks must be established to improve the calibration accuracy of the remote sensing payload. The spatial applications of ultra-high accuracy measurement, such as the cryogenic absolute radiometer and phase-change blackbody, face considerable challenges. The high cost of a radiometric benchmark will exceed the cost of the payload. It is not economically feasible to equip each payload with an expensive calibration system. The benchmark satellite is an economic and effective means to solve this problem. The benchmark satellite records ultra-high accuracy measurements. The radiometric scale is transferred to other remote sensors via cross-calibration. The in-orbit calibration can ensure the comparability of multi-source remote sensing data and products.
The key technologies of the space-based radiometric benchmark were researched, including the cryogenic radiometric benchmark, the phase-change blackbody, and lunar calibration. The goal of the space-based radiometric benchmark satellite is to accurately measure and calibrate the emitted earth spectrum, the incident solar spectrum, and the reflected solar spectrum.
The Chinese Radiometric Benchmark on the reflected solar band has been under development based on the SCAR. The operating models of the radiometric benchmark satellite are self-calibration, TSI calibration, uniform field calibration, and lunar calibration, as shown in Figure 1. The self-calibration model is periodically applied to check the measurement accuracy. The TSI monitor is calibrated by the synchronous measurement with the SCAR in the TSI calibration model. The remote sensors on the other satellite are calibrated by the radiometric benchmark satellite in the uniform field calibration model. When the orbits cross over uniform fields, such as deserts, lakes, and ice, the remote sensors on the other satellite are calibrated by synchronous observation under the same conditions. The moon is also a uniform field. The cross-calibration is achieved by synchronous radiance observation of the moon.
Principle of Radiometric Benchmark on Reflected Solar Band
The radiometric benchmark on the reflected solar band uses the SCAR to establish the calibration system, instead of using a solar diffuser, standard lamps, vicarious calibration methods, and groundbased calibration techniques, as shown in Figure 2.The calibration system consists of the SCAR and BTC. The SCAR is an electrical substitution radiometer that works at 20 K. The detector of the SCAR is a blackbody cavity with super high absorptance. The incident light is converted into the temperature rise of the blackbody cavity by multiple reflection and absorption. The power of the incident light can be obtained by precisely measuring the electrical power when the temperature rise is replaced by electrical power. The heating effect of the incident light and electrical heater is highly equivalent at 20 K. The measurement accuracy of the SCAR is thus improved. The radiance of the Earth and moon is measured by the EMIS. The EMIS needs a high signal-to-noise ratio and a large dynamic range. The power benchmark of SCAR is converted into a radiance benchmark by the BTC. Based on the multiwavelength laser diodes and the halogen tungsten lamp, the EMIS can be radiance calibrated by the transfer radiometer. The linearity of the EMIS can be calibrated by the sun attenuator. The goal of the radiometric benchmark on the reflected solar band is shown in Table 1. The radiance uncertainty is expected to reach 1%.
Principle of Radiometric Benchmark on Reflected Solar Band
The radiometric benchmark on the reflected solar band uses the SCAR to establish the calibration system, instead of using a solar diffuser, standard lamps, vicarious calibration methods, and ground-based calibration techniques, as shown in Figure 2. The calibration system consists of the SCAR and BTC. The SCAR is an electrical substitution radiometer that works at 20 K. The detector of the SCAR is a blackbody cavity with super high absorptance. The incident light is converted into the temperature rise of the blackbody cavity by multiple reflection and absorption. The power of the incident light can be obtained by precisely measuring the electrical power when the temperature rise is replaced by electrical power. The heating effect of the incident light and electrical heater is highly equivalent at 20 K. The measurement accuracy of the SCAR is thus improved. The radiance of the Earth and moon is measured by the EMIS. The EMIS needs a high signal-to-noise ratio and a large dynamic range. The power benchmark of SCAR is converted into a radiance benchmark by the BTC. Based on the multiwavelength laser diodes and the halogen tungsten lamp, the EMIS can be radiance calibrated by the transfer radiometer. The linearity of the EMIS can be calibrated by the sun attenuator. The goal of the radiometric benchmark on the reflected solar band is shown in Table 1. The radiance uncertainty is expected to reach 1%.
Principle of Radiometric Benchmark on Reflected Solar Band
The radiometric benchmark on the reflected solar band uses the SCAR to establish the calibration system, instead of using a solar diffuser, standard lamps, vicarious calibration methods, and groundbased calibration techniques, as shown in Figure 2.The calibration system consists of the SCAR and BTC. The SCAR is an electrical substitution radiometer that works at 20 K. The detector of the SCAR is a blackbody cavity with super high absorptance. The incident light is converted into the temperature rise of the blackbody cavity by multiple reflection and absorption. The power of the incident light can be obtained by precisely measuring the electrical power when the temperature rise is replaced by electrical power. The heating effect of the incident light and electrical heater is highly equivalent at 20 K. The measurement accuracy of the SCAR is thus improved. The radiance of the Earth and moon is measured by the EMIS. The EMIS needs a high signal-to-noise ratio and a large dynamic range. The power benchmark of SCAR is converted into a radiance benchmark by the BTC. Based on the multiwavelength laser diodes and the halogen tungsten lamp, the EMIS can be radiance calibrated by the transfer radiometer. The linearity of the EMIS can be calibrated by the sun attenuator. The goal of the radiometric benchmark on the reflected solar band is shown in Table 1. The radiance uncertainty is expected to reach 1%.
SCAR
The SCAR is used to realize high accuracy radiometric measurements on satellites. The SCAR adopts the principle of the electrical substitution that has been used for nearly 100 years. The cryogenic electrical substitution measurement is the main method used for absolute radiometric calibration. The detector of the SCAR is the blackbody cavity, which has an ultra-high absorption ratio of incident light. The incident light is converted into the temperature rise of the blackbody cavity by multiple reflections and absorptions. The power of the incident light can be obtained by preciously measuring the electrical power when the temperature rise is replaced by electrical power. The incident light power (P O ) can be determined by: where ∆T O is the temperature rise caused by optical power, ∆T E is the temperature rise caused by the equivalent electrical power, α is the absorption of the blackbody cavity, N is the inequivalence coefficient, V E is the heating voltage, I E is the heating electric current, and η is the vacuum window transmittance. The SCAR uses cryogenic and vacuum technology to reduce the thermal noise from radiation and air convection compared with normal environmental temperature and pressure. The superconductivity application can eliminate the ohmic heat loss on the lead and improve the measurement uncertainty. The blackbody cavity is composed of oxygen-free copper. In the cryogenic condition, the thermal diffusivity of oxygen-free copper increases by four orders of magnitude, and it is possible to increase the volume for developing a blackbody cavity with an ultra-high absorption ratio. The ground-based cryogenic radiometer generally adopts a liquid helium cooled mode, and works at 4-10 K. However, the liquid helium cooled mode cannot be applied in space. The mechanical Stryn-type Pulse Tube Cryocooler (SPTC) has been under development to reduce its volume and mass and improve its working life. The SCAR was designed for a working temperature of 20 K due to the limitation of the refrigeration efficiency.
EMIS
The EMIS is used to measure the radiance of the Earth and moon. The optical design of the EMIS consists of a telescope and a hyperspectral imaging spectrometer, as shown in Figure 3. The telescope adopts a four-mirror Anastigmat (4MA) design to eliminate aberration. The 4MA consists of four aspheric mirrors. The hyperspectral imaging spectrometer adopts an Offner structure. The prism is used as the dispersion element. The telescope uses the image space telecentric structure to achieve the matching conditions with the pupil of the spectrometer. The influence of stray light is reduced by setting baffles at the intermediate image plane. The EMIS makes a trade-off between the spectral and spatial resolution, with spectral and spatial sampling that are better than 10 nm and 100 m, respectively. The swath width is about 50 km at the nadir from a 600 km orbit. The EMIS adjusts the parameters of the lens, such as thickness and curvature. Then, the EMIS obtains a lower dispersion non-linear value and a better spectral performance is obtained.
BTC
The radiometric scale of the EMIS can be traced back to the SCAR. The measurement accuracy and long-term stability of the EMIS can be improved by on-board hyperspectral calibration. The principle of hyperspectral calibration on a satellite is shown in Figure 4. The hyperspectral calibration on the satellite consists of the SCAR and the BTC. The SCAR is the primary on-board benchmark that realizes stable and highly accurate long-term radiation measurement. The transfer radiometer (TR) is calibrated and used as the working benchmark using laser power measurement and comparison.
The multi-spectral radiance of the tungsten halogen lamp is calibrated by the TR. The whole spectral radiance of the tungsten halogen lamp is reconstructed using the fitting algorithm. The hyperspectral calibration of the EMIS can then be realized by the tungsten halogen lamp. The response linearity of the EMIS is calibrated by the solar attenuator.
Development of Radiometric Benchmark on Reflected Solar Band
The key radiometric benchmark technologies on the reflected solar band are being researched.
The prototype of the SCAR has been under development since 2015, supported by the National High-
BTC
The radiometric scale of the EMIS can be traced back to the SCAR. The measurement accuracy and long-term stability of the EMIS can be improved by on-board hyperspectral calibration. The principle of hyperspectral calibration on a satellite is shown in Figure 4. The hyperspectral calibration on the satellite consists of the SCAR and the BTC. The SCAR is the primary on-board benchmark that realizes stable and highly accurate long-term radiation measurement. The transfer radiometer (TR) is calibrated and used as the working benchmark using laser power measurement and comparison. The multi-spectral radiance of the tungsten halogen lamp is calibrated by the TR. The whole spectral radiance of the tungsten halogen lamp is reconstructed using the fitting algorithm. The hyperspectral calibration of the EMIS can then be realized by the tungsten halogen lamp. The response linearity of the EMIS is calibrated by the solar attenuator.
BTC
The radiometric scale of the EMIS can be traced back to the SCAR. The measurement accuracy and long-term stability of the EMIS can be improved by on-board hyperspectral calibration. The principle of hyperspectral calibration on a satellite is shown in Figure 4. The hyperspectral calibration on the satellite consists of the SCAR and the BTC. The SCAR is the primary on-board benchmark that realizes stable and highly accurate long-term radiation measurement. The transfer radiometer (TR) is calibrated and used as the working benchmark using laser power measurement and comparison.
The multi-spectral radiance of the tungsten halogen lamp is calibrated by the TR. The whole spectral radiance of the tungsten halogen lamp is reconstructed using the fitting algorithm. The hyperspectral calibration of the EMIS can then be realized by the tungsten halogen lamp. The response linearity of the EMIS is calibrated by the solar attenuator.
Development of Radiometric Benchmark on Reflected Solar Band
The key radiometric benchmark technologies on the reflected solar band are being researched.
The prototype of the SCAR has been under development since 2015, supported by the National High-
Development of Radiometric Benchmark on Reflected Solar Band
The key radiometric benchmark technologies on the reflected solar band are being researched. The prototype of the SCAR has been under development since 2015, supported by the National High-tech Research and Development Plan (the 863 Plan).
Cryogenic Absolute Measurement
The prototype of the SCAR consists of a cryogenic absolute radiation detector, the SPTC, a measurement system, and a vacuum pump, as shown in Figure 5. We have solved the problems of radiation heat leakage, high thermal resistance connection, and cryogenic refrigeration. The sensitivity of the blackbody cavity achieved 3.95 K/mW. The blackbody cavity was designed according to Fang's effective emissivity simulation results. The blackbody cavity adopts the cylindrical cavity with a cone bottom to increase the reflection times of the incident light. The internal surface of the blackbody cavity is sprayed with black paint. Experimental measurement results illustrated that the blackbody cavity absorption is 0.999928 ± 0.000006 (1σ) at a wavelength of 632 nm [52][53][54][55].
tech Research and Development Plan (the 863 Plan).
Cryogenic Absolute Measurement
The prototype of the SCAR consists of a cryogenic absolute radiation detector, the SPTC, a measurement system, and a vacuum pump, as shown in Figure 5. We have solved the problems of radiation heat leakage, high thermal resistance connection, and cryogenic refrigeration. The sensitivity of the blackbody cavity achieved 3.95 K/mW. The blackbody cavity was designed according to Fang's effective emissivity simulation results. The blackbody cavity adopts the cylindrical cavity with a cone bottom to increase the reflection times of the incident light. The internal surface of the blackbody cavity is sprayed with black paint. Experimental measurement results illustrated that the blackbody cavity absorption is 0.999928 ± 0.000006 (1σ) at a wavelength of 632 nm [52][53][54][55]. The three-stage heat transfer structure is designed for the connection between the blackbody cavity and the SPTC. The three-stage heat transfer structure consists of a main heat sink, a temperature-controlled heat sink, and a cryogenic platform. The two-stage precise temperature controller is designed based on the Proportional Integral Differential (PID) algorithm to establish a high stable thermal environment for the blackbody cavity. The temperature stability of the main heat sink was improved from 13 to 0.5 mK. The high stability of the working environment in the 20 K temperature region was thus established. The thermoelectric repeatability of the blackbody cavity was experimentally tested. The blackbody cavity was repeatedly heated by a group of electrical The heat quantity of the cool head is exported by the pulse tube. The SPTC uses a coaxial configuration to facilitate the coupling between the cool head and the absolute radiation receiver. The pulse tube and the compressor are combined together to reduce the volume. The experimental results illustrated that the cryogenic platform temperature stabilizes at 21.8 K and the temperature stability of the cryogenic platform is 13 mK. The temperature stability of the SPTC is the main thermal noise for the blackbody cavity. The spaceborne SPTC is being developed to provide a refrigerating capacity of 350 mW at 20 K.
The three-stage heat transfer structure is designed for the connection between the blackbody cavity and the SPTC. The three-stage heat transfer structure consists of a main heat sink, a temperature-controlled heat sink, and a cryogenic platform. The two-stage precise temperature controller is designed based on the Proportional Integral Differential (PID) algorithm to establish a high stable thermal environment for the blackbody cavity. The temperature stability of the main heat sink was improved from 13 to 0.5 mK. The high stability of the working environment in the 20 K temperature region was thus established. The thermoelectric repeatability of the blackbody cavity was experimentally tested. The blackbody cavity was repeatedly heated by a group of electrical powers. The electrical power was increased gradually from 0.1 to 1 mW. The increase in electrical power was 0.1 mW. The temperature response curves were measured and the equilibrium temperatures were obtained as shown in Figure 6. The experimental results illustrated that the thermoelectric repeatability of the blackbody cavity is 0.1 mK with different electrical power inputs. The thermoelectric repeatability is optimized by the improvement in environmental stability.
powers. The electrical power was increased gradually from 0.1 to 1 mW. The increase in electrical power was 0.1 mW. The temperature response curves were measured and the equilibrium temperatures were obtained as shown in Figure 6. The experimental results illustrated that the thermoelectric repeatability of the blackbody cavity is 0.1 mK with different electrical power inputs.
The thermoelectric repeatability is optimized by the improvement in environmental stability. Therefore, the high accuracy optical power measurement can be realized via SCAR. The payload prototype of SCAR is being developed to establish the radiometric satellite benchmark.
Whole Spectral Light Source
Whole spectral light is needed for the full spectral radiance calibration of the EMIS. We adopted a tungsten halogen lamp with a smooth spectral distribution curve as the whole spectral light source.
The spectral distribution curve of the tungsten halogen lamp can be precisely obtained using an inversion algorithm according to the multispectral calibration data. The spectral radiance measurement system was established. The long time-scale variation of the spectral distribution curve has been experimentally measured. The spectral radiance attenuation of the tungsten halogen lamp was obtained as shown in Figure 7. The experimental results illustrated that the spectral radiance achieves red shift with time. The results provide key support for inversion algorithm investigation. The SCAR adopts the measurement algorithm of double electrical calibration. The non-linearity of sensitivity is compensated by a linear fitting in a small power region. The measurement repeatability of the SCAR is 0.019% for 0.4 mW of incident light. The synthetic uncertainty of the SCAR is 0.021% according to the experimental measurements and theoretical analysis. The standard uncertainty of the SCAR is 0.029% by combining the repeatability uncertainty and synthetic uncertainty. The standard uncertainty of the SCAR was tested by the indirect comparison with the cryogenic radiometer of the National Institute of Metrology (NIM).
Therefore, the high accuracy optical power measurement can be realized via SCAR. The payload prototype of SCAR is being developed to establish the radiometric satellite benchmark.
Whole Spectral Light Source
Whole spectral light is needed for the full spectral radiance calibration of the EMIS. We adopted a tungsten halogen lamp with a smooth spectral distribution curve as the whole spectral light source. The spectral distribution curve of the tungsten halogen lamp can be precisely obtained using an inversion algorithm according to the multispectral calibration data. The spectral radiance measurement system was established. The long time-scale variation of the spectral distribution curve has been experimentally measured. The spectral radiance attenuation of the tungsten halogen lamp was obtained as shown in Figure 7. The experimental results illustrated that the spectral radiance achieves red shift with time. The results provide key support for inversion algorithm investigation. The tungsten halogen lamp is similar to the high temperature blackbody. Huang established an inversion algorithm which considers the impact of emissivity [56]. When the fitting parameters (c0, c1, c2, c3, c4, c5) are determined via multispectral calibration, the spectral radiance of the full spectral band (F(λ)) was obtained, which is expressed as: The tungsten halogen lamp is similar to the high temperature blackbody. Huang established an inversion algorithm which considers the impact of emissivity [56]. When the fitting parameters (c 0 , c 1 , c 2 , c 3 , c 4 , c 5 ) are determined via multispectral calibration, the spectral radiance of the full spectral band (F(λ)) was obtained, which is expressed as: According to Formula (2), the spectral radiance of the standard lamp was rebuilt as shown in Figure 8.
The average relative error of the rebuilt spectral radiance was 0.6%. We are researching the inversion algorithm to reduce the average relative error from 0.6% to 0.3%. The tungsten halogen lamp is similar to the high temperature blackbody. Huang established an inversion algorithm which considers the impact of emissivity [56]. When the fitting parameters (c0, c1, c2, c3, c4, c5) are determined via multispectral calibration, the spectral radiance of the full spectral band (F(λ)) was obtained, which is expressed as: According to Formula (2), the spectral radiance of the standard lamp was rebuilt as shown in Figure 8. The average relative error of the rebuilt spectral radiance was 0.6%. We are researching the inversion algorithm to reduce the average relative error from 0.6% to 0.3%. The long-term spectral radiance data of tungsten halogen lamp are accumulated for researching the attenuation property. The tungsten halogen lamp can be used as the whole spectral light source for the radiometric benchmark on the reflected solar band. The inversion algorithm of the full spectral band radiance has been investigated, and needs optimization to reduce the error.
Signal-to-Noise Ratio (SNR) Analysis of EMIS
The SNR equation of the Visible Near Infrared Radiance spectrometer (VNIR) is depicted by: The long-term spectral radiance data of tungsten halogen lamp are accumulated for researching the attenuation property. The tungsten halogen lamp can be used as the whole spectral light source for the radiometric benchmark on the reflected solar band. The inversion algorithm of the full spectral band radiance has been investigated, and needs optimization to reduce the error.
Signal-to-Noise Ratio (SNR) Analysis of EMIS
The SNR equation of the Visible Near Infrared Radiance spectrometer (VNIR) is depicted by: where N e is the electrical noise, N OP is the amplifier noise, N AD is the analog-to-digital converter noise and S eVNIR (λ) is the signal electric number of VNIR, which is expressed by: where F is the focal length, τ 0 (λ) is the transmittance of the front optical system, η(λ) is the transmittance of the VNIR, ∆λ is the spectral resolution, and L VNIR (λ) is the pupil radiance of the VNIR optical system. Under typical conditions, the pupil radiance is as shown in Figure 9a. According to Formula (4), the signal electric number of VNIR is as shown in Figure 9c. The SNR of the VNIR spectrometer is better than 300, as shown in Figure 9e.
where F is the focal length, τ0(λ) is the transmittance of the front optical system, η(λ) is the transmittance of the VNIR, △λ is the spectral resolution, and LVNIR(λ) is the pupil radiance of the VNIR optical system. Under typical conditions, the pupil radiance is as shown in Figure 9a.
According to Formula (4), the signal electric number of VNIR is as shown in Figure 9c. The SNR of the VNIR spectrometer is better than 300, as shown in Figure 9e. Considering the thermal radiation noise of the instrument (S T ), the SNR equation of the Short Wave Infrared Radiance spectrometer (SWIR) is depicted by: where S T is the environment thermal noise, and S eSWIR (λ) is the signal electric number of SWIR, which is depicted by: where F is the focal length, τ 0 (λ) is the transmittance of the front optical system, η(λ) is the transmittance of the SWIR, ∆λ is the spectral resolution and L SWIR (λ) is the pupil radiance of the SWIR optical system. The pupil radiance is shown in Figure 9b at the typical condition. The signal electric number is shown in Figure 9d. The SNR of the SWIR spectrometer is better than 300 at the non-absorption region, as shown in Figure 9e. The measurement repeatability (σ) of EMIS can be calculated as: The SNR analysis results of the VNIR and SWIR spectrometers illustrated that the radiance measurement repeatability of EMIS is better than 0.3%. The optical design can meet the requirements of high-precision hyperspectral remote sensing.
Polarization Correction
The dual Babinet depolarizer is used by the telescope to reduce the polarization sensitivity of the optical system. The dual Babinet depolarizer consists of a two-series Horizontal-Vertical (H-V) depolarization device. The Muller matrix of the first H-V depolarization device is expressed as: and δ1x is depicted by: where λ is the wavelength, x is the normalized position coordinates, n o is the refractive index of O light, n e is the refractive index of X light, r is the radius of depolarizer, and β is the angle between the two wedge plates.
The second H-V depolarization device is placed behind the first H-V depolarization device. The angle between the two depolarizers is 45 • . The Muller matrix of the second H-V depolarization device is expressed as: The Muller matrix of the dual Babinet depolarizer is: Each beam incident on the depolarizer is described by Formula (5). After polar integration, the spatial average Muller matrix of a circular incident beam is expressed as: Considering that the parameters of the two H-V depolarization devices are coincident, δ = δ 1 = δ 2 , Formulas (7)-(10) can be simplified as: The Stokes vector of an arbitrary polarized light is: Then, the Stokes vector of the outgoing light is: According to the calculation formula of the degree of polarization of light wave, which is expressed as: the residual degree of polarization of the dual Babinet depolarizer is: Based on the theoretical model, the residual polarization degree of the depolarizer is designed to 2% (λ = 920 nm).
Discussion
The uncertainty of the radiometric benchmark on the reflected solar band is analyzed in this section. The uncertainty decomposition is shown in Table 2. The radiometric uncertainty of the SCAR is 0.03%. The radiance uncertainty of the EMIS is 0.3%. Thus, the uncertainty of the radiance calibration is expected to reach 0.5% to realize hyperspectral radiance measurement with less than 1% uncertainty. The laser diodes with 10 wavelengths are coupled by optical fibers. The radiometric scale of the CSAR is transferred to the TR by measuring the multiwavelength laser diodes separately. The impact factors conclude the power stability of the laser diodes (u 1 ) and the optical power measurement uncertainty of the TR (u 2 ). Then, the TR is used as the secondary standard in radiance calibration. The radiometric scale is converted from power to radiance by the TR. The spectral curve of the tungsten halogen lamp is corrected by multispectral calibration and a spectral reconstruction algorithm. The light of the tungsten halogen lamp is reflected by the diffuser, and measured by the TR and the EMIS with the same angle. Then, the EMIS can be calibrated by the TR. The uncertainty impact factors conclude the uncertainty of the optical-radiance conversion (u 3 ), radiance measurement (u 4 ), photodiode detector (u 5 ), spectral radiance stability (u 6 ), spectral radiance reconstruction accuracy (u 7 ), diffuser reflection uniformity (u 8 ), and stray light (u 9 ).
Conclusions
The radiometric benchmark on the reflected solar band has been under development to improve the accuracy and long-term stability of the hyperspectral radiance data for climate change research. The radiometric benchmark was designed based on the SCAR referencing the ground-based optical standard. The radiometric scale of the SCAR is converted and transferred by the TR. The TR is used as the secondary standard in radiance calibration. The EMIS is calibrated in-orbit to improve the measurement accuracy and long-term stability.
The SCAR is used to record highly accurate light power measurements on satellites. The SCAR is an electrical substitution radiometer, and it works at 20 K. The prototype of the SCAR consists of an absolute radiation receiver, the SPTC, a measurement system, and a vacuum pump. The experimental results illustrated that the measurement uncertainty is 0.029% for 0.4 mW incident light. The uncertainty was tested by an indirect comparison with the cryogenic radiometer of the NIM.
The EMIS measures the hyperspectral radiance of the Earth and the moon. The optical design of the EMIS consists of a telescope and a hyperspectral imaging spectrometer. The telescope adopts the four-mirror anastigmat (4MA) design to eliminate aberration. The 4MA consists of four aspheric mirrors. The dual Babinet depolarizer is used by the telescope to reduce the polarization sensitivity of the optical system. The hyperspectral imaging spectrometer adopts an Offner structure. The prism is used as the dispersion element. The SNR of the SWIR and VNIR spectrometers is improved to 300 by optimizing the optical design.
The BTC is used to realize the calibration of multispectral, whole spectral, and linearity calibration. The BTC consists of the TR, multiwavelength laser diodes, a tungsten halogen lamp, a diffuser, and a solar attenuator. The TR is used to convert the benchmark from the light power to radiance. The EMIS is calibrated by the tungsten halogen lamp and TR based on the hyperspectral curve reconstruction. The EMIS is calibrated in-orbit by the BTC to improve the measurement accuracy and long-term stability.
The uncertainty decomposition of the radiometric benchmark on the reflected solar band was analyzed. The radiometric uncertainty of the SCAR is 0.03%. The radiance uncertainty of the EMIS is 0.3%. Thus, the uncertainty of the radiance calibration is expected to reach 0.5% to realize less than 1% uncertainty of hyperspectral radiance measurements.
The results provide the theoretical and experimental basis for the payload design of the radiometric benchmark on the reflected solar band. The radiometric benchmark will significantly improve the measurement accuracy and long-term stability of hyperspectral radiance, and provide remote sensing data for climate change research. The radiometric scale can be transferred to the remote sensors on the other satellites via track crossing calibration over uniform fields. | 7,966 | 2020-09-03T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Three-dimensional imaging of single nanotube molecule endocytosis on plasmonic substrates
Investigating the cellular internalization pathways of single molecules or single nano-objects is important to understanding cell-matter interactions and to applications in drug delivery and discovery. Imaging and tracking the motion of single molecules on cell plasma membrane require high spatial resolution in three dimensions (3D). Fluorescence imaging along the axial dimension with nanometer resolution has been highly challenging but critical to revealing displacements in trans-membrane events. Here, utilizing a plasmonic ruler based on the sensitive distance dependence of near-infrared fluorescence enhancement (NIR-FE) of carbon nanotubes on a gold plasmonic substrate, we probe ~10 nm scale trans-membrane displacements through changes in nanotube fluorescence intensity, enabling observations of single nanotube endocytosis in 3D. Cellular uptake and trans-membrane displacements show clear dependences to temperature and clathrin assembly on cell membrane, suggesting that the cellular entry mechanism for a nanotube molecule is via clathrin-dependent endocytosis through the formation of clathrin-coated pits on cell membrane.
Introduction
The interactions of molecules and nanostructured materials with mammalian cells have aroused a great deal of scientific interest with implications to many biological and medical applications in drug discovery, nanomedicine and toxicology. 1 The uptake pathway and subsequent intracellular trafficking have been intensely studied and debated for a broad range of nanomaterials, including fullerene, 2 quantum dots (QDs), 3 magnetic nanoparticles 4 and carbon nanotubes (CNTs). [5][6][7] An interesting question is in regards to the cellular internalization pathways and the dependence of the pathways on the size of molecules or nanomaterials. It is 2 important to investigate the upper size limit for molecules simply inserting and diffusing through the cell plasma membrane, and the lower size limit for nanomaterials becoming too small for wrapping by a highly curved lipid bilayer to undergo endocytosis.
As an example, two distinct pathways for carbon nanotube cellular entry have been suggested, direct insertion or diffusion through the lipid bilayer, 5,[8][9][10] and clathrin-dependent endocytosis. 6,[11][12][13] In most cases, ensembles of CNTs (including bundles and aggregates of tubes) are investigated in cellular uptake experiments through imaging of fluorescent dye labels, 5,6 and have suggested endocytotic internalizations of CNTs 6,13,14 except in several reports proposing direct insertion. 5,8 At the individual nanotube level, imaging CNT-cell interactions through detecting the intrinsic near-infrared (NIR) photoluminescence 7,15 of CNTs has been performed.
However, thus far, there has been no direct three dimensional (3D) imaging of single nanotube molecules traversing cell membrane to directly observe either insertion or endocytosis pathway.
In a theoretical study, Gao et al. suggested high curvature of cargo particles such as a bare single-walled CNT (diameter down to 1 nm) could cause elevated elastic energy associated with lipid-bilayer wrapping involved in endocytosis, and an optimum radius of curvature of ~14 nm exists for endosome formation around a cylindrical particle. 16 The question of whether an individual CNT (rather than bundles or aggregates of tubes) can undergo endocytosis remains an interesting open problem.
Imaging and tracking single events of specific molecules on cell membrane can offer new insights into various mechanisms of interest to biological systems. 17 Imaging single molecule trans-membrane motion requires nanometer spatial resolution along the axial dimension. Recent progresses in 3D single particle tracking have led to various new techniques to resolve the location of a single nanoparticle with high precision and elucidate interactions between the tracked nanoparticle with its surroundings. 18 For instance, by confining illumination to a ~ 100 nm optical section with evanescent waves travelling at the cover slip-cell interface, total internal reflection fluorescence microscopy (TIRFM) has been developed to image cell membrane, nearby cytoplasm and membrane-related events. 19 Further, combined with two-dimensional (2D) super-resolution techniques, 20 high axial resolution can also be achieved by imposing zdependent asymmetry into two orthogonal axes x and y, 21,22 or using two stimulated emission depletion (STED) beams to generate a central zero in three dimensions. 23 Single-molecule axial tracking has also been realized by feedback tracking, including focusing two circularly scanning 3 laser beams at different z-depths. 24 Still, it remains highly challenging to image trans-membrane motion of a single molecule (such as a single nanotube) with sensitivity on the order of the thickness of the plasma membrane (~10 nm), thus probing the pathway and kinetics of singlemolecule transportation across the plasma membrane.
Here we report the use of an NIR fluorescence enhancement (NIR-FE) phenomenon on plasmonic gold substrates [25][26][27][28] to track cellular internalization of individual single-walled carbon nanotubes (SWCNTs) in 3D, with an axial resolution on the order of ~ 10 nm owing to the highly sensitive dependence of fluorescence enhancement to the distance between SWCNT and gold. 25 SWCNTs exhibit intrinsic band-gap fluorescence in the 0.9-1.4 m NIR II region upon excitation in the visible or NIR I (600-900 nm). [29][30][31][32][33][34][35][36][37] Recently, we observed fluorescence enhancement of SWCNTs by > 10 fold on a solution-phase grown gold films (called Au/Au films) containing patchworks of nanostructured Au islands. The fluorescence enhancement rapidly decreases as the distance between SWCNT and Au increases, with an exponential decay distance (1/e decay distance) of a mere ~6 nm. 25,27 By taking advantage of this interesting effect, we demonstrate single molecule trans-membrane imaging with high sensitivity to axial motion and elucidate the cellular internalization pathway for individual nanotubes.
Results
Plasmonic ruler based on fluorescence enhancement. We used water-soluble high-pressure CO conversion (HiPCO) SWCNTs (diameter ~0.7-1 nm) in our cell entry imaging experiments.
The nanotubes were stably suspended by mixed surfactants of 75% C18-PMH-mPEG (5kD for each PEG chain, 90kD in total) and 25% DSPE-PEG(5kD)-NH 2 (see Methods) with amine groups covalently conjugated with RGD peptide ligands capable of selectively binding to α v β 3integrin over-expressed on U87-MG glioblastoma cells. 38 Since the radius of gyration of a 5kD PEG chain is ~3.5 nm in aqueous solution, 39 a water-soluble SWCNT is wrapped by a ~7 nm thick polymer coating radially to form a cylinder with ~15 nm diameter (inset of Figure 1b), greater than the diameter of ~1 nm of bare nanotubes. Atomic force microscope (AFM) imaging of SWCNTs on glass substrate after calcination showed most nanotubes lying horizontally with an average length of ~1m (Supplementary Figure S1). We found that the length distribution did not affect the major results of our experiments (Supplementary Figure S2), suggesting a general nanoscopic 'ruler' applicable to different lengths of SWCNTs.
4 Figure 1a shows a scanning electron microscope (SEM) image of an NIR fluorescence enhancing Au/Au film comprised of gold nano-islands with abundant gaps in between. The UV-Vis-NIR extinction spectrum of the same Au/Au film (Figure 1a inset) shows a plasmon resonance peak located at ~800 nm, facilitating fluorescence enhancement in the NIR region. 40 To reveal the distance dependence of fluorescence enhancement, we coated Au/Au films with progressively thicker Al 2 O 3 by atomic layer deposition (ALD), and deposited nanotubes on the substrates by drop-drying an aqueous nanotube suspension. The surface density of nanotubes within the drop-dried spot can be seen from the AFM image in Supplementary Figure S1.
Comparing the nanotube fluorescence intensity on Au/Au films with different spacer thicknesses to that on bare glass, we found a decreasing trend of enhancement factor, from ~8-fold enhancement to almost no enhancement, as the spacer increases in thickness ( Figure 1b). The exponential fit showed a surprisingly short 1/e decay distance of ~6 nm considering the average length of our SWCNTs is much greater, ~ 1m. Since the nanotubes are long and may not lie perfectly flat on the substrate, the separation of any part of a long tube from the gold surface is greater than or equal to the thickness of Al 2 O 3 spacer. For this reason, we suggest that the measured enhancement vs. distance data corresponds to the minimum distance between Au and the length of a nanotube in the case when the nanotube is non-parallel to the Au surface.
3D tracking of single nanotubes at 37 °C. We exploited the ultra-sensitive fluorescence enhancement of SWCNTs to gold-nanotube distance for probing motion of nanotubes in the direction normal to the gold surface. We stained trypsinized U87-MG cells at 4 º C by highly diluted SWCNTs (~20 pM) with PEG and RGD functionalization and placed the cells on an Au/Au substrate kept at 4 º C. Imaging with an InGaAs 2D detector (excitation ~658 nm) in the 1.1-1.7 m emission range revealed a brightly fluorescent spot overlaying with a single cell (Figure 1c inset), attributed to a single nanotube sandwiched between Au/Au and cell membrane on the proximal side to the substrate (see Supplementary Figure S3 for further evidence of bright nanotubes sandwiched between cell and Au), exhibiting strong NIR fluorescence enhancement due to proximity to the gold surface. 27 Due to trypsinization and staining at 4 º C to prevent unwanted endocytosis before tracking started, most cells lost extracellular matrix and turned into round shape, but they still remained viability and could internalize nanomaterials from the outside once temperature allowed. 27 We identified the single nanotube by one single peak in the 5 emission spectrum [~1150 nm in Fig.1c, corresponding to (7,6) chirality] and the sinusoidal dependence 41 of fluorescence on the polarization of laser excitation (Figure 1d).
Once an individual nanotube was identified, we increased the incubation temperature from 4 º C to 37 º C in situ and tracked the fluorescence of the SWCNT over time at a frame rate of 0.3 frame/sec after the temperature stabilized at 37 º C. The (7,6) tube in Fig.1c SWCNTs have been reported to rotate freely in water. 43 However, in our case the nanotubes showed small rotations during endocytosis due to confinement and interactions with the membrane and the long 3s integration time that had averaged out the rotational effects.
It is also possible that some nanotubes were bound to the cell membrane perpendicular to the Au-cell interface initially and endocytosed vertically as suggested recently for large multiwalled nanotubes. 44 The laser excitation used in our experiments was not able to excite and 6 resolve these tubes efficiently. Also noticeable is that we had not observed a brightly fluorescent nanotube evolving into a dim state in a time scale of several seconds (the estimated rotation time if assuming free rotation in a viscous medium corresponding to the slow translational diffusivity measured), suggesting none of the SWNTs imaged had changed orientation from in the x-y plane to pointing to the z-direction during endocytosis.
Given the control experiments ruling out other possibilities of fluorescence decay, the observed fluorescence decrease of single nanotubes on cells at 37 º C hinted axial motion of nanotubes away from the Au/Au substrate due to ultra-sensitive Au-SWCNT distance dependence of nanotube fluorescence intensity. Based on the nanoscopic ruler effect of fluorescence enhancement, we rationalized that during 4 º C staining, RGD functionalized SWCNTs selectively attached to the α v β 3 -integrin receptors on the cell membrane ( Figure 2f) without entering the cytoplasm due to blocked endocytosis at 4 º C. 6 At 37 º C, endocytosis was activated with the formation of a vesicle wrapping around the surfactant-coated nanotube via clathrin-associated invagination of the plasma membrane (Figure 2g), followed by vesicle pinching-off ( Figure 2h) and clathrin uncoating to undergo the endocytotic pathway. 45 In the first ~ 20 s at 37 º C, the nanotube was sandwiched between the plasma membrane and the underlying Au/Au film (Figure 2f), exhibiting high fluorescence due to close proximity to the fluorescence enhancing surface (Figure 2b). The distance from the membrane to substrate could reach 4~8 nm 46 in less than 5 min 47 according to the Derjaguin-Landau-Verwey- Overbeek (DLVO) model. 48 Over time (from Figure 2b to 2c), clathrin assembled on the inner surface of plasma membrane and the membrane bent inwards to form clathrin-coated pits and wrapped around the nanotube. Due to invagination, the nanotube was displaced away from the Au substrate (Figure 2g), leading to reduced fluorescence enhancement (Figure 2c). At later times (from Figure 2c to 2d), the clathrin-coated pit continued to grow and finally pinched off to form a complete vesicle enclosing the nanotube. At this point the nanotube was >20 nm away from the Au substrate due to the two lipid bilayers in between (Figure 2h). This large spatial separation led to very little enhancement from the plasmonic substrate ( Figure 2d). A complete sequence of such regulated events including clathrin assembly, pits formation, budding and clathrin uncoating usually takes tens of seconds to a few minutes at 37º C, depending on the size of the cargo molecule. 45 In the case of this particular SWCNT, the time required to complete 7 fluorescence decrease was ~ 250 s (Figure 2e), within the reported time range to complete clathrin cluster assembly 49 but on the higher side, similar to the time needed to internalize relatively large cargo molecules (~400 s) such as reovirus particles (~85 nm in diameter). 45
3D tracking of single nanotubes at various temperatures.
To support that trans-membrane motion of single nanotubes at 37 º C was indeed the cause of fluorescence decrease, we imaged single nanotubes on U87-MG cells at several other temperatures, including 4 º C, 25 º C and 42 º C.
It is known that at temperatures lower than 37 º C, cell functions such as active uptake are impaired, and endocytosis is completely blocked at 4 º C. 6 On the other hand, cell functions are more active at elevated temperatures until excessive heating begins to cause damages. 50 In a typical experiment at 4 º C, we observed a single nanotube (evidenced by polarization dependence in Figure 3a where the average length of our SWCNTs was 2a = 1 m, and they were coated with long PEG chains with an overall radius of b ~7.5 nm. The as-calculated effective radius lied in the vesicle wrapping region for nanoparticles but was on the high side, as suggested by the model of Gao et al. 16 An optimum size for the cellular entry of both Au nanoparticles and SWCNTs has been reported 15,55 to be ~25 nm in effective radius, smaller than the effective radius of the SWCNTs used in the current work. Therefore, although endocytosis mediated by membrane wrapping was more favorable than direct insertion in our case, the large effective capture radius slowed down this process with a relatively high apparent activation barrier.
Block of endocytosis. Previous work has found that cellular internalization of ensembles of
CNTs involves the clathrin-dependent endocytosis pathway. 6 Potassium depletion, as well as hypertonic medium incubation, are the two methods known to perturb endocytosis by removing membrane-associated clathrin lattice. 59
Discussion
This work exploited the sensitive distance dependence of NIR fluorescence enhancement of single carbon nanotube molecules on a gold plasmonic substrate to probe ~10 nm scale transmembrane displacements through changes in nanotube fluorescence intensity, presented the first 3D tracking of individual SWCNTs and established the nanotube entry pathway to be clathrindependent endocytosis. Compared to other existing 3D single particle imaging and tracking techniques that either require sophisticated implementation or are limited with insufficient axial and/or temporal resolution, 18 the sensitive distance dependence of fluorescence enhancement based on the plasmonic effect presents a facile, inexpensive and sensitive probing of sub-10 nm distance changes in biological systems. Notably, spatial sensitivity and range of distance 11 measurements are difficult to optimize simultaneously. Our current method allows for probing subtle distance changes of <10 nm along the z-axis, but tracking displacements beyond ~20 nm becomes difficult due to the short plasmonic ruler length. It is necessary to resort to other fluorophores with different plasmonic ruler lengths, such as synthetic dyes, fluorescent nanoparticles or fluorescent proteins to extend the measurable range along the z-axis. Our preliminary results have indeed identified that the plasmonic fluorescence enhancement vs. distance profile is characteristic to each fluorophore and also depends on the type of plasmonic substrates. Suitable combinations of fluorophore-substrate could lead to a library of 'nanoscopic rulers' spanning various ranges of distances to probe molecular motions at the nanoscale.
We envisage that the reverse process of endocytosis, exocytosis 7,60 of single nanotubes or molecules could also be investigated by utilizing fluorescence enhancement phenomena on plasmonic substrates. Further, similar to trans-membrane processes, imaging with suitable plasmonic rulers may also offer a platform to decipher important biological pathways involving translocation motions of molecules inside cells. Distance information and protein conformational changes inside the cytoplasm could also be revealed by introducing plasmonic metal nanoparticles and fluorophores inside live cells.
Methods
Preparation of Au/Au films. The solution-phase Au/Au film synthesis can be found in detail in another publication of our group. 26 Briefly, a glass slide was immersed in a 25 mL solution of 3 mM chloroauric acid, to which 400 L of ammonia was added under vigorous agitation. The substrate was then allowed to sit in the seeding solution with gentle shaking for 1 min, after which it was rinsed with water. Then the substrate was submerged into a 25 mL solution of 1 mM sodium borohydride on an orbital shaker at 100 rpm for 5 min. Following a second rinsing step, the seeded substrate was soaked in a 25 mL solution of 1 mM chloroauric acid and 1 mM hydroxylamine under agitation for 15 minutes. It was then rinsed with water and soaked in 1 mM cysteamine ethanol solution for 1 h to render it hydrophilic and biocompatible. UV-Vis-NIR absorbance measurements. UV-Vis-NIR absorbance curve of the as-made Au/Au film on glass substrate was measured by a Cary 6000i UV-Vis-NIR spectrophotometer, background-corrected for any glass contribution. The measured range was 400-1200 nm. Scanning electron microscopy (SEM) imaging. Au/Au film grown on glass was imaged via SEM. Image was acquired on an FEI XL30 Sirion SEM with FEG source at 5 kV acceleration voltage.
Atomic layer deposition (ALD) process. Low temperature ALD process was used to coat the as-made Au/Au film with desirable thicknesses. Deposition was carried out on amine group functionalized hydrophilic Au/Au substrate at 100 ℃ in ~300 mTorr pure nitrogen environment where trimethylaluminum (TMA) and water vapor were used as precursors. For each cycle of ALD, water pulse was 0.5 s in duration, followed by a 40 s purging time, a 0.5 s TMA pulse and a 30 s purging time. To deposit the Al 2 O 3 layers of 5nm, 10 nm, 15 nm and 20 nm, 50, 100, 150 and 200 cycles were used, respectively. Preparation of water soluble SWCNT-PEG-RGD bioconjugate. The preparation of water soluble SWCNT fluorophores can be found in detail in another publication of our group with some modification. 34 In general, raw HiPCO SWCNTs (Unidym) were suspended in 1 wt% sodium deoxycholate aqueous solution by 1 hour of bath sonication. This suspension was ultracentrifuged at 300,000 g to remove the bundles and other large aggregates. To further remove any remaining bundles and keep only bright single nanotubes, a gradient separation was used to purify the as-made SWCNTs. The supernatant was first concentrated and then layered to the top of a 10 wt%/20 wt%/30 wt%/40 wt% sucrose step gradient, followed by ultracentrifugation at 300,000 g for 1 hour. Only the top 1 mL was retained by careful fractionation and 0.75 mg/mL of C18-PMH-mPEG (5 kD for each PEG chain, 90 kD in total) (poly(maleic anhydride-alt-1-octadecene)-methoxy(polyethyleneglycol, 5000) MW = 90,000 in total), synthesized by our group) along with 0.25 mg/mL of DSPE-PEG(5 kD)-NH 2 (1,2distearoyl-sn-glycero-3-phosphoethanolamine-N-[amino(polyethyleneglycol, 5000)], Laysan Bio) was added. The resulting suspension was sonicated briefly for 5 min and then dialyzed at pH 7.4 in a 3500 Da membrane (Fisher) with a minimum of six water changes and a minimum of two hours between water changes. To remove aggregates, the suspension was ultracentrifuged again for 1 hour at 300,000 g. This surfactant-exchanged SWCNT sample has lengths ranging from 100 nm up to 3.0 μm, with the average length of ~1 μm as shown in Supplementary Figure S1. These amino-functionalized SWCNTs were further conjugated with RGD peptide according to the protocol that has been used in our group. Briefly, an SWCNT solution with amine functionality at 300 nM after removal of excess surfactant, was mixed with 1 mM sulfo-SMCC at pH 7.4 for 2 h in PBS at pH 7.4. After removing excess sulfo-SMCC by filtration through 100-kDa filters (Amicon), RGD-SH (cyclo-RGDFC, Peptides International) was added together with tris(2-carboxyethyl)phosphine (TCEP) at pH 7.4. The final concentrations of SWCNT, RGD-SH and TCEP were 300 nM, 0.1 mM and 1 mM, respectively. The reaction was allowed to proceed for 2 days at 4 º C before purification to remove excess RGD and TCEP by filtration through 100-kDa filters. Atomic force microscopy (AFM) imaging. AFM image of the as-made SWCNT conjugate was acquired with a Nanoscope IIIa multimode instrument in the tapping mode. The sample for imaging was a drop-dried sample on glass, the same one for calibration curve measurement of the distance dependence of fluorescence enhancement, prepared by drop-drying 0.5 L watersoluble PEGylated and functionalized SWCNTs (0.45 nM) solution containing 0.05 wt% Triton X-100 on a bare glass substrate and calcination at 350 ºC for 15 min.
Distance-dependence of plasmonic fluorescence enhancement.
To determine the calibration curve, on the bare glass substrate and each Au/Au substrate with a certain thickness of Al 2 O 3 coating, 0.5 L water-soluble, PEGylated and functionalized SWCNTs (0.45 nM) solution containing 0.05 wt% Triton X-100 was drop-dried to form a uniform spot with diameter of ~2 mm. All spots were imaged in epifluorescence setup with a 658-nm laser diode (100 mW, Hitachi) focused to a 750 μm diameter spot by focusing the laser near the back focal length of a ×10 objective lens (Bausch & Lomb). The resulting NIR photoluminescence (PL) was collected using a liquid-nitrogen-cooled, 320 × 256 pixel, two-dimensional InGaAs camera (Princeton Instruments) with a sensitivity ranging from 800 to 1,700 nm. The excitation light was filtered out using an 1,100 nm long-pass filter (Omega) so that the intensity of each pixel represented light in the 1,100 -1,700 nm range. The exposure time was 100 ms. Images were taken in a 2D scanning mode, flat-field-corrected to account for non-uniform laser excitation, and then stitched automatically using LabVIEW to recover the original shape of the spot. Integral intensity for each stitched spot was done using the roipolyarray function in MATLAB software. Cell incubation and staining. The U87-MG cell medium containing 1 g· L -1 D-glucose, 110 mg· L -1 sodium pyruvate, 10% fetal bovine serum, 100 IU· mL -1 penicillin, 100 μg·mL -1 streptomycin and L-glutamine was used. Cells were maintained in a 37 ℃ incubator with 5% CO 2 . To stain cells with SWCNT-PEG-RGD, trypsinized U87-MG cells were mixed with SWCNT-PEG-RGD at a nanotube concentration of 1 nM (for imaging on glass; this higher concentration helped to find the brighter single nanotubes from a distribution including many dim tubes) or 20 pM (for imaging on Au/Au which helped to visualize the otherwise dim tubes by enhancement) at 4 º C for 1 h, followed by washing the cells with 1x PBS to remove all free conjugates in the suspension. For the hypertonic treatment, the original cell medium was completely removed and the cells were incubated in 1x PBS supplemented with 0.45 M sucrose at 37 º C for 0.5 h before trypsinized and stained. For the potassium depletion treatment, the original cell medium was removed and the cells were incubated in a potassium-free buffer containing 0.1 M HEPES (4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid), 1.4 M NaCl and 25 mM CaCl 2 at 37 º C for 0.5 h before trypsinized and stained. Note that for the K + -depletion treatment, the potassium-free buffer was used to replace 1x PBS wherever 1x PBS was needed throughout the entire procedure. High-magnification NIR photoluminescence imaging. 5 L of stained U87-MG cell suspension was transferred to 200 L of 1x PBS or potassium-free buffer (for potassium depletion treatment), placed into an 8-well chamber slide (Lab-Tek™ Chambered Coverglass, 1.0 Borosilicate). An Au/Au coated glass substrate or a bare glass chip was then placed on top. Capillary force formed a very thin layer of liquid between the substrate and coverglass, allowing a monolayer of cells residing in between. The chamber slide was kept in a temperature controlled chamber (BC-260W, 20/20 Technology, Inc.) for epifluorescence imaging. Temperature was always kept at 4 º C at the beginning for at least 5 min to ensure the cell membrane in close contact with the hydrophilic gold surface. The temperature of the imaging cell was controlled by heat exchanger (HEC-400, 20/20 Technology, Inc.), and the CO 2 gas flow was kept at 1 L/min by a gas purging system (GP-502, 20/20 Technology, Inc.). Single nanotube imaging and tracking were done using a 658-nm laser diode excitation with an 80 μm diameter spot focused by a ×100 objective lens (Olympus). The resulting NIR photoluminescence was collected using a liquid-nitrogen-cooled, 320 × 256 pixel, two-dimensional InGaAs camera (Princeton Instruments) with a sensitivity ranging from 800 to 1,700 nm. The excitation light was filtered out using an 1100 nm long-pass filter (Thorlabs) so that the intensity of each pixel represented light in the 1,100 -1,700 nm range. To initiate the active uptake of single nanotubes, the chamber temperature was increased from the initial temperature of 4 º C and stabilized at the set temperature (usually took 2 min). The camera took images to record endocytotic process continuously with an exposure time of 3 s. Matlab 7 was used to process the images for any necessary flat-field correction and extract trajectories and time courses of PL changes from the video. Figure S5. Average fluorescence intensity of SWCNTs immobilized on Au/Au substrate at different pH at 37 º C. All intensities were normalized based upon the maximum intensity at pH 7.4. SWCNTs were wrapped in PEG and conjugated with SPDP ligand (N-Succinimidyl 3-[2-pyridyldithio]-propionate) with activatable SH terminus and anchored onto the gold surface via the thiol-gold chemistry. We soaked the Au/Au substrate into SWCNT-PEG-SPDP solution to have enough thiolated SWCNTs adsorbed on the gold surface for ensemble measurement, removed the unbound nanotubes, placed the substrate in 1x PBS at 37 º C to mimic the cell imaging condition and imaged the nanotubes through ×100 objective when pH was adjusted between 5 and 9 by adding 0.1 M HCl or 0.1 M NaOH (as the pH of endosomes and lysosomes has been reported as around 5) 42 . Error bars were obtained by taking the standard deviation of all 319×256 pixels in the field of view (69 m by 86 m). | 6,161.4 | 2012-01-01T00:00:00.000 | [
"Biology",
"Chemistry",
"Physics"
] |
DNA virome composition of two sympatric wild felids, bobcat (Lynx rufus) and puma (Puma concolor) in Sonora, Mexico
With viruses often having devastating effects on wildlife population fitness and wild mammals serving as pathogen reservoirs for potentially zoonotic diseases, determining the viral diversity present in wild mammals is both a conservation and One Health priority. Additionally, transmission from more abundant hosts could increase the extinction risk of threatened sympatric species. We leveraged an existing circular DNA enriched metagenomic dataset generated from bobcat (Lynx rufus, n = 9) and puma (Puma concolor, n = 13) scat samples non-invasively collected from Sonora, Mexico, to characterize fecal DNA viromes of each species and determine the extent that viruses are shared between them. Using the metaWRAP pipeline to co-assemble viral genomes for comparative metagenomic analysis, we observed diverse circular DNA viruses in both species, including circoviruses, genomoviruses, and anelloviruses. We found that differences in DNA virome composition were partly attributed to host species, although there was overlap between viruses in bobcats and pumas. Pumas exhibited greater levels of alpha diversity, possibly due to bioaccumulation of pathogens in apex predators. Shared viral taxa may reflect dietary overlap, shared environmental resources, or transmission through host interactions, although we cannot rule out species-specific host-virus coevolution for the taxa detected through co-assembly. However, our detection of integrated feline foamy virus (FFV) suggests Sonoran pumas may interact with domestic cats. Our results contribute to the growing baseline knowledge of wild felid viral diversity. Future research including samples from additional sources (e.g., prey items, tissues) may help to clarify host associations and determine the pathogenicity of detected viruses.
Introduction
Infectious diseases are playing a critical role in wildlife population conservation (Lewis et al., 2017). Through reducing the survival and reproduction of individuals (Woodroffe, 1999;Deem et al., 2001), parasites and infectious diseases can generate trophic cascades (Frainer et al., 2018;Baruzzi et al., 2022), contributing to significant wildlife population declines, affecting multiple species in a community (Pedersen et al., 2007). Furthermore, the transmission of generalist parasites or infectious agents to threatened species can increase the extinction risk of wild animals (Daszak et al., 1999;Woodroffe, 1999;Lafferty and Gerber, 2002;Pedersen et al., 2007). This is evident in carnivores (Woodroffe, 1999;Lafferty and Gerber, 2002;Pedersen et al., 2007), which appear to be particularly susceptible to long term impacts of epizootic diseases at the population-level (Malmberg et al., 2021).
Pathogen surveillance and understanding the viral diversity and potential disease threats associated with more abundant host species may reveal emerging infectious diseases and help prevent future outbreaks and subsequent potential loss of sympatric threatened species. More than just a conservation concern, an increased understanding of the viral diversity in wildlife and the potential for spillover to other species is essential for effectively managing future outbreaks (Olival et al., 2017;Carroll et al., 2018) and is a One Health priority (at the nexus of human, animal and environmental health; Mazzamuto et al., 2022).For example, a study on juvenile and adult red foxes (Vulpes vulpes) in peri-urban areas in Croatia noted the dominant presence of fox picobirnavirus and parvovirus in fecal samples, as well as a novel fox circovirus (Lojkić et al., 2016). With red foxes being the most abundant carnivore in the Northern Hemisphere and the novel fox circovirus being very similar to circoviruses isolated from diseased dogs in USA and Italy, it seems possible to posit that red foxes could serve as wildlife virus reservoirs (Lojkić et al., 2016). Such virome characterizations of carnivores are particularly relevant to advancing One Health priorities, with the order Carnivora being ranked among the top five mammalian orders as a source of zoonotic pathogens (Keesing and Ostfeld, 2021). Monitoring wildlife diseases (and particularly wildlife viromes) using non-invasive fecal sample collection is a nascent field (Pannoni et al., 2022;Mazzamuto et al., 2022;Schilling et al., 2022) and could bridge the gap between passive (e.g., voluntary disease reporting) and active wildlife disease surveillance (e.g., submission of samples from hunted game; Cardoso et al., 2022).
The sociality of a species can also affect the spread of infectious diseases (Sah et al., 2018). In group living or social species, group size was thought to influence disease transmission dynamics (Kappeler et al., 2015;Sah et al., 2018), with larger groups and animals living at higher population densities having higher parasite prevalence and burden (Patterson and Ruckstuhl, 2013;Albery et al., 2020). However, recent research suggests that animals can spatially organize their groups to minimize infections (Albery et al., 2020) and that the interactions within a social group and not only its size drive the spread of infectious diseases (Sah et al., 2018).
Social interactions are not limited to group living, as some relatively solitary species have been shown to exhibit complex social networks with a variety of social partners and interactions (Sah et al., 2018). Thus, despite many felids being solitary, the use of shared environmental resources, as well as occasional conflict or predation within and among felid species, creates opportunities for inter-and intra-specific pathogen transmission.
The Sonoran desert is a unique ecoregion home to four species of wild solitary felids: two being more common, bobcat (Lynx rufus) and puma (Puma concolor), and two listed as endangered, the ocelot (Leopardus pardalis) and jaguar (Panthera onca). Currently, little is known about the exact disease threats and viral diversity associated with these felids. As such, Sonoran desert felids provide both the conservation need and a unique opportunity to assess levels of viral diversity present within and shared between these populations of closely related sympatric host carnivore species. Bobcats and pumas, as the more abundant felids, are easier to sample, and surveys of viral diversity in these species may serve as a proxy for virome characterization or indication of potential viral spillover for the rarer felids.
In this paper, we leveraged a collection of fecal samples of wild bobcat and puma from Sonora, Mexico to determine (1) what DNA viruses are present in wild felids in Sonora and (2) the similarity of fecal DNA viromes between these sympatric species. These data will contribute to the growing field of wildlife viromics and to our understanding of the viral diversity present in wild mammalian species.
Sample collection and processing
Bobcat and puma scat samples were collected from Sonora, Mexico, between 2012 and 2014 ( Figure 1). Scats possibly of felid origin, were collected only if determined to be fresh, based on color, moisture, smell, and texture. Host DNA was extracted using Qiagen's DNeasy Blood and Tissue Kit, and species identification was performed through Sanger sequencing of a region of the mitochondrial cytochrome B gene (Verma and Singh, 2002;Naidu et al., 2011;Cassaigne et al., 2016), as previously described for these samples (Payne et al., 2020). Thirteen puma and nine bobcat scat samples were randomly selected for DNA virome analysis (Payne et al., 2020). Separate viral DNA extraction from scat cross-sections, circular viral DNA amplification, and library preparation followed the protocol in Payne et al. (2020). Sequencing libraries were generated using the TruSeq Nano DNA Sample Preparation kit and sequenced on an Illumina HiSeq 4,000 (2 × 100 bp) at Macrogen Inc. (Korea) in 2018.
Bioinformatics and analyses
The metaWRAP pipeline v. 1.3.2 (Uritskiy et al., 2018) was used to process raw sequencing reads for comparative metagenomic analysis. Within the metaWRAP read_qc module, reads were trimmed using default parameters with Trim Galore v. 0.5.0 (Krueger, 2022) as a wrapper around Cutadapt v. 1.18 (Martin, 2011), human contamination was removed with BMTagger v. 3.101 (Rotmistrovsky and Agarwala, 2011) using the hg38 human genome assembly (GCA_000001405.15), and read quality was assessed with FastQC v. 0.11.8 (Andrews, 2010). Untrimmed reads with human contamination removed were deposited in the Sequence Read Archive. Reads from all samples were co-assembled using metaSPAdes v. 3.14.1 (Bankevich et al., 2012;Nurk et al., 2017) with default parameters outside of metaWRAP, and the scaffolds were then used in the metaWRAP assembly module, allowing for assembly of unused reads with MEGAHIT v. 1.1.3 (Li et al., 2015). We elected to perform one co-assembly across all samples to achieve Frontiers in Ecology and Evolution 03 frontiersin.org better detection of rare taxa and allow for direct comparison of bobcat and puma virome composition. The metaWRAP Kraken 2 module was used to run Kraken 2 (Wood et al., 2019) and generate Krona plots (Ondov et al., 2011) to assess taxonomic composition of contigs in the final assembly, reads from individual samples (subset to 1,000,000 reads), and all pooled reads for each host species, using the premade Kraken 2 viral database (v. 9/8/2022 available at https://benlangmead. github.io/aws-indexes/k2). Contig abundances (in genome copies per million reads) were estimated with Salmon v. 0.13.1 (Patro et al., 2017) using the metaWRAP quant_bins module, and contig taxonomy was assigned using a megablast search against the NCBI nucleotide database (available at https://ftp.ncbi.nlm.nih.gov/blast/db/, downloaded 10/20/2022) using blast v. 2.13.0 (Altschul et al., 1990) outside of metaWRAP for v5 database compatibility, with the output further processed by the metaWRAP classify_bins module for pruning blast hits and assigning taxonomy with Taxator-tk (Dröge et al., 2015).
CheckV v. 1.0.1 (Nayfach et al., 2020) was used to assess quality and completeness of viral contigs. Contigs assigned as viral (and not designated as phages) by classify_bins and which were determined to be complete or high-quality (>90% complete) viral genomes with CheckV were retained for community analyses in R. All downstream analyses were repeated with a second set of viral contigs (>66.7% genome completeness), referred to as our "lower-completeness" set (resulting figures in Supplementary Figures S31-S37). The CheckV quality summary, final taxnomic assignment, top blast hit, and abundance per sample of each contig in the high and lower completeness set can be found in Supplementary Table 1. The R package vegan v. 2.5-7 (Oksanen et al., 2020) was used to conduct viral community ecology analyses on the contigs representing high-quality viral genomes. Alpha diversity metrics (species richness, Simpson Diversity Index, and Shannon Diversity Index) were calculated for each sample, and Wilcoxon rank sum tests were used to determine if significant differences in alpha diversity exist between bobcats and pumas. Beta diversity metrics were calculated among all pairs of samples, both considering contig abundances (Bray-Curtis Dissimilarity Index; abundances were in genome copies per million reads, as estimated by quant_bins) and considering contig presence/absence (Jaccard distance). Kruskal-Wallis rank sum tests were used to determine if beta diversity differed significantly between host species pairs, and Dunn's test was used post hoc to determine which comparisons differed significantly using the FSA R package v. 0.9.3 (Ogle et al., 2022).
Vegan was also used to conduct ordination analyses non-metric multidimensional scaling (NMDS) and principal coordinates analysis (PCoA) with both Bray-Curtis and Jaccard distance matrices, and visualizations were generated using ggplot2 (Wickham, 2016) and ggord (Beck, 2022). To further assess differences in virome composition due to host species, permutational multivariate analyses of variance (PERMANOVA) and analyses of similarity (ANOSIM) were conducted Frontiers in Ecology and Evolution 04 frontiersin.org using Bray-Curtis and Jaccard distance matrices with vegan's adonis2 function and anosim function, respectively. To assess correlations between geographic Euclidean distance and beta diversity, separate Mantel tests were performed using Bray-Curtis and Jaccard distance matrices.
MetaWRAP
We obtained 4,044 contigs of 1,000 to 164,510 nts in length after co-assembly (prior to identification of viral sequences), and Krona plots generated by the Kraken 2 module show the viral taxa represented in the assembly (Supplementary Figure S1), bobcat reads (Supplementary Figure S2), puma reads (Supplementary Figure S3), and individual sample reads (Supplementary Figures S4-S25). Of the portion of the final assembly matching the Kraken 2 viral database (with contig taxonomy weighted by contig length and coverage), 48% represented viruses within Monodnaviria, with 28% identified as belonging to circoviruses ( Figure 2). Viruses within the family Anelloviridae comprised 16% of the viral portion of the assembly, and phages within the class Caudoviricetes comprised 33%, reflecting viruses likely associated with enteric bacteria of the felids. When reads from each sample of the same host species were pooled (Figure 2), 96% of bobcat viral reads matched to those of the viruses in the family Circoviridae, while puma viral reads largely represented viruses in the families Genomoviridae (51%), Retroviridae (felispumavirus, 14%), Anelloviridae (8%), and class Caudoviricetes (25%). The computational analysis identified 38 complete genomes and 16 additional high-quality viral contigs (using CheckV and classify_bins taxonomy results, not including phages), which were used for downstream analyses. We included an additional 34 medium-quality contigs in our "lower-completeness" set.
Alpha and beta diversity
Species richness, Shannon Diversity Index, and Simpson Diversity Index were the alpha diversity metrics calculated for all samples.
We observed a wider range of species richness values for pumas ( Figure 3A), and pumas had higher median values for each of these metrics (Supplementary Figures S26, S27), although differences in alpha diversity metrics among bobcats and pumas were not significant (richness: p = 0.1677; Shannon: p = 0.1264; Simpson: p = 0.1264). However, using our "lower-completeness" set of viral contigs, we found richness differed significantly between pumas and bobcats (p < 0.05; Figure 3C). Median beta diversity values were greatest between pumas and bobcats and lowest among pumas ( Figure 3B and Supplementary Figure S28), and both Bray-Curtis and Jaccard distances were significantly different among different host species pairings (p < 0.01 for both distances), with significant differences among puma-puma pairings and other host species pairings (p-adj < 0.05 and p-adj < 0.01 with Bray-Curtis and Jaccard distances, respectively, for puma-bobcat pairings and p-adj < 0.05 with both distances for bobcat-bobcat pairings). However, using our "lowercompleteness" set of contigs and Jaccard distance ( Figure 3D), significant differences among host species pairs were explained by puma-bobcat pairings having significantly higher beta diversity than puma-puma (p-adj < 0.01) and bobcat-bobcat pairings (p-adj < 0.05).
Effect of host species and geographic distance
The NMDS and PCoA plots reveal an extensive overlap between bobcat and puma viral communities (Figure 4 and Supplementary Figures S29, S30), although pumas and bobcats with the highest richness levels tended to cluster separately in the NMDS based on Bray-Curtis distances (stress = 0.155), and both PCoA and NMDS (stress = 0.211) based on Jaccard distances revealed puma samples outlying the region of overlap between the two species clusters. The PERMANOVAs revealed a significant effect of host species on community composition using Jaccard distances (p < 0.05, R 2 = 0.09418), but not Bray-Curtis distances (p = 0.497, R 2 = 0.04395). ANOSIM revealed significant differences between host species using both Jaccard (p < 0.05, R = 0.1901) and Bray-Curtis distances (p < 0.05, R = 0.1441). Differences between host species remained significant using the more complex "lower-completeness" set using Jaccard distances (PERMANOVA: p < 0.01, R 2 = 0.12081; ANOSIM: p < 0.01, R = 0.2639) and ANOSIM with Bray-Curtis distances (p < 0.05, R = 0.1513; PERMANOVA: p = 0.649, R 2 = 0.0398). Mantel tests did not reveal significant correlations between geographic distance and beta diversity for either Bray-Curtis (p = 0.813, r = −0.1011) or Jaccard distances (p = 0.165, r = 0.1191).
Discussion
Determining the viral diversity present in wildlife is essential for the management and control of emerging infectious diseases. Owing to the large potential for zoonotic spillover, characterizing mammalian viromes is vital to achieving One Health priorities. In non-invasively collected scat samples from wild pumas and bobcats, we observed diverse circular DNA viruses, including circoviruses, genomoviruses, and anelloviruses. Given that rolling circle amplification (RCA) was performed prior to sequencing, we expected to find a high proportion of circular DNA viruses present (although non-circular DNA is not Bar plots showing taxonomy of reads matching viral database from bobcat and puma samples, as well as taxonomy of contigs matching viral database. Contigs are from the final assembly generated by metaSPAdes and MEGAHIT, and metaWRAP weights taxonomy results of contigs based on length and coverage. Frontiers in Ecology and Evolution 05 frontiersin.org excluded from sequencing). Specifically, one of the complete viral genomes, with particularly high prevalence in two bobcat samples, was identified as Sonfela circovirus 1, the genome of which was originally identified in these two samples (Payne et al., 2020). Additionally, the high proportion of contigs with homology to anelloviruses was also expected, as the diversity of anellovirus genomes isolated from these samples has been previously characterized (Kraberger et al., 2021). Interestingly, we documented the presence of feline foamy virus (FFV) within our dataset as a 4.4 kb contig primarily derived from one puma sample (72% of the reads). This suggests that this is an integrated FFV and was detected through carry-over of host DNA. FFV, a contact-dependent, multi-host adapted retrovirus, is known to cause lifelong infection in both domestic and wild felid species (Linial, 2000;Dannemiller et al., 2020), including pumas and domestic cats (Felis catus; Kechejian et al., 2019;Kraberger et al., 2020). Most studies document prevalence in domestic cats, with comparatively few detecting the presence of FFV in wild felids .
Pumas have shown a high prevalence of FFV and a high frequency of intraspecies transmission in other studies (Kechejian et al., 2019;Dannemiller et al., 2020;Kraberger et al., 2020). Additionally, frequent cross-species spillover of FFV has been documented from domestic cats to pumas due to depredation events , and the presence of the virus in a Sonoran puma may indicate that interactions between wild and domestic felids have occurred. However, it is also possible that this was a result of social spillover from another puma. While FFV is generally considered apathogenic, clinically silent infection has been associated with histopathological changes in domestic cats (German et al., 2008;Ledesma-Feliciano et al., 2019), and further research is needed to clarify implications for feline health.
Our analyses also indicate extensive overlap between bobcat and puma DNA viral communities. The broader diversity of viruses observed in pumas may result from exposure to a wider variety of prey species. Pumas have been observed to predate a broad range of taxa, including ungulates, mesocarnivores, and small mammals (Cassaigne Violin plots showing alpha and beta diversity. (A) Species richness of bobcats and pumas, using contigs representing high-quality or complete viral genomes. (B) Jaccard distances between bobcats, between pumas and bobcats, and between pumas, using contigs representing high-quality or complete viral genomes. Mean Jaccard distance differed significantly between puma-puma pairings and other groups (p-adj < 0.01 for puma-bobcat pairings and p-adj < 0.05 for bobcat-bobcat pairings). (C) Species richness of bobcats and pumas, using viral contigs in the "lower-completeness" set. Mean species richness differed significantly between bobcats and pumas (p < 0.05). (D) Jaccard distance between bobcats, between pumas and bobcats, and between pumas, using viral contigs in the "lower-completeness" set. Mean Jaccard distance differed significantly between puma and bobcat pairings and other groups (p-adj < 0.01 for puma-puma pairings and p-adj < 0.05 for bobcat-bobcat pairings).
Frontiers in Ecology and Evolution 06 frontiersin.org Meyer et al., 2020), whereas bobcats are known to primarily specialize on rodents and lagomorphs (Hass, 2009;López-Vidal et al., 2014;Meyer et al., 2020). Furthermore, apex predators are known to experience greater bioaccumulation of viruses (Malmberg et al., 2021). Pumas are also known to occasionally prey upon smaller felids such as bobcats (Hass, 2009;Prude and Cain III, 2021), and previous studies have documented pathogen transmission from bobcat to puma, putatively through competitive contact or depredation (Franklin et al., 2007;Lee et al., 2017;Malmberg et al., 2021). While such interactions may facilitate viral transmission to Sonoran pumas, the presence of shared viral taxa in bobcats and pumas does not necessarily indicate cross-species transmission. As both pumas and bobcats are known to prey on small mammals, and bobcats have been known to predate deer on occasion (Leopold and Krausman, 1986;McKinney and Smith, 2007), shared viral taxa may instead reflect dietary overlap or shared environmental resources, such as water sources. For example, we identified complete viral genomes matching the rodent anelloviruses Neotofec virus NeonRodL2_5 and Neotofec virus NeonRodL2_6 in bobcats and a partial genome matching Dipodfec virus NeonRodF1_131 (within the phylum Cressdnaviricota) in pumas. These viruses were first isolated from white-throated woodrats (Neotoma albigula) and Merriam's kangaroo rat (Dipodomys merriami), respectively, which could be suitable prey items for both felid species (Meyer et al., 2020). Alternatively, shared taxa that infect these felids may be co-evolved within each species. However, we were unable to determine the strains present within each sample (and species) since contigs were generated by co-assembly. Future research including samples from other sources (e.g., prey items, tissue samples) might help to clarify such host associations. We found that geographic distance among scat samples did not have a significant effect on DNA virome composition at this spatial scale. Our results support previous findings of low levels of spatial autocorrelation of pathogen exposure in pumas and bobcats in Florida, Colorado, and California (Gilbertson et al., 2016). Instead, DNA virome composition appears to be shaped by a combination of host species dependent and independent factors, with extensive virome composition overlap observed between host species using ordination analyses, while PERMANOVA and ANOSIM revealed small yet significant effects of host species. Although pseudoreplication is not suspected, the possibility of some individuals being represented by more than one sample may contribute to the observed effect of host species on DNA virome composition.
Despite providing insight into the possible interactions between host and viral communities, further research is needed to clarify the implications of these results for Sonoran felid health. Fecal virome characterization of non-invasively collected scat samples includes many novel and known viruses derived from the scat depositors as well as prey species and environmental contacts, so the host associations and pathogenicity of each virus in the metagenome is unknown. Of the major viral taxa identified here, viruses in the family Circoviridae may be of most interest in terms of bobcat and puma health. Circoviruses are known as the smallest animal pathogens that replicate autonomously (Fisher et al., 2020). They are found in a number of species [freshwater fish, birds, bats, chimpanzees, minks, elk, and humans (Rosario et al., 2017;Fisher et al., 2020)], although their presence is often subclinical (Fisher et al., 2020). In some birds, circoviruses are considered potentially immunosuppressive (Todd, 2000), suggesting that concurrent co-infections could increase the symptoms and severity of disease (Fisher et al., 2020). Some circoviruses are known to cause clinical disease such as hemorrhagic gastroenteritis in dogs (Anderson et al., 2017;Kotsias et al., 2019), often fatal postweaning multisystemic wasting syndrome in pigs (Chae, 2005;Segalés et al., 2005) and virus (BFDV) in birds [beak and feather disease virus (Todd, 2000)]. The identification of potentially disease-causing circoviruses in bobcats and pumas is of concern for both wild felid population health and conservation. With their A B
FIGURE 4
NMDS plots generated using (A) Bray-Curtis dissimilarity (stress = 0.155) and (B) Jaccard distance (stress = 0.211), using contigs representing high-quality or complete viral genomes. Each point represents the viral community composition within a specific sample. Points are colored by host species, and point size is proportional to species richness of each sample. Ellipses corresponding to the two host species groups are shown at the 95% confidence level. Axes (MDS1 and MDS2) correspond to the two axes of variation.
Frontiers in Ecology and Evolution 07 frontiersin.org propensity for displaying tissue tropism (Todd, 2000), transmission of these circoviruses from more abundant/common species to threatened wild felids could result in catastrophic population declines, particularly if these felids are already immunocompromised from co-infections. Furthermore, the high abundance of Sonfela circovirus 1 reads in two bobcat samples may suggest an active infection. However, with these viruses being identified from non-invasively collected scat samples, further study is needed to clarify host associations and consequences for feline health. Although important feline pathogens present in the scats may have been missed through analysis of the circular viral DNA enriched dataset, these results contribute to the documentation of viral diversity in wild felids. Future studies coupling the characterization of broader virome composition and disease dynamics across sympatric populations of wild mammals could help with the identification of viral threats to wildlife, as well as potentially to humans and domestic animals.
Data availability statement
The datasets presented in this study can be found in online repositories. The name of the repository and accession number can be found at: NCBI; PRJNA922235. | 5,608.4 | 2023-03-09T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
APPLICATION OF DEPTH-FIRST SEARCH METHOD IN FINDING RECIRCULATION IN MINE VENTILATION SYSTEM
Recirculation of airflows in a mine ventilation system can cause concentration of contaminated air which results in an unsafe working environment for people working in an underground mine. Due to the difficulty in finding the recirculation of airflows in complicated mine ventilation system, depth-first search method is proposed to find the recirculation of airflows in complicated mine ventilation systems. The searching procedure of a simply depth-first search method is introduced briefly. Then the depth-first search method is modified for searching recirculation of airflows in complicated mine ventilation systems. The proposed method is implemented through MATLAB in the form of storing ventilation information in a matrix. A few recirculation of airflows are found and are confirmed by the mine ventilation simulation result. It is concluded that the proposed method is a valuable tool in finding recirculation of airflows in a complicated mine ventilation system.
INTRODUCTION
Nowadays, booster fans are widely employed in an underground mine, which help to balance the air pressure and quantity distribution to provide a suitable working environment for human being [1,2].However, booster fans installed underground can cause recirculation of airflow in the mine ventilation system.Recirculation in a mine ventilation system makes it possible to induce concentration of contaminated air, including dust and gas, which make the people who work underground taking a risk.Therefore, it is imperative to find the reticulation of airflows in a mine ventilation system and to control them.Figure 1 shows an example of recirculation of airflow in an underground mine.All the roads of the mine are depicted in the figure 1 and the red arrows represent the directions of the airflows.A fan is installed underground as indicated by the green arrow.As can be seen, there is a recirculation of airflow as indicated in the white box.The air in the white area is circulating in a few connected roads instead of all the air flowing to the return roads, which significantly reduces the volumetric efficiency.It is easy to find the recirculation of airflows in a simple ventilation system as shown in figure 1 while it is difficult to find the recirculation in a complicated ventilation system which has dozens of booster fans installed underground and hundreds or thousands of roads.
Fig. 1 -An example of recirculation airflow in an underground mine
In terms of mine ventilation, many numerical or experimental researches have been done [3][4][5][6][7].However, as for the recirculation of airflows, rare study has been focused on it.With the fast development of the computer technology, researchers have begun to study the recirculation of airflows on basis of computer technology [8].Since it is of great importance to eliminate the recirculation of airflows and it is difficult to find the recirculation of airflows in a complicated ventilation systems, this paper provides a mathematical method, i.e. depth-first search method, for finding the recirculation in a complicated mine ventilation system and applies the method in a copper mine to search the recirculation of airflows.
DEPTH-FIRST SEARCH METHOD
In this section, the simple depth-first search method is introduced first.Then the simple depth-first search method is modified for suitable finding the recirculation of airflows in a complicated mine ventilation system.
Simple Depth-First Search Method
Depth-first method has been applied in many fields [9][10][11][12][13][14].Since the simple depth-first method had been introduced in detail by Rao and Kumar (1987) [11], the method herein is introduced briefly and the modified depth-first search method for finding the recirculation of airflows in a complicated mine ventilation system is introduced in detail.
Depth-first search method is used to find a path in a directed graph from an initial node to a goal node [11] .The search begins by expanding the initial node, i.e. by generating its successors, and ends when a goal node is found [11] .Consequently, the solution path is constructed by following the path from the initial node to the goal node.Figure 2 illustrates a step-tree generated according to the depth-first method.In figure 2, the numbers in the boxes indicate the order of the boxes generated while the expanding order of the boxes follow the alphabet order as shown in the boxes.As can be seen in figure 2, the first box, i.e.A1, on the top is the initial node which generated three nodes, i.e. 2B, 3I and 4L.Then 2B expands and generates two boxes, 5C and 6D.
Fan Recirculation
After that, the tree generates and expands following the previous rules.More details can be found in the reference [15].
Modified Depth-First Search Method for finding Recirculation in Mine Ventilation System
The simple depth-first search method mentioned above is modified for searching recirculation of airflows in complicated mine ventilation system.The basis searching rules follow simple depth-first search method while a few new rules are added for searching.
The procedure for the modified depth-first method is as follows.
1) The initial node is put in a stack first.
2) The search begins by searching adjacent nodes of the initial node along the arrow directions.
3) The search will visit the searched adjacent node with smaller number if there are more than one adjacent nodes and put the visited number in the stack.
4) The search repeats the rule 2 and 3 untill the initial node is visited again.Then the visited path represents one recirculation.
5) The search will pop backwards to find a visited node which has unvisited adjacent node.
6) The search will repeat step 2 to 5 until all the recirculation paths beginning from the initial node are found.
Figure 3 shows the initial state of a network.The search will start from node 1. in Figure 3, the node 1 and node 1`represent the same node.We put the initial node in two circles for conveniently drawing and explaining the searching procedure in the network.As illustrated in Figure 4, the initial node, i.e. node 1, is put into the stack first according to rule 1.Then, the depth-first search method begins to search the adjacent nodes of node 1 along the arrow direction, i.e. the direction of airflow in a ventilation system, according to rule 2. Two nodes are found, i.e. node 1 and node 3.
The node 2 is visited and put the number into the stack according to rule 3, i.e. visiting the smaller node with smaller number.After that the node 1 and node 2 are marked using red color and yellow color, respectively.In this paper, the red color represents initial node or visited nodes while the yellow color represents the current visiting nodes.Additionally, the green color indicates unvisited nodes.
Figure 5 shows the first recirculation found in the network.After node 2 is visited, the search then continues to find the adjacent nodes of node 2, i.e. node 4 and node 5. Since the search will choose the node with smaller number to visit first, the node 4 is visited put into the stack.The rule 2 and rule 3 are repeated until the node 1` is visited.In the next step, the node 1`, i.e. node 1.Thus a recirculation of airflow is found and the numbers are put into the stack.Since the recirculation of air flow is found, the search will pop backwards to found a visited node that has unvisited adjacent nodes.As shown in Figure 6, the search pop back to node 4 and finds that there are two adjacent nodes, i.e. node 5 and node 7, which are not visited yet.
Fig. 9 -Search from the initial node again
According to rule 5, the search will pop backwards and find that only the initial node has unvisited node.Therefore, the search continues to go from node 1(Figure 8.The node 3 is visited.Since there are not unvisited adjacent nodes for node 3, the search will pop back again.The search pops back to node 1 and the node 1 does not have any unvisited nodes (Figure 9).Therefore, the search has finished and two recirculation of airflows are found.
APPLICATION OF THE MODIFIED METHOD IN FINDING THE RECIRCULATION IN VENTILATION SYSTEM
In this section, the proposed modified depth-first search method is applied in a complicated copper mine ventilation system for finding recirculation of airflows near at the booster fan areas.
All the rules mentioned in Section 2 are made for suitable finding recirculation in ventilation system.The initial node in ventilation system represents the place where the booster fan is.Since the recirculation of airflows is mainly caused by the booster fans installed in underground mine, we mainly focus on finding recirculation of airflows around those areas where booster fans are installed.In section 2, the node with smaller number has the priority to be visited.This is because in our ventilation system the air flow from the ends with smaller number to the ends with bigger number.In another word, the search will follow the directions of airflow.In Section 2, when the initial node is visited again, a recirculation of airflow is found.This means the air flow from the intake air road and to the intake road again instead of passing the return road to the outside.Then, it represents a recirculation of airflow.
According to rules of the depth-first method, application of the method needs to know the airflows of the ventilation system, the position of booster fans and number of the two ends of roads.With the development of the computer technology, a lot of of mine ventilation software are available at this moment.In our research, WENTSIM [16] is adopted for simulating the ventilation system and adding numbers to the two ends of each road.
The implementation of the depth-first method is then use the MATLAB [17] software, which is good at storing matrixes.All the roads will be shown in figure induced by the MATLAB software.The recirculation of airflows in the ventilation system will be marked as red color.10. Figure 12 shows the corresponding recirculation circled in Figure 11.It is confirmed that the depth-first search method can find the recirculation of air flows.Other recirculation of airflows illustrated in Figure 11 is also compared with the mine simulation results and all the recirculation of airflows found by the depth-first method are correct.
DISCUSSION
The implementation of the depth-first search method for finding recirculation of airflows in mine ventilation system in this research is still a little complicated.The ventilation system should be simulated for getting all the airflows information firs.The simulated airflows information is then stored in a matrix.Finally, the depth-first search method is used to search recirculation of airflows in the network stored in the matrix.
More work need to be done to simplify this process.The depth-first search method could be combined in the mine simulation software.Thus, while simulating the ventilation system, the recirculation of airflows is also denoted on the ventilation network.
Although recirculation of airflows mainly occur at the booster fan areas, there are also recirculation of airflows in other places.Therefore, it is necessary to study a method to find those recirculation of airflows.
CONCLUSION
Since the recirculation of airflow in a complicated mine ventilation system makes the contaminated air concentrated and people who work in such an environment take a risk, depth-first method is introduced to find the recirculation airflows in a complicated mine ventilation system for controlling or eliminating the recirculation of airflows.The simple depth-first method is introduced first.Then the modified depth-first method for finding recirculation of airflows is then introduced in detail.The proposed method is implemented through MATLAB by storing the ventilation simulation results in the form of matrix.After that, a copper mine ventilation system is introduced and the proposed method is applied to find recirculation of airflows in this complicated copper mine ventilation system.A few recirculation of airflows are found and compared with the simulation results.The compared results show that the searched recirculation of airflows by the depth-first search method agrees with the simulation results.
It is concluded that the depth-first search method is able to find recirculation in complicated mine ventilation system.Although depth-first search method shows its capability in finding recirculation in complicated copper mine, more work still need to done to simplify the application.
Figure3shows the initial state of a network.The search will start from node 1. Figure3represents part of ventilation network in which all the roads are connected.It should be noted that,
Fig. 3 -
Fig. 3 -Initial state of a network
Fig. 8 -
Fig. 8 -Popping back to the initial node
Fig. 10 -
Fig. 10 -Mine ventilation system of a copper mine
Figure 10
Figure10shows a copper mine ventilation system.The blue lines represent the intake air roads while the red lines indicate the return road for the contaminated air.The fan is denoted by the fan symbols.The copper mine currently has thousands of roads and 38 fans installed underground.The simulated results of mine ventilation system, i.e. the airflow directions and the denoted numbers on two ends of each roads are stored in the MATLAB in the form of matrix.Then the depth-first search begin to search from the initial nodes, i.e. the position of a booster fan, and follows the directions of airflows, i.e. from smaller number to big number.All the nodes which
Figure 11
Figure11illustrates the searched results in which the recirculation airflows are denoted using red color.It can be seen that a few recirculation of air flow are found.The recirculation of airflow at the circled area is compared with the ventilation simulation result in Figure10.Figure12shows the corresponding recirculation circled in Figure11.It is confirmed that the depth-first search method can find the recirculation of air flows.Other recirculation of airflows illustrated in Figure11is also compared with the mine simulation results and all the recirculation of airflows found by the depth-first method are correct. | 3,151.4 | 2017-10-01T00:00:00.000 | [
"Engineering"
] |
REF: A Rapid Exploration Framework for Deploying Autonomous MAVs in Unknown Environments
Exploration and mapping of unknown environments is a fundamental task in applications for autonomous robots. In this article, we present a complete framework for deploying Micro Aerial Vehicles (MAVs) in autonomous exploration missions in unknown subterranean areas. The main motive of exploration algorithms is to depict the next best frontier for the MAV such that new ground can be covered in a fast, safe yet efficient manner. The proposed framework uses a novel frontier selection method that also contributes to the safe navigation of autonomous MAVs in obstructed areas such as subterranean caves, mines, and urban areas. The framework presented in this work bifurcates the exploration problem in local and global exploration. The proposed exploration framework is also adaptable according to computational resources available onboard the MAV which means the trade-off between the speed of exploration and the quality of the map can be made. Such capability allows the proposed framework to be deployed in subterranean exploration and mapping as well as in fast search and rescue scenarios. The performance of the proposed framework is evaluated in detailed simulation studies with comparisons made against a high-level exploration-planning framework developed for the DARPA Sub-T challenge as it will be presented in this article.
Introduction and Background
Rapid exploration and mapping of unknown subterranean environments has become significant interest in the field of autonomous deployment of robots.MAVs have the potential in being a viable solution in terms of mines inspection [25], exploration and mapping [29], [30], [39] and inspection of infrastructures [32] due to their high degree of freedom and fast traversability.The applications of MAVs have also been discussed in developing next generation rotor crafts for mars exploration in [37] and [38].Deploying MAVs for exploration and mapping of dark, dusty and hostile mines and caving systems is particularly challenging because at the beginning of the exploration process, the environment is completely unknown for navigation.In order to map surrounding for safe navigation in such environments, vision-only based navigation techniques are insufficient [36].The unstructured and rocky environment of mines and caves is a major challenge that contribute in uncertainty in sensor measurements [1].In order to explore and map such environments, the crucial requirements for autonomous navigation problem are: a) detecting the frontiers, b) selecting the optimal frontier, and c) safe navigation to such frontier in order to successfully build a map of the environment.In order to safely navigate in an unknown environment, it is crucial that the MAV is backed by a sophisticated on board autonomy for Guidance, Navigation and Control (GNC).In Figure 1 and Figure 2 exploration instances of the proposed method is shown in different environments.The capability of the proposed method to handle exploration of narrow passages as well as wide tall void like structures is evident in Figure 2. The framework introduced in this work selects optimal frontiers based on the idea of continuing the exploration in one direction until there is no new potential information to gain in the particular direction.Planning a safe path to such selected frontiers is crucial when exploring a large environment.The path planning method used in this work takes into account the safety margin of such paths based on the size of the MAV and its ability to traverse through the obstructed areas.The MAVs are also constrained in terms of their limited time of flight.Therefore the proposed framework also accounts for cost based frontier selection while evaluating next optimal area to visit.The proposed framework also complements the idea of efficiently utilising the resources of the vehicle by rapid yet safe navigation.This work presents a rapid exploration framework for safe autonomous navigation of MAVs in caves.The point cloud map of the explored virtual cave environment with the MAV's trajectory is presented in Figure 3.
Related Works
In the original work of frontier based exploration [49], the points lying at the boundary between known (free) space and unknown space are defined as frontier points.In [49] a closest frontier from the robots position is selected to move to such that the boundary at which frontiers lie will also progress towards more unexplored space.The same approach was also extended for the case of multiple robots, as presented in [50].In [21] and [23] frontier based exploration strategies are studied extensively for comparison against different exploration approaches.A 3D Frontier Based Exploration Tool (FBET) for aerial vehicles is presented in [53].The FBET framework uses similar approach to [49] for frontiers generation and the generated frontier are clustered for selection of candidate frontier goal based on cost function that takes into account the cost of moving to such goal point.A Stochastic Differential Equation (SDE) based exploration approach is presented in [45].In the SDE based exploration strategy the authors consider simulating Fig. 3: MAV trajectory while exploring a wide-large cave environment using the proposed approach expansion of system of particles with Newtonian dynamics for evolution of SDE.In [45] the authors consider the region showing significant expansion of particles as a region that would lead the MAV to more unexplored space.In [19] A vision based exploration-mapping problem solving technique is presented that also utilizes MAV to navigate in unexplored areas using continuously updating frontiers.Exploration of unknown environments are also extended to legged or ground robots.Probabilistic Local and Global Reasoning on Information roadMaps (PLGRIM) as presented in [27], discusses a hierarchical value learning strategy for autonomous exploration of large subterranean environments.The methodology presented in [27] uses a hierarchical learning to address local and global exploration of large scale environments while focusing on near optimal coverage plans.A Frontloaded Information Gain Orienteering Problem (FIG-OP) based strategy is presented in [40] that uses topological maps to plan exploration paths in fixed time budget exploration scenario.The method presented in [40] is tested with ground robots in multi-kilometers subterranean environment targeted at time constrained exploration missions.Separated from frontier-selection methods are the methods with integrated exploration behavior in the path planning problem, often based on trying to plan a path in order to maximize the information gain, while minimizing distance travelled or similar metrics.These planners generally fall under the Next-best-view approaches as in [41][4] [9] and have seen great application success, but other methods in similar directions exists, such as ERRT [29] takes into consideration also actuation effort along with information gain in order to yield more efficiency towards exploration of unknown and unstructured areas.Additionally, the Rapid exploration method proposed in [7] is developed to maintain a high MAV velocity, while exploring.Autonomous inspection of structures by utilizing a frontier based algorithm, along with a Lazy Theta * path planner, is presented in [17].Finally, an information driven frontier exploration method for MAV, which uses a hybrid approach between control sampling and frontier based is presented in [8].As state-of-the-art exploration method presented in [12] is tailored and deployed in large-scale exploration mission both in simulations and real world experiments.The developed planner is structured around motion primitives that search for admissible paths, taking advantage of efficient volumetric mapping with collision checks and future-safe path search that evaluates the variation of speed along the path, while also maximizing the exploration gain for an overall fast navigation scheme.Moreover, in [44] an exploration approach that combined frontiers with receding horizon next-best-view planning has been proposed.The frontiers are part of the global planning part, while the next-best-view is responsible for the local exploration part.In [48] a dynamic exploration planner (DEP) for MAV exploration, based on Probabilistic road map has been presented.The sampling nodes are added incrementally and distributed evenly in the explored region, while the planner uses Euclidean Signed Distance Function map to optimize and refine local paths.The exploration scheme in [5] presented the Permutohedral Frontier Filtering, which is based on bilateral filtering with permutohedral lattices to extract the score-based spatial density of the selected frontiers.Multiple studies have also incorporated visual servoing based path planning and control architectures for mobile robots as presented in [13].
The authors in [16] have formulated gaussian functions based control architecture for mobile robot that rely on mainly visual information of surrounding.The authors have extended the work further in [14] that uses decision trees as well as adaptive potential area methods to achieve autonomous control of mobile robots in real life applications.
In the field of sampling based space mapping area the research studies presented in [15], uses bi RRT method to smooth the RRT path using curve fitting methods.In [15] the Ability to navigate from start to goal position using the smooth path by curve fitting also addresses the problem of actuation of robot if extended for MAV in future.More path planning and control approaches have been discussed in a case study presented by the authors in [35] discuss in detail the planning and control of automated ground vehicles in industry.Various planning algorithms have been developed for navigation of aerial platforms in unknown environments, where in general they can be divided in map-based or memory-less approaches or their combination.In [43] a hierarchical planning framework that combines map building from fused depth data, as well as instantaneous depth data, both organized into separate K-D trees has been proposed.The planner relies on a slower global planner to get a goal location, which is evaluated using motion primitives against the K-D trees with the lowest cost candidate primitive to be selected.In [51] a motion planning method for fast navigation of autonomous MAVs has been developed.The algorithm divides the environment modeling in two parts: i) the deterministically visible area within the on board sensor range, and ii) the probabilistically known area beyond the sensor range from a-priory map.The planning method maximizes the likelihood of reaching a goal, where a finite set of candidate trajectories are separated into groups and evaluated for collisions.In [33] a navigation method for MAVs based on disparity image processing has been proposed.More specifically, the disparity image is used for direct collision checking, incorporating C-space expansion of obstacles.The motion planning part verifies obstacle free trajectory, projecting them into the disparity image and comparing their disparity values with the C-space disparity values for collision checking.In [6] a memory-less planner that is partitioning free space in pyramids, using direct depth image measurements has been demonstrated.The use of spatial generation of pyramids of the free spaces, allows for labeling obstacle free trajectories that lie inside the pyramids, while achieving fast generation of large number of candidate trajectories and performs collision checks.In [2] the authors present a reactive navigation system for MAV exploration.The developed algorithm is based on a two layered planning architecture that leverages the global environment map for frontier generation and local instantaneous sensor data for obstacle avoidance based on artificial potential fields.In [46] "FASTER" has been developed, an optimization based planning approach for fast and safe motion in unknown environments.The planner leads to high-speed navigation by allowing to plan in known and unknown configuration space using a convex decomposition in a two trajectory design approach, a fast and a safe trajectory.In [31] a reactive navigation and collision avoidance scheme for MAVs that combines a layer of obstacle detection based on 2D LiDAR with NMPC constraints was proposed for agile local navigation.In [24] a collection of sensor based heading regulation methods have been proposed for aerial platform navigation along underground tunnel areas.In this work the heading regulation methods using i) image centroid calculation from either single image depth estimation, or dark area contour extraction, or CNN for dark area extraction and ii) from processing 2D lidar measurements have been described.In [18] a mapping for motion planning architecture that queries for the minimum-uncertainty view of a point in space, searching a set of recent depth measurements under noisy relative pose transforms has been presented.This work enables the identification of local 3D obstacles in the presence of significant state estimation uncertainty, evaluating motion plans.Table 1 summarizes the SoA exploration strategies, while highlighting the contribution of REF.
Contributions
The exploration, global planning and navigation architecture of this work is part of the development efforts within the COSTAR team [1], [34] related with the DARPA Sub-T competition [11], while it is directly applicable for cave environments.Based on the above mentioned state-of-the-art, the key contributions to this article are listed in the following manner.
-The main contribution of this work stems from the development of safe frontier points generation and local as well as global cost based candidate frontier point selection method.In the presented work we extend the classical and rapid frontier exploration approaches with the improvements concerning safety of MAVs in field as well as maintaining agile nature of exploration.The proposed approach focuses on local frontier selection that take into account the position of such frontier relative to any static or dynamic obstacle in the field of view while also minimizing yaw movement of MAV.When no such frontier exist in the local field of view, the global re-positioning of the MAV is triggered in order to lead the MAV to global frontiers that lead the MAV to more unexplored space.The global re-positioning method is formulated such that it associate cost based on the overall actuation effort required by the MAV to move to a global frontier.The proposed global re-positioning of the MAV considers various factors such as MAV safety, actuation cost as well as how much of the unexplored space will be seen from a potential global frontier.Such contribution differentiate our method from other rapid frontier exploration approaches that directly switch to classical frontier approach, instead in our method MAV globally re-positions itself based on multi layer cost assignment in global frontier selection.As it will be presented, such contribution is particularly important in multi branched tunneling or caving system exploration scenarios.-The second contribution presented in this article is the development of overall autonomy framework which addresses the problem of exploration, safety margin based path planning and reactive navigation through nmpc based control of MAVs.The dedicated risk aware path planning and potential fields based avoidance scheme incorporated within the proposed framework allows to push the limits of exploration in the candidate goal selection process in wide, narrow and obstructed environments.
Extensive simulations are performed for validating the proposed framework in multiple difficult scenarios in order to benchmark the safety, speed and versatility the presented autonomy modules.
The rest of the article is structured as follows.section 2 presents the problem formulation considered in this work.section 3 presents the proposed safe frontier points generation as well as intelligent goal selection with the focus on safe yet fast autonomous exploration addressing the minimizing actuation effort of the MAV.The section also describes the overall autonomy framework which is the combination of exploration, global path planning as well as nmpc based reactive navigation.In section 4, detailed analysis on simulation experiments are presented that prove the efficacy of the proposed scheme.
Finally section 5 provides a discussion with concluding remarks of the proposed work.
Problem Formulation
The problem considered in this work is exploration of bounded 3D space denoted as Occupancy probability based modelling is adapted in order to model free, occupied and unknown space around the robot.The provided framework is targeted for a direct use case in field robotics where autonomous robots are deployed in cave and mines for rapidly mapping the areas.In the case of exploration of bounded space, the exploration will be considered complete when V occupied V f ree = V while V unknown = ∅.In the proposed approach, exploration process is subject to vehicle actuation effort, limitations in time of flight as well as risk margin for safe path generation.
In order to be deployed in real scenario, the exploration and planning framework should adapt based on the available computational resources.The performance evaluation of the proposed framework will be based on exploration time as well as actuation efforts of the MAV.
Proposed Approach
The proposed approach employs a frontier based exploration technique which is modified with the focus on making exploration fast, safe and versatile based on available computational resources.We use occupancy grid maps as a mapping framework, which can generate a 2D or 3D probabilistic map.A value of occupancy probability is assigned to each cell that represents a cell to be either free or occupied in the grid.In this work we are targeting 3D exploration of bounded and unbounded space therefore using the baseline framework of OctoMap [22] we build on top of it in order to develop the proposed 3D occupancy checking framework used in this work.Let us denote a voxel as v(x, y, z).The voxel v is subdivided into eight smaller voxels until a minimum volume is reached.The minimum volume is same as the octree resolution vres.Corresponding to each sensor update if a certain volume in the octree is measured and if it is observed to be occupied, the node containing that particular voxel is marked as occupied.Using ray casting operation for the nodes between the occupied node and the origin (sensor), in the line of ray, can be initialized and marked as free.This process leaves the uninitialized nodes to be marked unknown until the next update in the octree.Let us denote the estimated value of the probability P (N | z 1:t ) of the node N to be occupied for the sensor measurement z 1:t by: where Pn is the prior probability of node N to be occupied.Let us denote the occupancy probability as for node N to be occupied as
MAV current position
Let us define the sensor range R and a sphere of radius r around the MAV.This radius r will be denoted as cleaning radius from here after.The after each update in current octree, if a frontier lie inside this sphere, the frontier is marked as seen and such frontier is deleted from {F}.The cleaning radius is defined such that r < R therefore new frontiers will always be generated at distance R and as the MAV navigate towards frontier, the frontiers lying within the sphere of radius r are deleted and less number of frontiers need to be iterated through in candidate goal selection process.The iterator is defined as it.The meanings of the important notations used in this work is presented in Table 2 The exploration framework presented in this work is is made up of three essential modules, namely the safe frontier point generator, the cost based frontier point selection incorporating also collision check and finally candidate goal publisher as presented in Figure 5.
The first module takes the Lidar point cloud as input and based on the occupancy probability formulation as mentioned earlier, converts the sensor measurement in order to form an octree.The octree is defined as tree data structure in which each sub node is further divided into eight quadrants until minimum volume is reached.The safe frontier point generator module generates all safe frontiers based on the octree as depicted in algorithm algorithm 1.Let us define a risk margin parameter m related to the voxel grid resolution vres.At any instance in the exploration if node N being currently checked for to be considered as a safe frontier then we also check the neighbouring adjacent nodes defined as N adj within the safety margin m.
In our approach we formulate an additional layer of requirement in which we check the neighbouring Voxels of an uninitialized (Unknown) Node N as described earlier and N adj ≤ Pn) than the Node N is considered as safe frontier node and is added to {SF}, where {SF} is a set containing all safe frontiers.This means that a particular node N , it's adjacent node N adj as well as all nodes in the neighbourhood of node N within the range of m * vres are checked and if all such nodes are seen to be free than the node N is considered to be a safe frontier.To be marked as a frontier, each node should have at least n number of minimum unknown or free adjacent nodes.This process makes a big difference in the computational complexity of the process because by specifying a certain risk margin m and minimum unknown or free neighbours k at the start of exploration, the trade off can be made between number of iterations and coverage quality.Another improvement our approach presents is that by not allowing any frontier to be close enough to an occupied node in context of risk margin, we guarantee that inaccessible frontiers can be eliminated which are generated due to the error in probabilistic occupancy mapping.The inaccessible frontiers are defined as the frontiers that are not safe to reach or impossible to reach in terms of MAV size and dynamics to pass through small openings in the map.This simply implies that risk margin can be set in correspondence with the size of the MAV such that the inaccessible areas can be patched and modelled as occupied in the map.The parameters m as well as k are proposed with the focus of testing the proposed approach in extremely difficult areas such as caves and mines where safety of the robot is a major concern.
As defined in algorithm 2, corresponding to each new sensor measurement we check if a N ∈ {F } is still a frontier.We define a candidate frontier set denoted as {C} ⊂ {F} which contains all the valid frontiers which will be examined based on the MAV's position.A 3D Lidar is used in the proposed method to get sensor point cloud and thus, the framework generates frontiers in all directions surrounding the MAV but limited in the vertical directions with field of view V β .In algorithm 2, we classify the frontier nodes in two further sets {L}, {G} {C} named as Local and Global set respectively.Such Local and Global sets contain frontier nodes classified based on the selected horizontal and vertical field of view H θ and V β respectively as shown in Figure 4.
This process allows us to prioritize the unknown space lying ahead of the MAV and if there exist no unknown space ahead of the MAV, the candidate goal is selected based on the global cost based goal assignment.
The frontier points from occupancy formulations are generated in the world frame (W) but the frontier vector f is calculated relative to the MAV body frame {B}.As shown in Figure 4, the angle α is calculated with respect to B. If a frontier f and MAV's current position in inertial frame W is defined as f (x, y, z) and C(x, y, z) respectively then the angle α and γ with respect to body frame B can be computed as, where, ψ is the heading angle of the MAV and h is the vertical height of the footprint of 3D LiDAR field of view.
As discussed previously the algorithm 1 also outputs a list of occupied nodes {O} which has occupancy probability P o higher than 0.5 thus considering the the cluster of occupied points lying in the field of view, the the frontier nodes having a lesser avoidance cost are also favoured to be the N extBestF rontier.The cost formulation for selecting a local or global candidate goal is as follows.If we define the current position of the MAV as C(x, y, z) then the costs for local and global frontier selection can be formulated as, Where, Wo, W h , Wz and W d ∈ R are defined as weights associated to avoidance, heading, height difference and distance cost respectively.We define the actuation effort E as a function of cost such that where, T hover is the minimum thrust required for hovering with zero torques about the MAV arms.Thus, by optimally selecting the next pose reference command for the MAV the actuation effort can be minimized.The MAVs consume high energy to produce yaw torque due to the motor saturation constraints while also keeping the MAV hovering.The overall autonomy scheme of the proposed work is presented in Figure 5.As discussed earlier, the framework uses 3D LiDAR or a camera depth cloud as point cloud input and upon point cloud filtering, the framework generates an octree of occupied, free and unknown nodes.Using the workflow described in algorithm 1, the framework detects frontier points and classifies set of safe frontiers.As presented in the autonomy and navigation scheme (Figure 5), based on local or global frontier, the risk aware global planning module plans a collision free path to the next best frontier.The N BF is then fed into the reactive navigation and control framework to actuate the MAV to navigate to selected frontier point.In Figure 5 APF stands for Artificial Potential Fields that we have incorporated with Nonlinear Model Predictive Control for collision avoidance.The baseline framework for reactive navigation and control used in this framework is inspired from our previous work [30].The Next Best Frontier is sent to a risk aware global planning module which is the extension of D * Lite algorithm but implemented with octomap framework in this case.The global planning module D * + uses the modelled occupied space in order to plan a safe path to the N BF .The risk margin formulation in an expandable octomap grid for global planning is presented in detail in our previous work [26].
Exploration Mission Experiments
In order to validate and test the performance of our proposed exploration approach we use the M100 MAV provided in the open source Rotors Simulator [20] framework.Next-Best-view [4] has been widely used for bench-marking the exploration-planning algorithms.In this work we compare our framework with the latest version of NBV, State-of-the-Art Motion Primitive Based planner (mbplanner) [12] which is developed also as part of the development efforts within DARPA Sub-T challenge.We use a custom cave model with multiple junctions, obstructed walls, narrow openings, steep slopes as well as tunnels with dead-ends for simulation.The cave environment has been made open sourced for public [3].For fair comparison all simulations are performed with same computational unit having Intel core i7 processor and 16 GB memory on ROS Melodic running on Ubuntu 18.04.For mbplanner also the simulations are performed using the cave virtual world where the tuning of parameters such as MAV velocity, mapping resolution and sampling time, was similar to the ones used for the proposed method.In Figure 6 different exploration instances are shown.As described in section 3 the proposed framework (REF) also uses frontier cleaning radius and due to which coverage of large cave like voids can also be performed while exploring.Using the proposed framework the MAV is also able to navigate in narrow and obstructed passages and at the end of such passages if a void like area can also be covered efficiently.The simulation experiment is also carried out to explore a multi-branched virtual cave environment having narrow passages continuing in different heights for a true 3D exploration.The environment is also made open source [28].In Figure 7 the exploration of the virtual cave environment is shown.
In Figure 8 and Figure 9 the explored volume and distance covered by the two exploration frameworks is presented.Figure 8 and Figure 9 depict that our method performs significantly close to the State-of-the-Art mbplanner in terms of exploration volume of the cave environment and distance covered respectively.The proposed approach achieves slightly higher explored volume for the same mission time, this is Fig. 7: Multi-branched large 3D virtual cave world exploration using proposed framework because of the novel Next Best Frontier selection approach as adapted in section 3.As presented in Figure 12, MAV trajectory in our approach is significantly inline with the goal of maximizing the movement into unknown areas while limiting repeated visits to already mapped areas.In Table 3 the exploration volume and distance travelled by the MAV in multiple different runs with different mission duration is presented for both planning framework.As it is evident from Table 3 that the proposed Rapid Exploration Framework (REF) shows higher exploration volume as well as ground covered by the MAV in multiple different runs because of the nature of computing next paths while navigating to current path.All missions considered in Table 3 have same start positions for both planning framework and the MAVs do not return to base in considered cases therefore, showing the exploration capability comparison in given time with same configuration.However, it is also important to mention that even though the V unknown sampling approach in both method is different, the next way points in both case are selected with the focus on maximizing the information gain and exploration volume in same time.
In Figure 11 the exploration mission trajectories are shown for REF and mbplanner with same mission duration (400 s).In Figure 11 it is evident that the MAV covers more ground in given time using the proposed framework.
In Figure 12 in both method the overlap in trajectory is seen.This overlap is mainly due to the lower information gain (corresponding to mbplanner) and {L} = ∅ In this article we proposed a Rapid Exploration Framework for deploying autonomous MAVs in unknown areas such as caves and mines.We present a novel candidate goal selection method with the focus of minimizing the actuation effort of the MAV by employing Look Forward Move Ahead approach.We compare the exploration scenario in the same environment with the motion primitive based planner which is a remarkable extension to Next Best View approach.In terms of volumetric gain and distance travelled, we achieve similar results to that of the mbplanner.We also address the trajectory overlap issue by introducing a simple yet efficient cost based goal selection approach that prevents the MAV to Unnecessarily travel to previously visited areas while also keeping the look forward move ahead approach as priority.As future development efforts we plan to conduct some field experiments to explore abandoned mines and underground cave structures.
Fig. 1 :
Fig. 1: DARPA Sub-T world : Exploration instance.(1) Rapid local exploration behaviour (2) local exploration in very narrow as well as wide cave-void like areas (3) Safe Next Best Frontier (NBF) in obstructed narrow tunnels
Fig. 2 :
Fig. 2: Exploration behaviour using the proposed framework in multiple exploration scenarios in DARPA Sub-T virtual world
Fig. 4 :
Fig. 4: Frontier classification and notations used in the proposed framework
Fig. 5 :
Fig. 5: The proposed overall autonomy and navigation scheme
Fig. 6 :
Fig.6: DARPA-Sub-T virtual world: Exploration of narrow-confined passages as well as large cave like voids using proposed framework.In (1,2,3) the rapid exploration-coverage nature of the proposed framework is shown.In (4) the safe way point selection and risk aware planning to safe frontier is shown
Table 1 :
Different exploration frameworks and their corresponding explorationplanning approach [REF] Safe frontier generation for local and global exploration & local frontier selection based on heading and avoidance cost & heading regulation, height difference and travel to frontier cost based global re-positioning when local exploration gain is low
Table 2 :
Description of the notations used in proposed methodology .
Table 3 :
Exploration volume and distance from multiple runs | 7,144.2 | 2022-05-31T00:00:00.000 | [
"Computer Science"
] |
Biosorbents prepared from pomelo peel by hydrothermal technique and its adsorption properties for congo red
A new kind of biosorbent was prepared from pomelo peel by using potassium hydroxide as activating agent and hydrothermal treatment method. The characteristics of materials were analyzed by SEM, BET and FTIR. Increasing the adsorbent dosage (from 2.5 to 17.5 g l−1) and contents of congo red in solution (from 20 to 50 mg l−1) cause the increment of removal rate of congo red. On the other side, the removal rate is decrement with rising of temperature and pH. The maximum adsorption quantity of biosorbent was 144.93 mg g−1 at 303 K, which calculated by Langmuir model. The pseudo-first-order kinetic model, pseudo-second-order kinetic model and intra-particle diffusion model were used to explain adsorption process. The value of Gibbs free energy (∆G) is −7.63 (kJ/mol) at 303 K and the enthalpy change (∆H) is −31.43 (kJ/mol), meaning that adsorption behavior for congo red is spontaneous.
Introduction
Dyes are generally made by artificial synthesis and contain a variety of chemicals, which is widely used in industry. The use of a large number of dyes causes serious water pollution [1]. As one of the dyes, congo red has wider applications in textile, paper, leather, plastics and related industries [2]. Congo red wastewater brought serious problems because of its toxicity, which poses a major threat to people's health and living environment [3]. Therefore, seeking an efficient way to remove congo red from the wastewater has received more and more attention.
At present, the water treatment methods commonly used in the world are reverse osmosis [4], biological oxidation [5], ion exchange [6], adsorption [7] and membrane filtration [8]. As one of the treatment methods, adsorption is widely used because of its high efficiency, sustainability and convenience [9]. Various adsorbents are used for sewage treatment, such as activated carbon [10], zeolite [11], carbon nanotubes [12], grapheme [13] and agricultural waste peels [14]. In recent years, various biosorbents have been investigated intensively for adsorption of wastewater due to its low cost, easy access and eco-friendly [15]. Orange peel [16], garlic peel [17], banana peel [18], potato peel [19] and pomelo peel were researched as biosorbents to adsorb different containments from wastewaters.
Pomelo trees are cultivated in all tropical and subtropical regions of the world [20]. A large amount of pomelo is consumed in people's daily life and its peel is often thrown away as a waste [21]. The accumulation and decay of pomelo peel will cause environmental pollution and waste of resource. Up to now, the progress is slow for waster utilization of pomelo peel [22]. Pomelo peel has a porous structure. The components of cellulose and hemicellulose endow its various and abundant functional groups [20], which makes it a promising biosorbent in wastewater treatment. However, the finite adsorption capacity and removal efficiency of primitive pomelo limit its practical applications. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Activation and carbonation of pomelo peel to prepare activated carbon is an effective way to improve its adsorption capability. The chemical and physical methods commonly used to prepare activated carbon. The physical activation is to oxidize precursor using oxidizing gases such as O 2 [23], CO 2 [24] and H 2 O [25]. The chemical activation is to treat the precursor using chemical reagents such as H 3 PO 4 [26], H 2 SO 4 [27], KOH [28], ZnCl 2 [29], and K 2 CO 3 [30]. Both physical and chemical activation methods usually need high temperature (400°C-1000°C) to oxidize or etch the precursor to form multi-porous structure. The high temperature increases production cycle and cost.
In this work, pomelo peel was chemically modified with potassium hydroxide agent and then activated by a hydrothermal technique. Hydrothermal technique is widely used to dispose and transform organic solid waste into valuable resources [31]. Compared to conventional activation method, low temperature (200°C) of hydrothermal processing not only decreases production cycle and cost, but also does not produce waste gases such as CO 2 , NO, SO 2 . The adsorption properties of hydrothermal treated pomelo peel for congo red were obtained by batch experiments. The physical properties of hydrothermal treated pomelo peel were researched by FTIR, SEM and BET.
Materials
Congo red (C 32 H 22 N 6 Na 2 O 6 S 2 , >99% in purity) was supplied by Tianjing Dengke Chemical Reagent Co., Ltd. Potassium hydroxide agent was supplied by Sinopharm Chemical Reagent Co., Ltd. The solution preparation all uses the deionized water. The other reagents were analytical grade.
Preparation of biosorbents
The pomelo peel was obtained in the local fruit market. The inner soft structure of pomelo peel was separated and dried in the air. 1 g air-dried primitive pomelo peel was put into the potassium hydroxide solution (20%wt). The mixture was subsequently transferred and sealed in a Teflon-lined stainless steel autoclave. Then put it into the oven for 2 h at 200°C. After heating, the mixture was put into a beaker and washed to neutral. For the purpose of study the effect of different drying methods for adsorption of comgo red, part of hydrothermal treated pomelo peel was naturally dried in the air and the other was dried by the vacuum freeze dryer (FD-1-50, Boyikang Laboratory Apparatus Co., Ltd., China).
Characterization of the biosorbents
The surface morphologies of the air-dried peel and two kinds of modified peel were studied by SEM (TM3000, HITACHI, Japan). Functional groups were detected by FTIR (Nicolet iS 10, Thermo Scientific, USA). The specific surface area of the air-dried pomelo peel and freeze-dried modified pomelo peel were studied by BET (ASAP 2460, Micromeritics, USA).
Batch adsorption experiments
All adsorption experiments were placing 10 mg of freeze-dried modified pomelo peel and 20 ml of congo red solution in a conical flask. Then put the conical flask in the constant temperature gas bath shaker (SHZ-82A) for 48 h at 160 rpm to reach adsorption equilibrium. The UV-visible spectrophotometer was used to detect equilibrium concentration of congo red. The adsorption quantity of freeze-dried modified pomelo peel was obtained by the equation (1): where c 0 (mg/l) is the initial concentration and c e (mg/l) is the equilibrium concentration of congo red solution.
The effect of solution pH was researched by putting freeze-dried modified pomelo peel into the 20 ml congo red solution of 50 mg l −1 . And the pH value of congo red solution ranges from 4 to 10. The effect of adsorbent dosage was researched by putting varying dosage of adsorbent (5-35 mg) into the congo red solution of 50 mg l −1 , respectively.
The influence of temperature was investigated by putting adsorbent into the varying concentration (20-50 mg l −1 ) of congo red solution, which adsorption process was proceeded at different temperature. The relationship between adsorption time and change of solution concentration was measured by the way that put the adsorbent (10 mg) into the dye solution of 50 mg l −1 . The adsorption ability q t (mg/g) might be able to calculate with the equation (2): where c t (mg/g) is the concentration of congo red at time t.
Characterization of adsorbent
To compare the difference of functional groups between air-dried primitive peel and freeze-dried modified peel, the FTIR was analyzed for pomelo peel and shown in figure 1. The FTIR spectrum of primitive peel (figure 1(a)) contains a peak at 1735 cm −1 . It is because stretch of carboxylic acid groups of hemicellulose (C=O) [32]. After the modification (figure 1(b)), this peak is disappeared due to the decomposition of hemicellulose around 453 K [33]. Additionally, the peak at 1414 cm −1 (C=C) also disappeared. It can be attributed to the effect of KOH activation. At around 3346 cm −1 and 2918 cm −1 for peaks are represented O-H groups stretching of cellulose and asymmetric C-H vibration, respectively. The peak at 1642 cm −1 of modified peel may indicate the stretching of carboxylic groups (−COOH). The peak near 1371 cm −1 of modified peel may refer the stretching vibration of −COO − of pectin [32]. And at around 1156 cm −1 can be assigned C-O-C stretching vibration of cellulose [34]. At around 1060 cm −1 reflects stretching vibration of C-OH [20].
After the adsorption process ((figure 1(c)), losing of some peaks or decreasing of transmittance (T %) could be thought as possible interaction of dye molecules and functional groups at these bands [35]. The peak of -COOH stretching at 1642 cm −1 ( figure 1(b)) shifts to 1640 cm −1 (figure 1(c)) with decrement of intensity. This decrement may be due to the influence of dye molecules at this peak. The peaks of O-H (3346 cm −1 ) and C-H (2918 cm −1 ) have the similar change. These functional groups may play an important role in the adsorption process. Figure 2 shows SEM images of the pomelo peel. The air-dried pomelo peel (figure 2(a)) has the continuous and unbroken morphologies. After being modified by potassium hydroxide, both the air-dried (figure 2(b)) and freeze-dried (figure 2(c)) pomelo peel forms more folds and slits, which can increase total surface of pomelo peel.
Adsorption capacity of three kinds of pomelo peels was represented in figure 3. The air-dried primitive pomelo peel is only 13.68 mg g −1 . After modification, it increases to 73.34 mg g −1 for the air-dried modified pomelo peel and 85.41 mg g −1 for the freeze-dried modified pomelo peel. It may be because the functional groups were introduced by potassium hydroxide treatment.
Nitrogen adsorption and desorption isotherm was shown in figure 4(a). The specific surface area is only 1.3003 m 2 g −1 for the primitive pomelo peel. After the modification, it increased to 4.0845 m 2 g −1 . The improvement of the specific surface area is attributed to the hydrothermal treatment and special drying method. Freeze drying method can form porous structure through subliming water molecules at the circumstance of vacuum and low temperature [36]. The BJH pore volume distribution curve ( figure 4(b)) shows that the number of pores of the freeze-dried modified peel is larger than that of the primitive pomelo peel. The larger specific surface area and pores provide more active sites in the adsorption process of congo red. Figure 5(a) shows the relationship of adsorption ability with temperature. At 303 K, the adsorption ability is 85 mg g −1 . At 323 K, it decreases to 64 mg g −1 . The lower adsorption capacity at higher temperature may be due to restrain hydrogen bonding of functional groups with dye molecules [37].
Effect of dosage
As the freeze-dried modified peel dosage rises, the removal rate gradually increases from 51% to 94% ( figure 5(b)). It can be attributed to the increase of a great quantity of adsorption sites [38]. On the other side, the adsorption capacity (q e mg/g) decreases from 102 mg g −1 to 26 mg g −1 with rise dosage of adsorbent. This is because the quantity of dye which adsorbed by per unit weight of the adsorbent is reduced causing the decline of utilization rate of the active sites [39].
Effect of time
The impact of adsorption time for congo red onto the freeze-dried modified pomelo peel was shown in figure 5(c). It is obvious that the former 300 min of adsorption process is faster. It can be attributed any amount of activation sites on the adsorbents that can easily bind with dye molecules [40]. After then, adsorption process tends to be slow until the adsorption equilibrium has reached. This can be explained by the process that the adsorption of dye molecules shifts from surface area toward into the inner pores of the adsorbent [41]. The long adsorption equilibrium time is the consequence of long-range diffusion of congo red enter the inner porous of adsorbent.
Effect of pH
The process of congo red onto the freeze-dried modified pomelo peel is affected by pH of solution, which result was shown in figure 5(d). At pH=4, removal percentage of congo red is 85.83%. However, at pH=10, removal percentage of congo red reduces to 75.11%. This can be explained by the process that carboxyl groups of adsorbent bind with congo red molecules at acidic pH. Oxygen-containing functional groups have an important effect during the adsorption process [42]. At acidic pH, congo red molecules are cationic form, which can attach with carboxyl groups of the adsorbents. At basic pH, congo red molecules are anionic form and the carboxyl groups of adsorbent also become an anion (−COO − ), which is not suitable for dye molecules binding with the adsorbents [38]. Therefore, at acidic pH, adsorbent has higher removal rate for congo red than at basic pH.
The dissolved organic matter such as fulvic acid (FA) and humic acid (HA) will affect the oxygen-containing functional groups of the adsorbents and affect the adsorption capacity [43]. Previous studies have shown that FA and HA were negatively charged in the pH range of 3.0-10.0 [44,45]. At low pH values, the FA and HA easily bind to the surface of the adsorbents, providing more oxygen-containing functional groups to form complexes with cationic adsorbates, so the adsorption capacity is increased. At high pH values, the binding of FA and HA to the adsorbents becomes difficult due to electrostatic repulsion, and thereby change the adsorption capacity of adsorbents [46,47]. On the other hand, the organic matter also competes with the dye molecules for adsorption sites, which also affects the adsorption capacity [48].
Adsorption mechanisms
Based on the study of pH and FTIR, the possible adsorption mechanism was discussed. After the adsorption process, The peaks of -COOH (1642 cm −1 ) and O-H (3346 cm −1 ) have changed with the decrease of intensity. This may indicate that these oxygen-containing functional groups play a role in the adsorption process. Through the study of pH, we can further analyze and discuss the adsorption mechanism. Congo red molecules are positively charged (CR + ) under acidic conditions and negatively charged (CR − ) under alkaline conditions [49]. With the increase of pH, oxygen-containing functional groups (-COOH) will be negatively charged due to deprotonation (-COO − ) [50]. Therefore, we can conclude that there is electrostatic attraction between the congo red molecules and the carboxyl groups of the adsorbent at acidic pH, and electrostatic repulsion at basic pH. This conclusion is also consistent with the experimental results of pH. Electrostatic interaction has an important influence on the adsorption process. Figure 6 shows the possible adsorption mechanism of congo red adsorption onto adsorbent at acidic pH.
Effect of other co-ions
The adsorption of dyes is generally accomplished by hydrogen bonding, functional group interactions and electrostatic interaction with the adsorbent. They are affected by the ionic strength and pH in the aqueous solution [51]. The metal cations such as Na + , Cu 2+ and Ca 2+ may be combined with the active sites on the surface of the adsorbent and compete with the dye molecules, which will affect the adsorption efficiency of the adsorbent for dye molecules [52]. Previous studies have shown that pomelo peel has adsorption capacity for Figure 6. The adsorption mechanism between congo red and adsorbent at acidic pH.
Cd 2+ [53] and Cu 2+ [54], so the presence of some ions in the solution may reduce the adsorption capacity of pomelo peel for Congo red. In addition to the competitive effect of ions, the ionic strength also affects the electrostatic interaction. The electrostatic attraction can promote the adsorption of dye molecules by adsorbents. Some authors have found that the addition of NaCl solution can inhibit the electrostatic attraction and reduce the adsorption capacity of the adsorbent [55,56]. This is mainly because Na + ions and Cl − ions can shield the charged sites of the adsorbent. And ionic strength may also affect the hydrophobic interactions. Therefore, the influence of ionic strength on the adsorption process is complicated.
Kinetic studies
In order to evaluate the adsorption kinetics, the pseudo-first-order model, pseudo-second-order model and intraparticle diffusion model were used to fit the experimental data. The degree of data fitting is represented by R 2 and sum squares errors (SSE).
The pseudo-first-order kinetic model is expressed as: [57] ( Where k 1 is the adsorption constant, q e represents adsorption ability of adsorbent at equilibrium concentration and q t indicates specific concentration at time t. The q e and q t obtain by fitting ( ) q q log e t with t ( figure 7(a)). The accurate values were shown in table 1. The q e (59.17 mg g −1 ) is lower than the 85.41 (mg/g) of experimental date. The value of R 2 is only 0.8667. The value of sum squares errors (SSE) is 0.56446.
The pseudo-second-order kinetic model is expressed as: [58] ( ) where k 2 (g/mg min) is the adsorption constant, which obtained by fitting t/q t with t ( figure 7(b)). And the values of k 2 and q e were shown in table 1. The value of R 2 (0.9968) is higher than the R 2 (0.8667) of the pseudofirst-order kinetic model. The value of sum squares errors (0.47502) is lower than the pseudo-first-order kinetic model. The experimental date of q e is 85.41 mg g −1 , which is closer to the fitting data of Pseudo-second-orde (89.37 mg g −1 ).These indicate that the pseudo-second-order kinetic model is more suitable adsorption process of congo red onto the freeze-dried modified pomelo peel.
The intraparticle diffusion model is expressed as: where k id (mg/g min 1/2 ) is the intraparticle diffusion constant and C i is the parameter related to the boundary layer of molecules. The k id and C i were obtained by fitting q t with t 1/2 ( figure 7(c)). And the values of k id and C i were shown in table 1. It is clearly evident that the adsorption by freeze-dried modified pomelo peel has two stages. Consequently, internal diffusion is only one of the conditions affecting the adsorption rate. It is a complicated process [59]. In the first stage of adsorption, the faster removal rate can attribute to sufficiently combine the dye molecules with surface area of adsorbent. Subsequently, the adsorption rate slows down as the concentration of dye molecules decreasing.
Equilibrium modeling
The experimental data was fitted by Langmuir and Freundlich models. The Langmuir model considers that the adsorption process can be evenly distributed throughout the surface of adsorbent. The Langmuir model is expressed as: [60] where k L is the Langmuir parameter, q max is the maximum adsorption capacity and the q e represents adsorption ability of adsorbent at equilibrium concentration. The k L and q max can be acquired by fitting c e /q e with c e ( figure 8(a)). The table 2 was represented the values of k L and q . max Comparing maximum adsorption capacity of modified pomelo peel and other adsorbents (table 3) illustrates that modified pomelo peel is the excellent adsorbent in wastewater treatment. The values of R 2 and sum squares errors were shown in table 2.
The Langmuir model also can express as: where c 0 is the initial concentration of congo red solution. The R L can be calculated by c 0 and K L . The figures of K L are all less than 1, which indicates that modified pomelo peel is the appropriate adsorbent in the process of removing dye.
The Freundlich model considers that adsorption is the heterogeneous distribution on surface of adsorbent. The Freundlich model is expressed as: where k F and n are Freundlich equilibrium constant and adsorption intensity, respectively. The values can be obtained by plotting q ln e versus c ln e ( figure 8(b)). The table 2 was illustrated the specific values of n. Compared with Freundlich model, Langmuir model has higher values of R 2 and lower values of sum squares errors. It can evidence that experimental data is more appropriate the Langmuir model and adsorption process of the dyes molecules onto the freeze-dried modified pomelo peel is monolayer adsorption process.
Thermodynamic study
The temperature has the obvious effect for the congo red molecules onto the freeze-dried modified pomelo peel. The parameters (ΔG, ΔH, ΔS) can be calculated by following the equation at different temperature: The specific figures of ΔG (−7.63, −6.85 and −6.06 kJ mol −1 ) represent that adsorption process is the spontaneous reaction and not require external energy. At 303 K and 323 K, the values ΔG are −7.63 (kJ/mol) and −6.06 (kJ/mol), respectively, suggesting that lower temperature is more favorable for congo red onto freeze-dried modified pomelo peel. The value of ΔH (−31.43 kJ mol −1 ) represents process that congo red molecules onto the freeze-dried modified pomelo peel is an exothermal reaction. The value of ΔS (−78.48 J mol −1 K) indicates that randomness of adsorption onto freeze-dried modified pomelo peel is reduced at the solid-solution interface [68].
Conclusions
In this work, a new kind of biosorbent was prepared from pomelo peel by hydrothermal treatment method. The surface morphology, functional groups and specific surface area of the adsorbent were studied by SEM, FTIR and BET methods, respectively. The influences of temperature, pH, adsorbent dosage and time for adsorption were researched by batch experiments. The maximum adsorption capacity (144.93 mg g −1 ) was calculated by Langmuir model at 303 K, which illustrates that pomelo peel is the excellent adsorbent in wastewater treatment.
The kinetic studies indicate that the adsorption process is more suitable for pseudo-second-order kinetic model. The research of thermodynamics evidences that adsorption process is an exothermal and spontaneous reaction. Results of this study show that modified pomelo peel has the bright prospect to adsorb congo red. | 5,079.2 | 2020-04-27T00:00:00.000 | [
"Chemistry",
"Engineering"
] |
Design and Implementation of Fuzzy Control for Industrial Robot
The dynamic equations of motion for a mechanical manipulator are highly non-linear and complex. It is therefore, very difficult to implement real-time control based on a detailed dynamic model of a robot, if not impossible (Luh et al., 1980; Lee et al., 1982). The control problem becomes more difficult if adaptive control is necessary to accommodate changing operational conditions. Such a requirement frequently exits in the manufacturing environment; therefore, an alternative design approach would be attractive to the industrial practitioner. A better solution to the complex control problem might result if human intelligence and judgement replaces the design approach of finding an approximation to the true process model. A practical alternative would be the use of fuzzy logic. It has been reported that fuzzy logic controllers performed better, or at least as good as, a conventional controller and can be employed where conventional control techniques are inappropriate (Li et al., 1989; Sugeno, 1985; Ying et al., 1990). In contrast to adaptive control, fuzzy logic algorithms do not require a detailed mathematical description of the process to be controlled and therefore the implementation of fuzzy logic should, theoretically, be less demanding computationally. Fuzzy logic algorithms can be designed for environments where the available source information is not accurate, subjective and of uncertain quality. Furthermore, these algorithms provide a direct means of translating qualitative and imprecise linguistic statements on control procedures into precise computer statements. In this chapter, a proposed fuzzy logic design to control an actual industrial robot arm is outlined. The description of fuzzy logic controller is described in Section 2. It includes the methodology for the design of a fuzzy logic controller for use in robotic application. Section 3 presents the robot control system architecture. In Section 4, the relevant issues that arise relating to the design techniques employed are discussed in detailed. These issues include choise of sampling time, fuzzy rules design strategy, and controller tuning strategy. To evaluate the effectiveness of the proposed design strategy, studies are made to
Introduction
The dynamic equations of motion for a mechanical manipulator are highly non-linear and complex. It is therefore, very difficult to implement real-time control based on a detailed dynamic model of a robot, if not impossible (Luh et al., 1980;Lee et al., 1982). The control problem becomes more difficult if adaptive control is necessary to accommodate changing operational conditions. Such a requirement frequently exits in the manufacturing environment; therefore, an alternative design approach would be attractive to the industrial practitioner. A better solution to the complex control problem might result if human intelligence and judgement replaces the design approach of finding an approximation to the true process model. A practical alternative would be the use of fuzzy logic. It has been reported that fuzzy logic controllers performed better, or at least as good as, a conventional controller and can be employed where conventional control techniques are inappropriate Sugeno, 1985;Ying et al., 1990). In contrast to adaptive control, fuzzy logic algorithms do not require a detailed mathematical description of the process to be controlled and therefore the implementation of fuzzy logic should, theoretically, be less demanding computationally. Fuzzy logic algorithms can be designed for environments where the available source information is not accurate, subjective and of uncertain quality. Furthermore, these algorithms provide a direct means of translating qualitative and imprecise linguistic statements on control procedures into precise computer statements. In this chapter, a proposed fuzzy logic design to control an actual industrial robot arm is outlined. The description of fuzzy logic controller is described in Section 2. It includes the methodology for the design of a fuzzy logic controller for use in robotic application. Section 3 presents the robot control system architecture. In Section 4, the relevant issues that arise relating to the design techniques employed are discussed in detailed. These issues include choise of sampling time, fuzzy rules design strategy, and controller tuning strategy. To evaluate the effectiveness of the proposed design strategy, studies are made to investigate which design strategy leads to the best control performance under various robot conditions. Section 5 concludes this chapter.
Description of Fuzzy Logic Controller Architecture
The basic structure of the fuzzy logic controller (FLC) most commonly found in the literature is presented in Fig. 1 (Lee, 1990a). The basic configuration of a fuzzy system is composed of a fuzzification interface, a knowledge base, a fuzzy inference machine and a defuzzification interface as illustrated in the upper section of Fig. 1. The measured values of the crisp input variables are mapped into the corresponding linguistic values or the fuzzy set universe of discourse at the fuzzification interface. The knowledge base comprises both the fuzzy data and fuzzy control rules. The fuzzy data base contains all the necessary definitions used in defining the fuzzy sets and linguistic control rules whereas, the fuzzy control rule base includes the necessary control goals and control policy, as defined by an experts, in the form of a set of linguistic rules. The fuzzy inference engine emulates human-decision making skills by employing fuzzy concepts and inferring fuzzy control actions from the rules of inference associated with fuzzy logic. In contrast to the fuzzification stage, the defuzzification interface converts the values of the fuzzy output variables into the corresponding universe of discourse, which yields a non-fuzzy control action from the inferred fuzzy control action. In general, for a regulation control task, the fuzzy logic controller maps the significant and observable variables to the manipulated variable(s) through the chosen fuzzy relationships. The feedback from the process output is normally returned a crisp input into the fuzzification interface. The crisp or non-fuzzy input disturbance, illustrated in Fig. 1, would normally include both error and change in error, and these are mapped to their fuzzy counterparts at the fuzzification stage. These latter variables are the inputs to the compositional rules of inference from which the fuzzy manipulated variable is obtained. At the output from the defuzzification process, a crisp manipulated variable is available for input to the process. In conclusion, it can be stated that to design a fuzzy logic controller, six essential stages must be completed: 1. Input and output variables to be used must be identified. 2. Design the fuzzification process to receive the chosen input variables. 3. Establish the data and rule bases. 4. Select the compositional rule of inference for decision making. 5. Decide which defuzzification process is to be employed. 6. Develop the computational units to access the data and rule bases.
Input and Output Variables
In any fuzzy logic control system, the observed input must be fuzzified before it is introduced to the control algorithm. The most commonly used antecedents at this fuzzification stage are the state variables, error and rate of change in error. For the case of positioning a joint within a robot arm, the first variable is the difference (error) between the desired and the current joint position. The value of the second state variable is the numerical difference between two successive values of error (change in error). These two state variables give a good indication of the instantaneous performance of the system and both variables are quantifiable by fuzzy sets. In this project, error (E) and change in error (CE) are defined as the input fuzzy sets and the controlled action (CU) as the output fuzzy set. The evaluation of the error and the change in error at sample interval, k, is calculated as follows :
Method of Representing Fuzzy Sets
According to Lee (1990a), there are two methods for defining a fuzzy set; nu-merical and functional, depending on whether the universe of discourse is discrete or continuous. In the case of a discrete universe, a numerical definition is employed where the value of the membership function is represented by a vector; the order of the vector dependent on the degree of discretisation. The user has to specifically define the grade of membership of each cardinal in the fuzzy sets. For a continuous universe of discourse, a functional definition can be utilised to define the membership function of a fuzzy set. The triangle, trapezoidal and the bell shaped functions are the popular types found in many engineering applications. In this Chapter, this latter form of representation is adopted. The evaluation of the membership function is evaluated on-line during process operation. A combination of bisected trapezoidal, trapezoidal and triangular shaped fuzzy set templates are used to represent the input and output variables; template shapes that are readily evaluated and require the minimum of computer memory storage. At present, researchers are still looking for the best guidance to determine the best shape for a fuzzy set to provide an optimum solution to a specific control problem. In general, the use of simple shapes could provide satisfactory performance. The geometry of these templates can be defined by the base width and the side slope when mapped to the universe of discourse.
Mapping Fuzzy Sets to the Universe of Discourse
In any application, it is essential for a practitioner to identify the most appropriate parameters prior to the mapping of the fuzzy sets to the chosen universe of discourse; the determination of the size of both the measurement and control spaces; the choice of the discretisation levels for both the measurement and control spaces, the definition of the basic fuzzy sets within these discretised spaces and finally the sample interval to be used. The size of both the measurement and control spaces can be directly determined by estimating the probable operating range of the controlled system. However, the choice of the discretisation levels in both the measurement and control spaces, and the fuzzy set definitions can only be defined subjectively and are normally based on the experience and judgement of the design engineer. From a practical point of view, the number of quantisation levels should be large enough to provide an adequate resolution of the control rules without demanding excessive computer memory storage. Generally 5 to 15 level of discretisations are found to be adequate. It should be emphasised that the choice of these parameters has a significant influence on the quality of the control action that can be achieved in any application (Lee, 1990a). The use of higher resolution in the discretisation levels will result in an increase in the number of control rules and thereby make the formulation of these control rules more difficult. It should also be emphasised that the fuzzy sets selected should always completely cover the whole of the intended working range to ensure that proper control action can be inferred for every state of the process. The union of the support sets on which the primary fuzzy sets are defined should cover the associated universe of discourse in relation to some value, . This property is referred to as the " -completeness" by Lee (1990a). To ensure a dominant rule always exists, the recommendation is that the value of at the crossover point of two overlapping fuzzy sets is 0.5. At this value of , two dominant rules will be fired. To define the input fuzzy sets, error (E) and change in error (CE), the following procedure is adopted. In the case of the former fuzzy sets, the maximum range of error for a particular joint actuator is calculated. For example, a robot waist joint with a counter resolution of 0.025 degree per count, and a maximum allowable rotation of 300.0 degree would result in a maximum positional error of 12000 counts. A typical schematic representation for the error fuzzy set universe of discourse would be as illustrated in Fig. 2. The linguistic terms used to describe the fuzzy sets in Fig where N is negative, P is positive, B is big, M is medium, S is small and ZE is zero; a notation that is used throughout this chapter. Combinations of these letters are adopted to represent the fuzzy variables chosen, for example Posi-tiveBig, PositiveMedium and PositiveSmall. As a result, 7 discretisation levels are initially defined for each input and output domain. The size and shape of the fuzzy sets displayed in Fig. 2 are chosen subjectively and tuned during process operation to obtain the most appropriate response. The proposed tuning methodology of these fuzzy sets is detailed later in Figure 4.2. To determine the domain size for the change in error variable in this project, an open loop test was conducted. In this test, a whole range of voltage (from the minimum to the maximum) was applied to each of the robot joint actuator and the respective change in angular motion error was recorded every sample interval. From this information, the fuzzy sets illustrated in Fig. 3 for the change in error were initially estimated. Although the open loop response of the system will be different from the close loop response, it will give a good initial guide to the size of the domain appropriate for use with the fuzzy logic controller. It should be noted that the choice of sampling interval is very important because it will affect the maximum change in error value recorded. It was found that the use of a very high sampling rate caused the recorded maximum change in angular motion error to be close to zero and this made it impossible to define the location of each fuzzy set in the domain of discourse. For example, a sampling period of 0.001 seconds will result in a maximum change in waist positional error of 2 counts; a value found experimentally. In a similar manner, the control variable output fuzzy sets were selected. However, in this particular case, the dimentionality of the space is determined by the resolution of the available D/A converters. The D/A converters adopted are of an 8-bit type which yield 256 resolution levels as indicated on the horizontal axis in Fig. 4(a). Again, the universe of discourse was partitioned into 7 fuzzy set zones as depicted in Fig. 4(b).
Figure 4(a). A typical characteristic for the waist joint actuator
It should be noted that the fuzzy set labelled Zero is defined across the dead zone of the dc-servo motor in order to compensate for the static characteristics of the motor in this region. The initial sizes and distribution of the fuzzy sets are tuned during operation to improve the closed loop performance of the system.
Transforming a Crisp Input to a Fuzzy Variable
Consider the trapezoidal representation of an error fuzzy set as illustrated in Fig In a similar manner the properties of a triangular or bisected trapezoidal fuzzy set template can be defined.
Defining the Fuzzy Rule Base
The fuzzy rule base employed in FLC contains fuzzy conditional statements which are currently chosen by the practitioner from a detailed knowledge of the operational characteristics of the process to be controlled. The fuzzy rule base can be derived by adopting a combination of four practical approaches which are mutually exclusive, but are the most likely to provide an effective rule base. These can be summarised as follows (Lee, 1990a): 1. Expert experience and control engineering knowledge. In nature, most human decision making are based on linguistic rather than numerical descriptions. From this point of view, fuzzy control rules provide a natural framework for the characterisation of human behaviour and decision making by the adoption of fuzzy conditional statements and the use of an inference mechanism. 2. Operational experience. The process performance that can be achieved by a human operator when controlling a complex process is remarkable because his reactions are mostly instinctive. An operator through the use of conscious or subconscious conditional statements derives an effective control strategy. These rules can be deduced from observations of the actions of the human controller in terms of the input and output operating data. 3. Fuzzy model of the process. The linguistic description of the dynamic characteristics of a controlled process may be viewed as a fuzzy model of a process. Based on this fuzzy model, a set of fuzzy control rules can be generated to attain an optimal performance from a dynamic system. 4. Learning. Emulation of human learning ability can be carried out through the automatic generation and modification of the fuzzy control rules from experience gained. The rule base strategy adopted in this work Design and Implementation of Fuzzy Control for Industrial Robot 417 is developed from operational and engineering knowledge. The initial control rule base adopted is displayed in the look-up
Fuzzy Inference Mechanism
One virtue of a fuzzy system is its inference mechanisms which is analogous to the human decision making process. The inference mechanism employs the fuzzy control rules to infer the fuzzy sets on the universe of possible control action. The mechanism acts as a rule processor and carries out the tasks of manoeuvring the primary fuzzy sets and their attendant operations, evaluating the fuzzy conditional statements and searching for appropriate rules to form the output action. As mention earlier, the input and output variables of error, change in error and control action, UE , UCE and UCU. respectively, are all chosen to be discrete and finite, and are in the form of; (7) where indicates a fuzzy subset. As a result of selecting 7 discretisation levels for each fuzzy input and output variable, i.e. PB, PM, PS, etc., 49 fuzzy control rules result. These control rules are expressed in the form of fuzzy conditional statements; (8) At sample interval k, the jth fuzzy control rule, equation (8), can be expressed as; (9) where e(k), ce(k) and cu(k) denote the error, change in error and manipulated control variable respectively. The jth fuzzy subsets Ej , CEj and CUj are defined as; Alternatively, Equation (9) can be evaluated through the use of the compositional rule of inference. If the minimum operator is utilised, the resulting membership function can be expressed as; (11) where the symbol indicates the fuzzy implication function and j jj j EC EC U ℜ= × × denotes the fuzzy relation matrix on In term of the membership functions, this can be expressed as; To use the result of Equation (13), a defuzzification process is necessary to produce a crisp output for the control action value.
Choosing Appropriate Defuzzification Method
Several approaches (Lee, 1990b) have been proposed to map the fuzzy control action to a crisp value for input to the process. Basically, all have the same aim that is, how best to represent the distribution of an inferred fuzzy control action as a crisp value. The defuzzification strategies most frequently found in the literature are the maximum method and centre of area method: 1. The maximum method. Generally, the maximum method relies on finding the domain value, zo, that maximises the membership grade which can be represented by; In the case when there is more than one maximum membership grade in W, the value of zo is determined by averaging all local maxima in W. This approach known as mean of maximum method (MOM) is expressed as; 2. The center of area method (COA). The center of area method sometimes called the centroid method produces the center of gravity of the possibility distribution of a control action. This technique finds the balance point in the output domain of the universe of discourse. In the case when a discrete universe of discourse with m quantisation levels in the output, the COA method produces; where zi is the ith domain value with membership grade of (zi ).
420
Industrial Robotics: Theory, Modelling and Control
Experimental Setup
The robot control system is composed of the host computer, the transputer network, and the interface system to a small industrial robot. The schematic representation of the control structure is presented in Fig. 6. The controller structure is hierarchically arranged. At the top level of the system hierarchy is a desktop computer which has a supervisory role for supporting the transputer network and providing the necessary user interface and disc storage facilities. The Transputer Development System acts as an operating system with Occam II as the programming language. At the lower level are the INMOS transputers; in this application one T800 host transputer is resident on the COMET board and mounted in one of the expansion slots of the desktop computer with the remaining three transputers resident in a SENSION B004 system. The robot is the RM-501 Mitsubishi Move Master II, with the proprietary control unit removed to allow direct access to the joint actuators, optical encoders and joint boundary detection switches. The host transputer also provides an interface facilities to the user, for example, input and output operation from the keyboard to the screen. The three transputers resident in the SENSION B004 system are a T414 transputer which is resident on the GBUS-96 board and provides a memory mapped interface to the robot through a Peripheral Interface Adapter (PIA) Card. The remaining two T800 root transputers are used to execute the controller code to the robot. The PIA card allows a parallel input and output interface to the robot joint actuators and conforms to the interface protocol implemented on the GBUS-96 which is known as a GESPIA Card. The actual hardware arrangement together with the interfacing employed is shown in Fig. 7, with the Mitsubishi RM-501 robot shown in Fig. 8.
The Mitsubishi RM-501 Move Master II Robot
This industrial robot is a five degree of freedom robot with a vertical multijoint configuration. The robot actuators are all direct current servo motors, but of different powers. At the end of each joint, a sensor is provided to limit the angular movement. The length of link and its associated maximum angular motion is listed in Table 2. Fig. 9(a) and 9(b) illustrate the details of the robot dimensions1 and it's working envelop. The maximum permissible handling weight capacity is 1.2 kg including the weight of the end effector. Figure 9(a). Range of movement of waist joint and robot dimensions (all dimensions are measured in millimeter).
Experimental Studies
A program code for the development of a FLC was written in the Occam language and executed in a transputer environment. This approach would enable the evaluation of the robustness of the controller design proposed and applied to the first three joint of a RM-501 Mitsubishi industrial robot. A T800 transputer is assigned to position each joint of the robot independently. To determine the effect on controller performance of changing different controller parameters, one joint only is actuated and the other two are locked. In the first experiment the impact on overall robot performance of changes in sample interval was assessed. This was followed by an investigation into how best to tune a controller algorithm and whether guide-lines can be identified for future use. The problem is to overcome the effect of changing robot arm configuration together with a varying payload condition.
The Choice of Sampling Time
Inputs (error and change in error) to the fuzzy logic control algorithm that have zero membership grades will cause the membership grades of the output fuzzy sets to be zero. For each sample period, the on-line evaluation of the algorithm with 49 control rules has been found by experiment to be 0.4 milliseconds or less. Hence, to shorten the run time, only inputs with non-zero membership grades are evaluated. For times of this magnitude, real-time control is possible for the three major joint controllers proposed. It has been cited in the literature that it is appropriate to use a 0.016 seconds sampling period (60 Hertz) because of its general availability and because the mechanical resonant frequency of most manipulators is around 5 to 10 Hz (Fu et al., 1987). Experiments have been carried out to determine how much improvement can be achieved by shorten the sampling period from 0.02 seconds to 0.01 seconds. In the first experiment, the waist joint is subjected to a 60.0 degree (1.047 radian or 2400 counter count) step disturbance with all other joints in a temporary state of rest. The results shown in Fig. 10 suggest that very little improvement in transient behaviour will be achieved by employing the shorter sampling period. The only benefit gained is a reduction in the time to reach the steady state of 0.4 seconds. In a second test, the waist joint is commanded to start from its zero position and to reach a position of 5 degree (0.0087 radian or 20 counter count) in 2 seconds; it remains at this position for an interval of 1 second after which it is required to return to its home position in 2 seconds as showed in Fig. 11. Again the benefit is only very marginal and of no significance for most industrial applications. Despite these results, it was decided that the higher of the two sampling rates would generally ensure better transient behaviour, hence the 0.01 seconds sampling period is used throughout this project.
Design and Implementation of Fuzzy Control for Industrial Robot 425 Figure 10. Waist response to a step input for different sampling periods. Figure 11. Waist trajectory tracking at different sampling periods.
Controller Tuning Strategies
Tuning of a FLC may be carried out in a number of ways; for instance modifying the control rules, adjusting the support of the fuzzy sets defining magnitude and changing the quantisation levels. The objective is always to minimise the difference between the desired and the actual response of the system. Because of the large number of possible combinations available and the different operational specifications that exist, no formal procedure exists for tuning the parameters of a FLC. In practice, trial and observation is the procedure most commonly employed. This can be tedious and time consuming and may not result in the selection of the most suitable parameters and in many practical situations does require safeguards to prevent excessive output variations and subsequent plant damage. To establish a rule base for a FLC, it is necessary to select an initial set of rules either intuitively or by the combination of methods described in Figure 2.3 for the process to be controlled. Rule modifications can be studied by monitoring the response of the close loop system to an aperiodic disturbance in a phase plane for error and change in error in this case. This trial and observation procedure is then repeated as often as required until an acceptable response is produced. In this project, three different ways for tuning the controller have been investigated. The initial control rules were initially selected by modifying the rule based initiated by Daley (1984). The modifications made to the rule base given in Daley (1984) were necessary to ensure a faster transient response with minimum overshoot and steady state error for the robot arm. The rule base listed in Table 1 was found to be the most appropriate for the robotic control process. A second procedure for tuning a FLC is investigate in this section and the final method will be presented in the next section. The use of two inputs and one output means that there is a three dimensional space in which to select the optimal solution. In most reported cases, scaling factors or gains were introduced to quantified these three universes of discourse. Hence, making it possible to tune a controller by varying the values of these gain terms. However, in this project, the functional forms of the fuzzy sets were utilised and these were mapped directly to the corresponding universe of discourse. Thereby, tuning is carried out by adjusting or redefining each fuzzy set location in the universe of discourse. The strategy developed in this project is to show the effect of changing each of the input and an output definitions to establish the impact on the overall performance of the robot. The initial estimate for the fuzzy sets employed in the three domains of discourse were made off-line as detailed in Figure 2.2.1. Fig. 12 shows the estimates for the error fuzzy set definitions. The corresponding estimates for the change in error and controlled action fuzzy set definitions are plotted in Fig. 13 and Fig. 14, respectively. Tuning of the error fuzzy sets is made by gradually moving the fuzzy set locations in the universe of discourse closer to the zero value of error. A similar procedure is adopted to tune the output fuzzy sets, however, the initial selection positioned these fuzzy sets as close as possible to the equilibrium point. For these sets, tuning is executed by gradually moving the fuzzy sets away from the equilibrium point until an acceptable close loop response is found. To demonstrate the effect of changing the error fuzzy sets definition, three choices of error fuzzy sets definition were made and are plotted in Fig. 12(a) -12(c) named Case 1, 2 and 3. The other two fuzzy set definitions remained unchanged and are shown in Fig. 13 and 14. A step disturbance of 30.0 degree (1200 counter counts or 0.523 radian) was introduced at the waist joint and the other joints, shoulder and elbow, were locked at 10 degrees and 90 degrees respectively2. This robot arm configuration was chosen to exclude variations in inertia, gravitational force and cross coupling resulting from the movement of these two joints. The impact from the use of an inappropriate definition for the output fuzzy sets is significant when variations in these forces are present and this is studied in latter experiments. The joint response to a 30.0 degree step disturbance is shown in Fig. 15(a) and the associated control signals are shown in Fig. 15(b). Notice that in Figure 15(a), at about 1.0 second the joint response for error fuzzy set definition Case 1 started to deviate from the other two cases because of the large spread in the fuzzy sets in comparison to the other two cases depicted in Fig. 12(a). There is no noticeable difference in transient response between Cases 2 and 3 because the fuzzy set definitions are close to the equilibrium point. However, differences start to emerge as the two responses approach the chosen set point, Case 3 response is more oscillatory. This corresponds to the excessive voltage variations that can be observed in Fig. 15(b). This phenomenon occurred because of the use of a very tight definition for error (close to zero) in Case 3 (Fig. 12(c)) which leads to a large overshoot and hunting around the set point. The use of the fuzzy set definition, Case 2 gives a good accuracy throughout the whole envelope of operation for the waist joint with this particular robot arm configuration. The maximum steady state error is found to be 0.0013 radian (3 counter count) as a result of the coarseness of the fuzzy sets used; fuzzy set labelled ZERO is defined between -5 to +5 counts. This performance, however, is degraded when the robot arm configuration is changed. For example, when the shoulder and elbow joints are at full stretch and locked at 100.0 and 90.0 degree respectively, large overshoot cannot be avoided due to the increased inertia on the waist joint. Fig. 16 is provided as an illustration of the waist response to a small step disturbance of 1.0 degree (40 counter count or 0.017 radian). The blue line included in this figure is a waist response for the re-tune fuzzy sets definition which will be discussed later in this section. It can be seen that that despite exhaustive tuning of both the input fuzzy sets a large overshooting cannot be avoided for the second robot arm configuration. From these results, it can be concluded that to provide a better control a new combination of fuzzy sets have to be defined. A way of achieving this is to reduce the waist speed of operation. By redefining a smaller error, change in error and control action fuzzy sets in this region of operation, finer control around the equilibrium can be achieved but of a reduced operational speed. The smaller tuned fuzzy set combinations are plotted in Fig. 17, 18 and 19. The blue line in Fig. 16 shows the waist response and the corresponding controlled input signals for the smaller tuned fuzzy set combinations for com- The drawback of using the smaller fuzzy set combinations is obvious when the waist is subjected to a large step disturbance, for example at 30.0 degrees (1200 counter count or 0.523 radian) with the arm in the second configuration. The subsequent waist response is plotted in Fig. 20 together with the response for the second definition of the fuzzy set combinations. The waist response when using the smaller fuzzy set combinations of Fig. 16 and Fig. 20 show that the controller could position the waist with a smaller overshoot of 0.025 degree (1 counter count) and zero steady state error, however, the penalty to be paid is an increase in rise time. The zero steady state error is achieved because of the use of a fuzzy singleton definition in the error fuzzy set labelled ZERO, i.e. the error fuzzy set is defined between ± 1 in the universe of discourse as is depicted in Fig. 17. Although the Case 2 fuzzy set combinations can provide a faster response (about 1.2 seconds quicker for a 30.0 degree step input), the overshoot (0.25 degree) and steady state error (0.075 degree) are both greater, Fig. 20. These results deteriorate when the controller operates under gravitational force and variable payload. A further comparison to evaluate the performance between the smaller fuzzy set combinations and the Case 2 fuzzy set combinations is conducted by subjecting the waist to a sinusoidal signal disturbance of 30.0 degree amplitude and 4.0 seconds time periods. Fig. 21(a) shows clearly that trajectory following with the Case 2 fuzzy set combinations is by far the better result. Fig. 21(b) illustrates that the use of a smaller range of voltage in the output fuzzy sets definition could not generate an adequate controlled signal. These results suggest that tuning can never optimise simultaneously speed of response and accuracy. If fuzzy logic is to be used successfully in industrial process control, a method which can provides a fast transient response with minimum overshoot and steady state error must be found. One way to achieve this is to partition the problem into coarse and fine control, an approach suggested by Li and Liu (1989). Having investigated the problems associated with the control of the waist joint, the investigation was extended to the more difficult upper-arm link, the shoulder joint. The control of this joint is difficult because of the gravitational force acting on it. For example, when the elbow is fully stretched and the shoulder is at 30.0 degree location to the working envelop, a load of 0.4 kg. is enough to drag the shoulder joint downwards with 0 voltage (127 DAC input value) applied to the actuator. The use of a single output fuzzy set definition was found only suitable for a limited range of operation and not applicable for the robot employed in this study. To illustrate this limitation, Fig. 22 shows the effect of using a single output definition in 4 different operational regions when the elbow is fully stretched. To compensate for the gravitational loading effect, 4 operational regions were identified, and each was assigned a different output fuzzy set. The switches 1, 2, 3 and 4 would control the choice of the output fuzzy set in the range of 0 to 10 degrees, 10 to 30 degrees, 30 to 90 degrees and 90 to 130 degrees of the shoulder joint working envelop, respectively. The four switched output fuzzy sets are presented in Fig. 23(a) -23(d) and these have been tuned as previously discussed. In all four modes of operation, the input fuzzy set combinations of Case 2 were utilised. From Fig. 23(a) -23(d), it is obvious that the fuzzy sets labelled ZERO is moving towards the right of the plot from the left as the region of operation moves from 1 to 4. This is to compensate for the gravitational load which forces the joint to overshoot when moving downwards as can be seen in Fig. 20. It should be noted that the use of the switches in selecting the output fuzzy set definition is just a coarse estimate, and as a result can give up to a maximum steady state error of 0.125 degrees (5 counter count) for the shoulder joint working envelop. If more accurate positional control is needed, it will be necessary to increase the number of switching regions or alternatively a different method will have to be found. It should also be mentioned that the use of a trapezoidal function to represent the dead-zone area, mention in Section 2.2.1, is not suitable for implementation at this joint because of the unsymetrical nature of the actuator dead-zone in different regions of operation. Therefore, as an alternative the triangular function was used because it provides a more operational acceptable definition for the fuzzy sets. From the experiments gained in this section, it can be concluded that by using a trial and observation procedure, tuning of the FLC parameters can be successfully accomplished. To reduce the design time consumed adopting the trial and observation tuning method, a good initial estimate for the fuzzy set definitions is essential.
Conclusion
In this chapter, a methodology for the application of fuzzy logic theory to the development of a fuzzy logic controller had been presented and successfully implemented. The developed algorithm had been shown to be of simple design and implementable for real-time operation of a three joint industrial robot using joint space variables for control. The methodology to estimate the initial fuzzy sets has been presented. The use of the function form of template to represent the fuzzy sets provides a way to directly map these fuzzy sets into the corresponding universe of discourse. Unfortunately, this design could only be arrived at by the use of a trial and observation procedure and would suggest a more formal procedure must be developed for industrial applications. Furthermore, design by a trail and observation procedure cannot be guaranteed to yield the best result. In conclusion, it had been shown that a FLC can be designed to match specific process dynamics without the use of a process model within the control loop. Therefore, if automatic tuning can be introduced into the FLC design a very robust control approach will result and this could be directly applied to any poorly defined non-linear process. | 8,929.2 | 2006-12-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Quantifying gender preferences in human social interactions using a large cellphone dataset
In human relations individuals’ gender and age play a key role in the structures and dynamics of their social arrangements. In order to analyze the gender preferences of individuals in interaction with others at different stages of their lives we study a large mobile phone dataset. To do this we consider four fundamental gender-related caller and callee combinations of human interactions, namely male to male, male to female, female to male, and female to female, which together with age, kinship, and different levels of friendship give rise to a wide scope of human sociality. Here we analyse the relative strength of these four types of interaction using call detail records. Our analysis suggests strong age dependence for an individual of one gender choosing to call an individual of either gender. We observe a strong bonding with the opposite gender across most of their reproductive age. However, older women show a strong tendency to connect to another female that is one generation younger in a way that is suggestive of the grandmothering effect. We also find that the relative strength among the four possible interactions depends on phone call duration. For calls of medium and long duration, opposite gender interactions are significantly more probable than same gender interactions during the reproductive years, suggesting potential emotional exchange between spouses. By measuring the fraction of calls to other generations we find that mothers tend to make calls more to their daughters than to their sons, whereas fathers make calls more to their sons than to their daughters. For younger callers, most of their calls go to the same generation contacts, while older people call the younger people more frequently, which supports the suggestion that affection flows downward. Our study primarily rests on resolving the nature of interactions by examining the durations of calls. In addition, we analyse the intensity of the observed effects using a score based on a null model.
Introduction
In social interactions between humans, gender and age play a key role in the communities and social structures they form and the dynamics therein. For the caller-callee interactions in mobile communication there are four fundamental possibilities, namely male to male, male to female, female to male, and female to female, which together with age, kinship, and different levels of friendships affect the strengths of social interactions, giving rise to a wide scope of human sociality. The studies of primate brain size and its relation to their average social group size suggest that humans are able to maintain of the order of 150 stable relationships (Dunbar number) [1][2][3]. In addition the Social Brain hypothesis suggests that on the basis of emotional closeness human social networks can be divided into four cumulative layers of 5, 15, 50 and 150 individuals, respectively [4]. The concept of emotional closeness is, in general, hard to quantify, but previous studies have shown how it can be associated with the frequency of communication between two individuals [5,6]. This makes the concept quantifiable such that one can observe how much an individual shares social resources with his or her contacts of different gender and age.
Over the past decade or so, much research on human communication patterns has been done by using "digital footprints" data from modern communication technologies such as mobile phone calls and text messages as well as social media like Facebook, and Twitter [7][8][9]. Of these the mobile phone communication data of call detail records (CDRs) has turned out to help us in getting insight into the structure and dynamics of social networks, human mobility and behavioural patterns in much finer details than before [7]. It has also revealed how microscopic properties related to individuals translate to macroscopic features of their social organization such as networks. As a result of these studies we now have quite a good understanding of a number of structural properties of human social networks such as degree, strength, clustering coefficient, community structure, and motifs [10][11][12].
Apart from these basic structural properties of networks, more recent studies have given us insight into a number of other aspects of social networks, namely their dependence on temporal, geographic, demographic, and behavioral factors of individuals in the network [13][14][15][16][17]. One such observation pertains to the shifting patterns of human communication across the reproductive period of their lives, which appears to reflect parental care [18,19]. Another is a study using the postal code information in the data to show that the tie strength is related to geographical distance [20]. In addition, it has been shown that there is a universal pattern of time allocation to differently ranked social contacts [21]. Finally, recent studies indicate variation in connections and the number of friends with the age and gender [22,23]. The importance of the strength and significance of communication with top-ranked contacts have also been studied in detail [18,22].
In the present study, we focus on measuring the relative strengths of the four possible pairwise caller-callee interactions over their lifespans as a function of the caller's age. From the point of view of call initiation, we find that females play a more active role during their reproductive years as well as during their grandmothering period [24,25]. The grandmothering hypothesis is usually studied in the context of human longevity and evolutionary benefits. The notion deals with the focus of post-menopausal on their grandchildren. In general, the social focus of women are known to shift from the opposite gender in the same age cohort, when they are young, to the age cohort of their children, as they grow older. We observe that while females of grandmothering age are found to give more attention to their children, males up to the age of 50 years still keep stronger connection with their spouses of slightly younger age. Furthermore, the fraction of calls to individuals of different generations indicates that mothers tend to call their daughters more than their sons, whereas fathers call their sons more than their daughters. For younger individuals, most of their calls go to contacts of the same generation, whereas older people call younger people more frequently. The calling activity of older adults with the younger individuals who are below or around their reproductive age would signify parental and alloparental care, that is, caring for the children of children. We group these kind of behaviour as affection flows downward.
Methodology
In this study we analyse mobile phone communication records of a particular European mobile service provider containing time series of call detail records or CDRs of callercallee pairs. This dataset also includes demographic information such as age and gender of the callers. By using the gender information we measure the relative strengths for the four basic calling pattern such that we count the total fraction of calls for the caller-callee pairs of the same or of different genders by assuming a cut-off for the minimum call duration. We analyse all the CDRs for the year 2007 on a month-by-month basis for more than 2.4 million subscribers where both the caller's and the callee's demographics are known, totaling over 30 million calls. Since datasets of this kind are susceptible to error due to multiple subscriptions, we filtered out customers who have multiple subscriptions under the same contract number. Our study based on CDRs allows obtaining anonymized data from a very large population, but is in contrast to small-scale studies where volunteers are recruited and cross-validation of results is possible by collecting information from the participants through questionnaires [21,26].
Results
In order to analyze the gender preferences of individuals in interaction with others at different stages of their lives we choose the age of the caller and count the total number of calls within a time window. We apply a threshold for minimum call duration, such that calls shorter than the threshold value are considered not to be indicative of emotional closeness while calls longer than that are taken to indicate a meaningful emotional or social exchange relationship between the caller and the callee. Then we calculate the relative probabilities for the four possible types of caller to callee interaction. As it is difficult to decide a priori where the borderline between meaningless and meaningful is, we will vary the threshold for minimum call duration in measuring how the probabilities of the four ways of interaction vary with age and gender of the callers.
In Fig. 1 we show snapshots of the four interaction types for a call duration threshold of one minute. By considering callers and callees between 20 and 70 years of age we have calculated the calling or interaction probabilities for the same and different gender pairs. Here the probability is determined as the ratio between the total number of calls to the specific age callee and the total number of calls to all the callees within the age range of 20 to 70 years. From these communication patterns, the signature of a generation gap becomes evident. For example, the calling pattern between two females exhibits a rather clear signature of being triple lobed with the side lobes separated from the same age group center lobe by a generation gap up and down. This is indicative of frequent interactions between mothers and their daughters over the two generation gaps.
For callers with age g, we first divide the (outgoing) calls into four sets according to the genders of the caller-callee pairs, namely, FF, FM, MF and MM, where 'M' denotes males and 'F' denotes females. Then we further divide the sets by the duration of the calls (t, measured in seconds). For a given duration t, we calculate the relative probabilities among the four possible caller-callee pairs such that, f FF (g, t) + f FM (g, t) + f MF (g, t) + f MM (g, t) = 1. In case female-female pairs, f FF (g, t) = (total numbers of outgoing calls of duration t from female callers with age g to female callees)/(total number of outgoing calls of duration t from any caller (male or female) with age g), and likewise for other types of pairs. In Fig. 2 we show these probabilities as a function of the call duration t for different age groups of callers, i.e. 21-25, 31-35, 41-45, 51-55, 61-65, and 71-75 years. We find that the relative ranking between them is strongly dependent on call duration. At younger ages (21-25 years), the MM calls tend to be relatively short, with interactions peaking around 10 secs and being of the highest rank up to 100 secs then decaying, suggesting that these calls are concentrated on their same gender friends. However, as men age, they get married and change their interaction preference to their opposite gender partners (see the panels for the age groups of 31-35 and 41-45 years). At the same time the distribution of call duration becomes flatter making the average call duration longer, a trend also evident among the older age groups (51-55, 61-65, and 71-75 years). On the other hand, the ranking for Relative probabilities (f FF (g, t), f FM (g, t), f MF (g, t), and f MM (g, t)) of the four possible ways of interaction between same or different gender caller-callee pairs as a function of the call duration (in seconds) for six different age groups of callers, presented as panels of (A) 21-25, (B) 31-35, (C) 41-45, (D) 51-55, (E) 61-65, and (F) 71-75 years old. The relative ranking of these four possibilities is dependent on the age of the caller and the duration of calls the FF calls tend to be rather low for all the age groups up to a call duration of 100 secs. The distribution of call duration is initially quite flat and small in value, but it starts increasing at about the age of women bear their first child, peaking at around 1000 secs. This suggests frequent interactions between the daughter and her mother, and seems to indicate that the grandmothering effect has set in. As for opposite gender pairs, we find that below the age of 35 years, the FM and MF interactions show quite high values for medium to high call duration. This can be interpreted as an indication of strong bonding between spouses. But with age, the FM-interactions start decreasing while MF-interactions increase, thus showing inverse relationship from the age of 40 years onward for medium to high call duration. This observation suggests that as women age they shift their attention from their spouses to their children.
Next, aggregating calls over the different durations we calculate the relative probabilities as functions of age g, such that for a given g, we have, where f FF (g) = (total numbers of outgoing calls from female callers with age g to female callees)/(total number of outgoing calls from any caller (male or female) with age g), and likewise. In Fig. 3, we depict these relative probabilities for the four caller-callee interaction categories as functions of the caller's age, for call durations greater than 30 sec, 60 sec, 120 sec, and 240 sec, respectively. The probabilities are rather stable when the calls of very low duration are filtered out. If we concentrate only on the behavior observed for threshold values of 120 sec and 240 sec (see the two bottom panels), our observations are as follows: For individuals older than 30 years, MM interactions become less frequent, which can probably be attributed to men getting married and thus giving priority to their opposite gender spouses over the same gender friends. This picture is also supported by observing the age-wise variation of the MF-interactions, where we see that up to the age of 45 years men call their spouses more than they call others. However, MF-interactions also show a minimum around the age of 50 years after which they start increasing again from the age of 55 years on. This may be attributed to men's more frequent interactions with women one generation younger which corresponds to the age cohort of the daughters. On the other hand, the FF-interaction curve starts from a low value at about 27 years The fraction of calls of duration greater than 100 seconds (out of all the calls made) as a function of the caller's age for interactions between four types of caller-callee pairs. (B) The average call duration as a function of the caller's age for the four types of interaction between caller and callee of age, after which it shows a steadily increasing trend. This observation indicates again that before marriage, females call less frequently to other females. After the age of 27, the FF-interaction curve grows rapidly up to the age of about 65 years. This behavior lends support once more to the grandmothering effect. Finally, the curve for FM-interactions indicates that after the age of 35 years, the focus of women on their spouses starts progressively decreasing. A similar observation also presents itself when we consider only top-ranked calls (ranked by their call duration) as shown in the Appendix (see Fig. 7).
In Fig. 4(A), the fraction of calls having duration greater than 100 seconds (out of all the calls made) is shown as a function of the caller's age. Here, the fraction of longer calls for the four different pairs of interactions all peak around callers aged 30 years, after which the interactions decrease till about 50 years of age, followed by an increase till about 60 years of age, at which point the interactions seem to plateau. It should be noted that, for the FF curve, the increase from 50 to 60 years of age can again be taken as clear evidence of grandmothering. In Fig. 4(B), we measure the average call duration as a function of caller's age for the four different types of interactions of the same or different gender pairs. From the MM curve, it is evident that the average call duration for male-to-male calls is low throughout their lifespan. The FM and MF curves show that at younger ages (i.e. before marriage) both male-to-female and female-to-male participate in long phone calls. But after typical marrying age for this population (27 years, as indicated in the national statistics), call duration drops significantly. The FF curve shows that initially the fraction increases with age (up to the age of 40 years), then rapidly falls. It is nevertheless clear that after the age of around 35 years, the call duration for female-to-female calls is the highest among the four possible types of interaction, which again can be interpreted as a signature of the grandmothering effect.
In Fig. 5, we show the fraction of outgoing calls from a caller to a callee who is either one generation older or one generation younger. The caller-callee pairs with a generation gap are chosen such that the magnitude of the difference between the age of the caller and the age of the callee is greater than 20 years. Here we observe that FF-interactions always have the highest value for any age, which can be taken as evidence of a large amount of communication between mothers and their daughters. Before the age of 27 years (the Figure 5 Fraction of calls from the callers to previous or to next generation callees as a function of caller's age for the same and different gender pairs. The caller-callee pairs with a generation gap are chosen such that the magnitude of the difference between the ages of the caller and the callee is greater than 20 years. The normalization is such that the fraction of calls to caller's own generation and fraction of calls to the other generations sums up to one average age of marrying in this population), measurement of MF interactions indicates that sons are also strongly attached to their mothers. After the age of 40 years, the MF and FM interactions are very close to each other, suggesting that sons get the same amount of attention from both parents. On the other hand, the tie strength between fathers and their sons are reflected in the curve for MM interactions, which show a similar trend as the other interaction types. Notice that female callers are, relatively speaking, closer to the other generation than the same age male callers. In addition, the calling patterns of older people suggest that sons and daughters get different amounts of attention from their parents. In other words, from the mothers' point of view, daughters get more attention than the sons, while sons get more attention from their fathers than daughters do. We find that, at younger ages, the fraction of calls going from one generation to another is around 10% to 30% of the total number of calls. On the other hand, when the age of the callers reaches 60 years, they are found to mostly communicate with their children (ranging from 50% to 70%), which supports the claim that affection flows downward. A similar pattern emerges from an analysis of just the top ranked calls, as elaborated in the Appendix, see Fig. 8.
Quantification using a null model
In order to benchmark our results we present a null model [27,28] with the assumptions that the communication events are established at random with respect to gender and that the communication volume is proportional to the populations of caller and callee. We focus on the results shown in Fig. 2 and Fig. 3, where the deviations of the observed probabilities, like f FF (g, t) and f FF (g) with respect to the null model results provide a quantification of the amount of non-randomness. The model allows us to calculate a set of expectation values and standard deviations for the number of outgoing calls between different gender groups of the given age of callers. From the null model we obtain the set of probabilities {p FF (g), p FM (g), p MF (g), p MM (g)}, where, p FF (g) + p FM (g) + p MF (g) + p MM (g) = 1. Here p FF (g) denotes the probability of an outgoing call from a female caller of age g to a female callee, irrespective of the age of the latter. The probabilities for the other three gender pairs are similarly defined. We assume for a given age g, that the set of probabilities comprise of a multinomial distribution that signifies random reshuffling of the total number of outgoing calls across the four types of gender-based interactions. If C tot (g, t) is the total number of outgoing calls of duration t from callers with age g found in the dataset, then the expectation value and the corresponding variance of the number of outgoing calls of duration t from female callers with age g to female callees are, We obtain the probabilities for outgoing calls from one group to another following the calculation of edge probabilities in the configuration model of random graphs [29] such that the number of possible interactions between two sets of individuals is assumed to be the product of the populations of these sets. For instance, the probability p FF (a) = k[n F (g){n F (g) -1} + n F (g){n F,totn F (g)}], where n F (g) is the number of female subscribers with age g, n F,tot is the total number of female subscribers, and k is a proportionality constant that is independent of both the age and gender. The first term inside the brackets denotes the possible interactions between the females with age g and the second term accounts for the interactions with the females of other ages. By simplifying the expression and normalizing the probabilities we obtain p FF (g) = n F (g)n F,tot {n F (g) + n M (g)}{n F,tot + n M,tot } .
( 2 ) Note that it is also possible to obtain a further generalized form for the above probability, namely p FF (g, t) that is dependent on the call duration t. This can be calculated by solely taking into account the subscribers who participated in calls of duration t. However, in the current scheme we do not include this dependence. With C FF (g, t) being the number of outgoing calls from female subscribers with age g to other females, we first calculate a scaled deviation or the Z-score given by, Z(g, t) = {C FF (g, t) -C FF (g, t) }/σ FF (g, t) [30]. This score scales the deviation of the actual number of calls from the expected number of calls in terms of the standard deviation as shown in Eq. (1). The expression for Z can also be written in the form: 1p FF (g)). The last expression shows that Z(g, t) is the scaled deviation of the observed probability f FF (g, t) shown in Fig. 2. However, Z(g, t) gets amplified by the number of calls C tot (g, t) which depends on the volume of calling having different durations as well as the volume of calling which differs across the ages. Therefore, to quantify the non-randomness that is independent of the volume of calling we use the following normalized score, which is similar to a measure for the effect sizes [31], In Fig. 6(A) we plot the normalized scores corresponding to Fig. 2. The probabilities in the null model are provided in Table 1. A comparison between Fig. 2 and Fig. 6(A) reveals Table 1 The probabilities used in the null model that are used to calculate the normalized scores corresponding to the results in Fig. 2. The values shown below correspond to the age brackets used in Fig. 2 the following. First, a demotion of the MM calling probabilities in terms of the normalized score. The values of f MM (g, t) that are observed in Fig. 2 appear to be enhanced by the presence of larger number of male subscribers. In fact, the overall scores corresponding to the MM pairs across different call durations and ages are negative, implying that the MM communication is lower than what is expected in the null model. Interestingly, our conclusions regarding the importance of short duration calls for the MM pairs for younger callers (aged 21-25) is still supported as the peak of the MM curve crosses over to a positive value in Fig. 6(A). For the MF pairs in Fig. 6(A), similar to Fig. 2, the patterns are largely unchanged across different age ranges of the caller and the score remains mostly positive. The curves for the FM and FF pairs appear to be contrasting cases. With the increase in the age of the caller, the FM scores show an overall decrease moving from positive to negative values, while the FF scores change from negative to positive. For older females, the effects is the strongest for calls at larger duration, where the FF calling appears to be higher than expected while the FM calling is lower than expected. This is consistent with our conclusions regarding the shift of focus for older women. In Fig. 6(B) we present the scores based on the results in Fig. 3 where the quantities of interest are, for example, f FF (g) instead of f FF (g, t). Here, the corresponding normalized score is {f FF (g)p FF (g)}/ p FF (g)(1p FF (g)).
We only show the scores corresponding to a threshold of 30 seconds on the duration of calls. The scores in Fig. 6(B) are in a way an encapsulation of the behaviour depicted in Fig. 6(A). The variation in the scores for different pairs relative to each other is mostly similar to the results in Fig. 3. However, and most notably the scores appear to provide additional clarity of the nature of variation of the MM, FF, MF and FM curves for caller ages above 50 years. Whereas, in Fig. 3, the curves overlap, the scores for the different pairs in Fig. 6(B) appear well differentiated. The communication for the FF pairs appear to take the lead over the other pairs. Also, in the case of the MM pairs, the adjustment with the null model shows that the communication is relatively much lower than what would be expected and that the original f MM (g) is amplified due to larger number of male subscribers.
Summary and conclusion
In this study, we have measured the relative interaction probabilities for the four possible caller-callee pairs of the same and opposite gender. We have observed that in general the interaction probabilities are strongly dependent on the age and gender of the caller in relation to the age and gender of the callee. Also, we observed the communication over the generation gap as depicted in Fig. 1 showing the lobed structure and in the Appendix in Fig. 9, where we depict the distribution of calls made by the callers of certain age to callees as a function of callees' age showing it to be bimodal [18,19,22]. Our findings from the study of the distributions of call duration for different age groups of the caller (Fig. 2) shows that the MF interactions tend to increase with call duration up to age 50 years, suggesting that men have a strong emotional connection with their opposite gender spouse of about the same age. In contrast, the FM interactions indicate that women are not as active after the age of 35 years, and have a decreasing trend for medium or long call duration with age. On the other hand, the MM interactions show initially greater probability for short call duration at younger ages, after which this becomes least probable for medium and higher call duration for any age of the caller. The FF interactions start with the lowest probability of all at younger ages, then shows a steadily increasing trend for medium and higher call duration with the age of the caller.
In the investigation of the relative probabilities for the four types of interaction as a function of the caller's age with calls above certain threshold value (30 sec, 60 sec, 120 sec and 240 sec) (Fig. 3), we show that the FF interactions have an increasing trend with the caller's age. This is due to frequent interactions between the daughter and her mother, an indication that the grandmothering effect has set in. An opposite trend is observed for the FM interactions, i.e. the relative probability shows a decreasing trend with age. On the other hand, the MF interactions show a high probability for ages ranging from 20 to 50 years. After that, it shows a decreasing trend up to the age of 55 years, and then beyond that again shows an increasing trend. The MM interaction curve shows the weakest interaction after the age of 25 years.
The effect of the difference in the number of male and female subscribers on the counting of pairs and the results shown in either of Fig. 2 and Fig. 3 is understood with the help of the scores calculated using the null model. These scores reveal how the populations of males and females influence the counting of pairs. Moreover, zero being a reference for the score, the latter is able to differentiate between contrasting cases, that is between negative (below the expectation with respect to the null model) and positive scores (above the expectation). For example, in Fig. 6(A) that corresponds to Fig. 2, scores for female-female calling are found to change from negative to positive with age of the caller. Overall, the scores lend support to our original results yet bringing clarity on the nature of the results.
Looking at the fractions of calls of duration more than 100 sec as a function of the caller's age (Fig. 4) revealed that for the FF interactions there is an increase from 50 to 60 years of age, which once again is taken as a clear evidence of grandmothering setting in. We have also found that on the basis of the average call duration that around 35 years of age the duration of female-to-female calls is highest among the four possible types of interaction. This can again be interpreted as a signature for the grandmothering effect. Furthermore, we showed (Fig. 5) that there is a fraction of total calls going from callers to callees who are either one generation older or one generation younger. Here we observed that for female callers the fraction of calls going to a different generation is, for all ages, always greater than for male callers. More precisely, the FF interactions show the highest probability, most likely reflecting strong ties between mothers and their daughters. On the other hand, at a younger age, a large fraction of calls go from males to females, suggesting that sons are strongly attached to their mothers before marriage. After the age of 40 years, the MF and FM interaction curves are very close to each other, suggesting that sons get the same amount of attention from both parents. More generally, we have found that younger callers, most of the calls (70-90%) to callees of the same generation. On the other hand, for older people, most of their calls go to their children (i.e. contacts who are younger by a generation), which supports the claim that affection flows downward. Broadly speaking, our conclusions regarding the preference of communicating individuals as a function of their age, and in particular, the preference of women in their post-reproductive period, is based on the observed variation in the quantities, namely the relative frequency of outgoing calls, the propensity to make calls of longer duration, and the proliferation of crossgenerational calls. However, the consistency with alternative hypotheses could as well be investigated but those should be able to explain the patterns of communication taking into account the age dependent variation in kinship and different levels of friendship.
The advent of newer channels of digital communication in the last decade has rapidly supplemented the usage of mobile phones. The pattern of communication availing multiple modes, for example, voice calling in conjunction with text-messaging or through social networking services, is in turn able to characterize the nature of sociality [32][33][34]. How-ever, the fact that variability in calls can also serve as an important factor, has been rather overlooked. In previous works by some of the authors we had focussed on undirected communication and tried distinguishing communication between peers, partners and kins for individuals of different gender and age [19]. Here, we investigate outgoing calls specific to gender and age, and consider duration of calls in the same spirit as multiple available channels. On one hand the current study stands consistent with the earlier works that revealed patterns in sociality like grandmothering, on the other the study might be indicative of gender and age differences in selecting different channels of communication. The understanding of the immediate social neighbourhood of individuals as well as the preferences Here the calls are ranked by call duration 20% of the calls and top 10% of the calls): (i) Beginning with a low value, the FF interaction curve has an increasing trend with caller age; (ii) the FM interaction curve starts with high value and then it shows a decreasing trend with age; (iii) the MF interaction curve shows a high value from age 30 to 40 years, after which it has a decreasing trend up to the age of 55 years, after which it shows an increasing trend; (iv) finally, the MM interaction curve differs from all the other curves by showing a low value for the probability for all the ages of the caller.
In Fig. 8, we depict the fraction of the calls going from the caller to the callee of either the previous or next generation. The main conclusions are as follows: (i) the FF-interaction curve shows the highest value for any age of the caller as clear indication of the mothers and daughters frequent connections; (ii) the MF-interaction curve indicates that sons are more attached to their mothers before they marry; (iii) after the age of 40 years, the MF and FM interaction curves are very close each other, indicating that sons get the same amount of attention from their parents. Also the behavior of all the four possible combinations of social interaction after the age of 40 tells us that mothers more frequently call their daughters than their sons, and fathers more frequently call their sons than their daughters. In Fig. 9, we show the distribution of calls made by the callers of certain age to callees as a function of callees age (red lines) and the corresponding average call durations (green lines). The distributions of calls turn out to be bimodal, with a maximum at around caller's own age and another maximum at an age difference of one generation [18,19,22]. | 8,090.6 | 2019-03-12T00:00:00.000 | [
"Sociology",
"Computer Science"
] |
On Forecasting Cryptocurrency Prices: A Comparison of Machine Learning, Deep Learning, and Ensembles
: Traders and investors are interested in accurately predicting cryptocurrency prices to increase returns and minimize risk. However, due to their uncertainty, volatility, and dynamism, forecasting crypto prices is a challenging time series analysis task. Researchers have proposed predictors based on statistical, machine learning (ML), and deep learning (DL) approaches, but the literature is limited. Indeed, it is narrow because it focuses on predicting only the prices of the few most famous cryptos. In addition, it is scattered because it compares different models on different cryptos inconsistently, and it lacks generality because solutions are overly complex and hard to reproduce in practice. The main goal of this paper is to provide a comparison framework that overcomes these limitations. We use this framework to run extensive experiments where we compare the performances of widely used statistical, ML, and DL approaches in the literature for predicting the price of five popular cryptocurrencies, i.e., XRP, Bitcoin (BTC), Litecoin (LTC), Ethereum (ETH), and Monero (XMR). To the best of our knowledge, we are also the first to propose using the temporal fusion transformer (TFT) on this task. Moreover, we extend our investigation to hybrid models and ensembles to assess whether combining single models boosts prediction accuracy. Our evaluation shows that DL approaches are the best predictors, particularly the LSTM, and this is consistently true across all the cryptos examined. LSTM reaches an average RMSE of 0.0222 and MAE of 0.0173, respectively, 2.7% and 1.7% better than the second-best model. To ensure reproducibility and stimulate future research contribution, we share the dataset and the code of the experiments.
Introduction
Cryptocurrencies are virtual currencies that rely on blockchain technology. They have seen widespread market adoption since the introduction of Bitcoin in 2009, the most popular crypto so far. Many different subjects trade cryptos and invest in crypto funds and companies; according to CoinMarketCap [1], the global market capitalisation of cryptocurrencies reached an estimated value of USD 932.49 billion in September 2022. Although investments have seen lucrative returns, ubiquitous price fluctuations across most cryptocurrencies make such investments challenging and risky. For example, Bitcoin's price has been highly volatile since its market launch, reaching peaks as high as +122% and +1360% in 2016 and 2017, respectively [2]. Ethereum, XRP, and Litecoin have seen similar fluctuations in 2017 alone [2].
For these reasons, investors require a forecasting approach to effectively capture crypto price fluctuations to minimise the risk and increase their profit. Moreover, it is possible to use volatility forecasts to estimate swings in their price, which is useful for developing and analysing quantitative financial trading strategies [3]. However, similar to stock price forecasting, whose market is dynamic and complex as well [4], crypto price forecasting is regarded as one of the most challenging prediction tasks in the financial domain at present [5]. Most successful researchers cast this problem as an example of time series forecasting [6][7][8][9][10][11], since the idea is to leverage historical and current price data to predict future prices over a period of time or a specific point in the future. Time series analysis has also been applied in weather forecasting and demand forecasting for retail and procurement, for example.
In the literature, the application of statistical techniques is the traditional approach for time series forecasting. Such techniques adopt statistical formulas and theories to model and capture patterns in the time series. The most frequently employed statistical models are the autoregressive integrated moving average (ARIMA) model and its variants, exponential mmoothing, multivariate linear regression, multivariate vector autoregressive model, and extended vector autoregressive model [12]. In addition, in forecasting the future prices of cryptos, the most popular example is the ARIMA [13]. Researchers have commonly employed this model to forecast Bitcoin prices [6,14,15]. Other models have also been applied, such as generalized autoregressive conditional heteroscedasticity (GARCH) models in volatility forecasting of cryptos [16,17] and diffusion processes in probabilistic forecasting of cryptos [18].
Another research branch employs machine learning (ML) models such as stochastic gradient boosting machines [19], linear regression, random forest, support vector machines, and k-nearest neighbours [20]. By leveraging historical data, these techniques focus on identifying the most influential features that determine future crypto prices to boost prediction accuracy.
A third body of work employs deep learning (DL) models to tackle crypto price forecasting, following their recent widespread success in quantitative finance [21]. Neural networks, recurrent neural networks (RNN) such as gated recurrent unit (GRU) and long short-term memory (LSTM), yemporal convolutional networks (TCN), and hybrid architectures have been applied to predict prices of Bitcoin, Ethereum, and Litecoin, for example [7,9,11]. DL approaches are considered effective at time series forecasting because they are robust to noise, they can provide native support for data sequences, and they can learn non-linear temporal dependencies on such sequences [22].
Although the literature has proposed statistical, ML, and DL techniques, there is no clear evidence of which of these approaches is superior. Indeed, the research is scattered and lacks generality because it focuses on predicting the price of a single crypto among a small number of the most popular cryptocurrencies (mainly Bitcoin). Moreover, the over-complexity of the model architecture makes their adoption in a real-world scenario very challenging because implementation, training, and predictions are expensive. Lastly, with different datasets, pre-processing strategies, and experimental methodologies, the approaches' comparisons are inconsistent, the experiments are hard to reproduce, and their findings are therefore unreliable.
The main goal of this paper is to overcome these limitations and shed light on the effectiveness of the most popular approaches proposed in the literature so far on the crypto price prediction task. Therefore, as a major contribution, we design a framework for comparing widely used statistical, ML, and DL approaches in predicting the price of five popular cryptocurrencies, i.e., Ripple (XRP), Bitcoin (BTC), Litecoin (LTC), Ethereum (ETH), and Monero (XMR). DL networks selected include different architectures such as convolutional neural networks, recurrent neural networks, and transformers. To the best of our knowledge, we are also the first to propose using temporal fusion transformer (TFT) as a DL approach to tackle crypto price prediction. In addition, we investigate the use of hybrid models and ensembles to determine whether a combination of multiple models can improve the accuracy of the predictions.
To overcome cryptocurrency prices' high fluctuation and volatility, we transform non-stationary time series into stationary ones by applying detrending. Predictive models are trained and tested on a 5-year time-window dataset we collected from online cryptocurrency trading platforms. Our evaluation methodology spans over one year of data and is incremental with monthly time windows. Results show that DL approaches are better than ML and statistical approaches, and, for DL models, complex architectures outperform less complex ones. To ensure reproducibility and stimulate future research contribution, we open source the dataset and the code of the experiments (https: //github.com/katemurraay/tsa_crt, accessed 15 January 2023), as we believe our work to be an essential starting point for practitioners to investigate crypto price prediction.
The remainder of this paper is structured as follows: Section 2 presents the models comparison, the data collection and preprocessing, and finally describes the experimental methodology; Sections 3 and 4 outline the results of the experiments and discuss their findings, respectively; finally, Section 5 draws conclusions and illustrates future plans.
Materials and Methods
In our framework, we assume the availability of a dataset of size m with daily interval granularity, i.e., each dataset's instance refers to a timestamp day t i , i ∈ (1, m), where t 1 and t m denote the earliest and the latest data points available in the dataset, respectively. We denote with y t i the value of the target variable at timestamp t i , i.e., the cryptocurrency price to predict. We also denote with x t i the features available at time t i ; x t i = [y t i−l , . . . , y t i−1 ], where l is the length of the window considered as input by the models. Our goal is to build predictive models that learn a function f (x t i ) = y t i , see Section 2.1 for the list of models we employ in this study. This learning task is a typical example of univariate time series analysis because only one variable (i.e., the crypto price, y) varies over time.
In the remainder of this section, we describe the predictive models, the data acquisition and its preprocessing, and the experimental methodology we use to compare the models.
Predictive Models
Below we give details of the statistical, ML, DL, hybrid, and ensemble models we compare.
•
Auto Regressive Integrated Moving Average (ARIMA). This is a generalisation of the simpler ARMA model (auto regressive moving average). The traditional threestep process of constructing ARIMA models by [13], includes model identification, parameter estimation, and finally, the diagnosis of the simulation and its verification. Essentially, a prediction for a y t target value is the linear combination of the y t i values up to the t target timestamp and the prediction errors made for the same y x t i values. Examples of ARIMA usage include forecasting for air transport demand [23,24], longterm earning prediction [25], and next-day electricity price prediction [26]. ARIMA has effectively predicted BTC prices in [6,14,27]. • k-Nearest Neighbor (kNN). Originally suited for classification tasks, kNN is a nonparametric model that has been successfully extended and employed for regression tasks in time series analysis. To predict y t target , the kNN calculates the k most-similar x t i values to x t target . Then, prediction of y target is the weighted average of the k y t i values. The kNN model has been used in financial forecasting [28], electric market price prediction [29], and in the prediction of Bitcoin [30]. • Support Vector Regression (SVR). Built on support vector machines for classification, SVR enables both linear and non-linear regression. Similarly to kNN, SVR is a nonparametric methodology introduced by [31]. SVR aims to maximise generalisation performance when designing regression functions [32]. SVR was applied to a variety of time series tasks such as forecasting warranty claims [32], predicting blood glucose levels [33], and for stock predictions in the financial market [34]. Examples of SVR usage in forecasting crypto prices can be found in [20,21]. • Random Forest (RF) Regressor. This is essentially an ensemble of decision trees, each of which is built on a random subset of the training set. RF's predictions are performed by averaging the predictions of individual trees. The key benefits of RF are its generalisation capability, and minimal sensitivity to hyperparameters [35]. RF has been used in time series tasks for forecasting cyber security incidents [36], for the prediction of methane outbreaks in coal mines usage [37], and for projecting monthly temperature variations [35]. In the prediction of cryptos, RF has been used for BTC forecasting in [20] and BTC, ETH, and XRP in [19]. • Long Short Term Memory (LSTM). This is a type of RNN capable of learning longterm dependencies and, therefore, is suitable for time series analysis [38]. Although LSTMs follow a chain-like structure similar to ordinary RNNs, in an LSTM's repeating module, four neural layers interact, i.e., two in the input gate, one in the forget gate, and one in the output gate. The input gate adds or updates new information, and the forget gate removes irrelevant information. The output gate ultimately passes updated information to the following LSTM cell. Examples of LSTM usage can be found in short-term travel speed prediction [39], predicting healthcare trajectories from medical records [40], and forecasting aquifer levels [41]. The model has also been successful for crypto price prediction [7][8][9]. • Gated Recurrent Unit (GRU). Although the GRU model is similar to LSTM, the former improves upon the computational efficiency of the latter because it has fewer external gating signals in the interpolation. Consequently, the related parameters are reduced. GRU has been used in the short-term prediction for a bike-sharing service [42], network traffic predictions [43], and forecasting airborne particle pollution [44]. GRU was found in [10] to forecast the prices of BTC, ETH, and LTC successfully. • LSTM-GRU (HYBRID). This method was proposed by Patel et al. [11] to avail of the advantages of both LSTM and GRU. Their study indicated that this hybrid approach effectively predicted Litecoin and Monero daily prices, for this reason we include it herein. Combinations of LSTM and GRU have been successfully applied to predict water prices [45]. • Temporal Convolution Network (TCN). Presented by Bai, Kolter, and Koltun [46], TCN is a variant of the convolutional neural network architecture, and uses dilated, causal, one-dimensional convolutional layers. TCN's causal convolutions prevent future data from leaking into the input. TCNs have been widely adopted in time series forecasting. For example, TCNs can produce a short-term prediction of wind power [47], predict just-in-time design smells [48], and forecast in stock volatility [49]. In addition, TCN was effective at forecasting weekly Ethereum prices [50]. • Temporal Fusion Transformer (TFT). Introduced by [51], the architecture of TFT is built on the vanilla transformer architecture. TFT is one of the most recent deep learning approaches for time series forecasting. Its design incorporates novel components such as gating mechanisms, variable section networks, static covariates, prediction intervals, and temporal processing. TFT has been applied in other time series tasks such as the prediction of pH levels in bodies of water [52], flight demand forecasting [53], and projecting future precipitation levels [54]. To the best of our knowledge, we are the first to employ it for the crypto price prediction.
We employ the voting regressor for the ensemble, a combination of different base inducers using the models described above. We build a total of 502 ensembles, one for each possible combination. An ensemble's prediction is given by averaging the predictions from the individual models that compose the ensemble. Note that each individual model was trained separately and independently.
In our comparison, other approaches for time series forecasting could have been investigated, for example, functional data analysis for predicting electricity prices [55,56], group method of data handling and adaptive neuro-fuzzy inference system for predicting faults [57], and multi-modality graph neural network for financial time series prediction [58]. However, we limited our choice to the most popular and representative models proposed in each category (i.e., statistical, ML, and DL) in the literature because a complete and exhaustive comparison of time series methods is beyond the scope of this paper.
Binance.com is the world's largest and most popular cryptocurrency exchange portal for daily trading. It provides an array of features specific to cryptocurrency products which include market information for thousands of cryptocurrencies. Investing.com acts as a global portal for stock market information and analysis on many worldwide financial markets. For our investigation, we selected five popular cryptocurrencies in the literature, i.e., XRP, Bitcoin (BTC), Litecoin (LTC), Ethereum (ETH), and Monero (XMR).
The data collection process made use of the Binance API as a primary resource and it was complemented by information retrieved from Investing.com when missing values occurred (e.g., when the closing price of XMR was not available for a specific day). The time frame of the collected data ranges from 1 June 2017 to 31 May 2022, i.e., five years. A summary of the resulting datasets are reported in Table 1, and the covariates available for the i-th instance of each dataset are the following: • t i -the timestamp of the day; • OP t i -the opening price of the cryptocurrency at t i ; • HP t i -the highest price of the cryptocurrency at t i ; • LP t i -the lowest price of the cryptocurrency at t i ; • y t i -the target variable, i.e., the closing price of the cryptocurrency at t i (which corresponds to the opening price of the following day, i.e., OP t i+1 = y t i ).
In this paper, we address the crypto price prediction task as a univariate time series analysis problem, and therefore we ignore the covariates OP, HP, and LP, but they are included in the available preprocessed dataset. We plan to consider such covariates in future work.
Data Pre-Processing
When forecasting with time series, their stationarity property is crucial for effective modeling [5]. A time series with mean and variance that do not change over time is referred to as stationary. On the contrary, a time series whose mean, frequency, and variance fluctuate over time and frequently display high volatility, trend, and heteroskedasticity is referred to as non-stationary [5]. Typically, traditional statistical forecasting methods such as ARIMA require time series to be stationary in order to successfully capture their properties [59]; similarly, stationarity favours learning in non-statistical models such as the ML and DL employed in this paper [60]. For these reasons, we run the augmented Dickey-Fuller (ADF) statistical test [61] to identify whether our datasets are stationary. The results show that all datasets are non-stationary except the XRP dataset.
We transform our datasets into stationary datasets by applying detrending, i.e., the process of removing the trend from a time series. In particular, we apply the differencing transformation, the simplest detrending technique that generates a new time series where the new value y t i at timestamp t i is calculated as the difference between the original observation and the observation y t i−1 at the previous time step, i.e., (1) Figure 1 shows the original Bitcoin time series in yellow and its differenced version in red. The ADF test computed on the detrended datasets confirm their stationarity.
Another typical pre-processing step that is widely adopted to enhance learning is data normalisation (e.g., [11]). We apply the Min-Max normalisation to all y t i of each dataset, so that values are mapped in the (0, 1) range according to the following formula: where y min = min{y t i } and y max = max{y t i }. To avoid leakage, y min and y max values are calculated from training data only.
Experimental Methodology
We performed experiments on each dataset/crypto separately, with the following methodology that was the same for all models. We performed an initial temporal trainingtest split on each dataset. The first 80% of the data belonged to the training set (i.e., four years of data, from t = 1 June 2017 to t = 31 May 2021) and the last 20% of the data belonged to the test set (i.e., one year of data, from t = 1 June 2021 to t = 31 May 2022). We further partitioned the test set into twelve non-overlapping monthly windows (from June 2021 to May 2022 included) and we labelled them with M i , i ∈ {1, 2, . . . , 12}.
Inspired by [62], an incremental monthly-based strategy was employed to evaluate each model. In the first evaluation step, we trained the model on the training set, we performed predictions, and we computed the test metrics (presented in Section 2.5) on M 1 . In the second evaluation step, we included M 1 's data in the training set and we retrained the model from scratch on this newly enlarged training set. We again performed predictions and we computed the test metrics on M 2 . We repeated the same process for the remaining ten partitions, each time increasing the training set and moving the evaluation window one step forward. Both ML and DL models have hyperparameters; therefore, we tuned them only in the first evaluation step by using 20% of the training data for validation (optimizing for MSE), and we kept them fixed for the remainder of the evaluation. Hyperparameter details and values spaces are reported in Appendix A Table A1. We considered a sliding window of 30 days of data as input to compute a one-step-ahead prediction. To avoid overfitting of the DL models during training, we applied early stopping and we performed the experiments three times (averaging the results) to account for the randomness in the initialisation of the models.
Evaluation Metrics
To assess the quality of a model's predictions, we computed the root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and R-squared score (R 2 ) in each evaluation step described in the previous section, as follows: In the above Equations (3)-(6), y t i is the true price of the crypto after the normalisation, y t i is the predicted value, y is the average of the predicted values, and n indicates their number. Note that the R-squared metric highlights the model's variance in relation to the total variance. Therefore, as opposed to the other error metrics, the higher the R-squared value, the better the model's performance.
Results
This section reports the results of the experiments and compares the regression models in terms of accuracy and computational time. We assess both their average and cryptospecific performances. Then, we examine the results of the ensembles and the contribution of each individual model to an ensemble's performance. Table 2 shows the average performance of each model computed across all cryptos. Models are ranked by RMSE in ascending order.
Individual Models
First, we observe that the models' ranking is consistent across all the accuracy metrics (with very few exceptions). The LSTM exhibits the best performance, with a consistent gap compared to the other models. For each metric, values are quite close because we compute them on the normalised predicted price, and not on the detrended data. The recurrent neural network models occupy the first three positions of the rank, followed by the KNN and the convolutional network approach. Interestingly, ARIMA performs better than TFT, RF, and SVR.
Regarding the time required to train and deploy the models, DL approaches are more expensive compared to machine learning and statistical methods, as expected. Overall, all the models provide a prediction in a reasonably short time, so they might be suited to operate in some online settings. In particular, for training and inference, HYBRID (LSTM-GRU Hybrid in Section 2.1) and TFT are the most expensive, respectively. In contrast, ML models are considerably faster to run. The KNN provides a good trade-off between accuracy and computational cost. Table 3 indicates the RMSE results across the different cryptos. The ranking of the top three models is consistent across all the cryptos. However, in the lower positions, some variability can be observed, e.g., SVR and TFT perform particularly well on BTC. Table 4 highlights the performances of the best ten ensembles in terms of RMSE. The ensembles do not outperform the LSTM network, and the latter is included in all the top-performing ensembles. It is interesting to see how the LSTM and GRU ensemble outperforms the HYBRID model, which is a deep non-sequential network that combines LSTM and GRU. To evaluate the contribution of an individual model, we compared the average accuracy of all the ensembles that include this model and those that do not (and the difference can be seen as the average RMSE contribution given by that individual model). The results in Table 5 confirm the individual model ranking in Table 2. Most notably, the contributions of the non recurrent models are negative, i.e., they worsen the ensemble accuracy on average.
Discussion
The results show that the models' performance ranking is consistent across different cryptos, and their average performance confirms the ranking. Recurrent DL approaches dominate the cryptocurrency price prediction task according to all accuracy metrics. In particular, the LSTM is the best-performing model with an average RMSE of 0.0222 and substantially outperforms other network architectures, such as TCN (convolutional) and TFT (transformer), which have a 4.9% and 5.8% higher error, respectively. The nature of the latter architectures can explain their poor performance. Regarding TCN, convolutional networks are good at interpreting repeated hierarchical patterns in the data (captured by the dilated convolutions), but these patterns are absent from the crypto price time series. Moreover, TCN generally performs better for fine-grained (dense) predictions (such as hourly predictions rather than daily or monthly predictions). This is because the oscillation between a wider time window has a different distribution and is harder to capture by dilated convolutions. Regarding TFT, its attention mechanism is known for capturing the relationship between covariates of the time series at hand. However, such covariates are ignored in our experiments (and we leave this for future work). TCN and TFT are also known to be data-hungry, i.e., they require substantial volumes of data to capture patterns successfully. Unfortunately, the amount of historical data available to train these models on forecasting daily prices is limited. The second best model is GRU, a recursive network simpler than LSTM, which achieves an RMSE of just 2.7% higher with a similar computational effort. To wrap up, results for DL models suggest that more expensive and complex architectures may be redundant for this type of time series task.
The KNN provides an excellent trade-off between the accuracy of the prediction and the computational effort required, with an error 4.8% higher than LSTM but with no training time required and a 25 times faster inference time. The other machine learning models (SVR and RF) are at the bottom of the ranking and, quite surprisingly, are outperformed by the baseline ARIMA. This is probably because they cannot capture meaningful patterns in the time series, which is noisy and presents outliers (SVR performs better because it is less prone to outliers). In contrast, due to its linearity assumptions, ARIMA's predictions are directional and more accurate for short-term analysis. In conclusion, ARIMA provides a good trade-off between good accuracy and reduced computational demand.
Ultimately, the last part of the experiment highlights that combining different regressors into an ensemble does not boost performance. This approach aims to compensate for a model's shortcomings by averaging it with others that are more accurate in particular cases. However, if a regressor provides more accurate predictions in the vast majority of cases, averaging it with considerably more inaccurate models negatively affects its performance. Indeed, the LSTM consistently outperforms all the ensembles due to a wide accuracy gap with the other models.
Conclusions
This paper compares deep learning (DL), machine learning (ML), and statistical models for forecasting the daily prices of cryptocurrencies. Our one-step-ahead evaluation framework is incremental and works on a monthly retraining schedule. We tested over 12 months of data. Results show that, in general, recurrent DL approaches are the best models for this task. In particular, the LSTM is the best-performing model, and its training is less expensive than the other DL models with the closest performance. The reasons why DL models such as TCN and TFT underperform might be, for example, that the convolutional approaches are better suited for dense predictions ("sparse" in our analysis) and TFT are good at leveraging covariates (ignored in our analysis), while both approaches suffer from a data scarcity problem. KNN and ARIMA provide a good trade-off between accuracy and computational expense. Finally, the deployment of ensemble approaches is detrimental, as their performance is inferior to the individual LSTM approach.
The availability of accurate predictions is essential to crypto traders, who often trade hourly and daily. Therefore, tailoring accurate predictors for trading strategies might help them increase their revenue. However, our predictors can only predict daily prices; in the future, we aim to build predictors that also provide hourly prices and investigate the integration of such predictors with some trading strategies (e.g., [3]).
Several factors can also affect the price fluctuations of cryptos, including regulations, social media trends, market sentiments, and other cryptos' volatility. For example, the work of [63] analyses how regulatory news and events affect returns in the cryptocurrency market using an event-based approach. According to this report, events that raise the likelihood of regulation adoption are linked to a negative return for cryptos. Another example is from [64], where the prices of other cryptos exhibit an interdependent relationship (Bitcoin is the parent coin for both Litecoin and Zcash). Therefore, in the future, we aim to integrate these kind of covariates in our models to improve prediction accuracy.
Another avenue of improving forecasting involves investigating the relationship between cryptos. Their prices exhibit an interdependent relationship, and the coins can be grouped into clusters of similar behaviour [65]. Using this framework, similar cryptos can be used to train a more accurate model specific to that pattern and offer rich and valuable insights into the dynamics between cryptos, while also improving the accuracy of predictions of crypto forecasting. Funding: This publication has emanated from research supported in part by Science Foundation Ireland under grant no. 18/CRT/6223. This publication has also emanated from research conducted with the financial support of Science Foundation Ireland under grant number 12/RC/2289-P2 which is co-funded by the European Regional Development Fund. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Conflicts of Interest:
The authors declare no conflicts of interest.
Appendix A. The Hyperparameter Values of the Predictive Models
The details regarding the values of hyperparameters of each model are shown in Table A1. | 6,809.6 | 2023-01-29T00:00:00.000 | [
"Computer Science"
] |
Glucosamine Ameliorates Symptoms of High-Fat Diet-Fed Mice by Reversing Imbalanced Gut Microbiota
Glucosamine (GlcN) is used as a supplement for arthritis and joint pain and has been proved to have effects on inflammation, cancer, and cardiovascular diseases. However, there are limited studies on the regulatory mechanism of GlcN against glucose and lipid metabolism disorder. In this study, we treated high-fat diet (HFD)-induced diabetic mice with GlcN (1 mg/ml, in drinking water) for five months. The results show that GlcN significantly reduced the fasting blood glucose of HFD-fed mice and improved glucose tolerance. The feces of intestinal contents in mice were analyzed using 16s rDNA sequencing. It was indicated that GlcN reversed the imbalanced gut microbiota in HFD-fed mice. Based on the PICRUSt assay, the signaling pathways of glucolipid metabolism and biosynthesis were changed in mice with HFD feeding. By quantitative real-time PCR (qPCR) and hematoxylin and eosin (H&E) staining, it was demonstrated that GlcN not only inhibited the inflammatory responses of colon and white adipose tissues, but also improved the intestinal barrier damage of HFD-fed mice. Finally, the correlation analysis suggests the most significantly changed intestinal bacteria were positively or negatively related to the occurrence of inflammation in the colon and fat tissues of HFD-fed mice. In summary, our studies provide a theoretical basis for the potential application of GlcN to glucolipid metabolism disorder through the regulation of gut microbiota.
INTRODUCTION
Diabetes mellitus is a chronic and systemic inflammatory disease characterized by hyperglycemia, which mainly includes three types: type 1 diabetes mellitus (T1DM) with insufficient insulin secretion, type 2 diabetes mellitus (T2DM) with insulin resistance, and gestational diabetes mellitus with raised blood glucose during pregnancy (Menini et al., 2020). The population of diabetic patients worldwide has reached 463 million as of 2019 and is expected to reach 700 million in 2045 (Belma et al., 2019). Among them, T2DM is more common in adults and accounts for up to 90% of patients (Zhang et al., 2020). It may be associated with an unhealthy diet, genetic inheritance, and environmental factors (Toi et al., 2020). In addition, a series of publications have demonstrated that gut microbiota is closely related to the occurrence and development of T2DM by direct (microorganisms themselves) or indirect (structural components or metabolites of microorganisms) interaction with the host Herrema and Niess, 2020). Therefore, it is of significance to investigate the regulatory effect of gut microbiota on the prevention and treatment of T2DM.
Glucosamine (GlcN) is a naturally occurring amino sugar in the human body, and its salts include glucosamine hydrochloride and glucosamine sulfate, which are widely used in dietary supplements against osteoarthritis and joint pain (Reginster et al., 2001;Veronese et al., 2020). Studies show that GlcN has favorable biological activities, such as anti-inflammation, anti-bacteria, anti-cancer, and cardioprotection. And most of these bio functions are achieved by the regulation of inflammatory responses, especially the inhibition of Nuclear Factor-κB (NF-κB) signaling, inflammatory cytokine production, and enzymatic expression (Dalirfardouei et al., 2016). New evidence indicates that GlcN can initiate a positive regulatory effect on glucose metabolism disorder. In an average of 8.1 years of follow-up of 404,508 subjects, the habitual use of GlcN was closely related to the incidence of lowrisk T2DM (Ma et al., 2020). In another clinical trial, the patients taking glucosamine sulfate orally for 3 years were accompanied by slightly decreased fasting blood glucose (Reginster et al., 2001). There were also reports in which the oral administration of glucosamine sulfate caused the abundance changes of gut microbiota (Shmagel et al., 2019). Based on the above, we hypothesize that GlcN may have a beneficial effect on diabetes mellitus through the regulation of gut microbiota.
In this study, we aim to determine the improvement of GlcN on glucose metabolism disorder in diabetic mice with high-fat diet (HFD) treatment. Also, by 16S rDNA sequencing, we investigated the role of gut microbiota in the amelioration of diabetic symptoms by GlcN.
Materials and Reagents
GlcN, i.e., D-(+)-Glucosamine hydrochloride was purchased from Sigma (St. Louis, MO, United States), and the structure was shown in Supplementary Figure S1 with the purity ≥99%. Hematoxylin and eosin staining kit was obtained from Beyotime Institute of Biotechnology (Jiangsu, China). Fluorescent secondary antibodies and 4′, 6-diamidino-2-phenylindole (DAPI) were purchased from Thermo Fisher Scientific (Waltham, MA, United States). The primers for Quantitative RT-PCR were synthesized by BGI Gene (Shenzhen, China). Other chemical reagents were from Beijing Chemical Company (Beijing, China).
Animal Treatment
Twenty C57BL/six male mice (4-6 weeks old, 20 ± 2 g) were from the Model Animal Research Center of Nanjing University (Jiangsu, China). All mice were kept at a constant temperature (23 ± 2°C) and humidity with a 12 h light-dark cycle. During the one-week adaptation, mice were given a normal chow diet and drinking water ad libitum. After that, the mice were randomly divided into four groups (n 5): 1) for normal chow diet (NCD) group, mice were fed with NCD (10 kcal% fat, D12450B); 2) for high-fat diet (HFD) group, mice were fed with HFD (45 kcal% fat; D12451); 3) for NCD + GlcN group, mice were fed with NCD and GlcN (1 mg/ml in drinking water, about 200 mg/kg/d); 4) for HFD + GlcN group, mice were fed with HFD and GlcN. Both NCD and HFD were purchased from KeAo Xieli Co., Ltd. (Beijing, China). To ensure the activity of GlcN, its aqueous solution was replaced every other day.
All the mice were raised in groups according to their different diets and drinking water for five months. During the GlcN treatment, the body weight, diet, and water drinking of mice were monitored.
Intraperitoneal Glucose Tolerance Test
All mice were fasted for 12 h and then intraperitoneally injected with glucose (1 g/kg Body weight, dissolved in physiological saline). The tail vein blood was collected for blood glucose test by using an Accu-chek glucometer (Roche Diagnostics, Basel, Switzerland) at the time points of 0, 15 min, 30 min, 1 h, and 2 h after glucose injection.
Organ Weight and Hematological Index
After the experiment, all mice were euthanized and dissected. Major tissues were collected and weighed, including heart, spleen, pancreas, brown adipose tissue (BAT), liver, white adipose tissue (WAT), and intestine. A part of WAT and colon tissues were fixed in 4% paraformaldehyde. The blood samples were collected for hematological index analysis (Nihon Kohden, Tokyo, Japan), and the fecal samples were collected for 16S rDNA sequencing. All samples were stored at −80°C for further experiment.
Hematoxylin and Eosin Staining
The colon and WAT samples were fixed in 4% paraformaldehyde for 24 h followed by the embedding and then sliced (50 μm thickness). The paraffin sections were sequentially subjected to xylene dewaxing, gradient ethanol hydration, and H&E staining. The images were captured using a Leica DFC310 FX digital camera connected to a Leica DMI4000B light microscope (Wetzlar, Germany).
RNA Extraction and Quantitative Real Time-Polymerase Chain Reaction
The colon and adipose tissues were homogenized using Trizon (CoWin Biosciences, Beijing, China) and homogenized. Then, chloroform, isopropanol, and ethanol were added in sequence for the extraction of total RNA. The concentrations and purity of extracted RNA were assayed with Q5000 ultra-micro nucleic acid protein analyzer (QUAWELL Technology Inc., United States). Reverse transcriptions were performed using 500 ng of highquality total RNA with HiFiScript cDNA Synthesis Kit (CoWin Biosciences, Beijing, China). Quantitative RT-PCR (qRT-PCR) was performed with UltraSYBR Mixture High ROX Kit (CoWin Biosciences, Beijing, China) on a 7500 Fast Real-Time PCR System (Applied Biosystems, Foster City, CA, United States). The primer sequences used in this study were listed in Supplementary Table S1, including chemoattractant protein 1 (MCP-1), Interleukin 1β (IL-1β), Interleukin six (IL-6), Peroxisome proliferator-activated receptor gamma (PPARγ), and Integrin subunit alpha X (CD11c). Amplification protocol for 40 cycles was as follows: 2 min at 95°C for initial activation, 15 s at 95°C for denaturation, 60 s at 60°C for annealing/extension. Finally, β-Actin was used as a reference gene for the calculation of relative target gene expression using the 2 −ΔΔCT method.
Detection of Gut Microbial Community
After mice were dissected, the fecal contents were collected, and the FastDNA ™ SPIN Kit (MP Biomedicals, CA, United States) was used for genome extraction. Subsequently, 1% agarose gel electrophoresis and spectrophotometry detection at 260 nm/ 280 nm were conducted to detect the purity of extracted genomic DNA. Further, the extracted genomic DNA was used as a template. The amplification of 16S rRNA V3-V4 gene region was carried out using the 16S V3 338F forward (5′-ACTCCTACG GGAGGCAGCAG-3′) and V4 806R reverse primers (5′-GGACTACHVGGGTWTCTAAT-3′) with Barcode. To ensure the accuracy and reliability of subsequent data analysis, low cycle number amplification (25 cycles) was used with the same cycles for each sample. Three replicates were performed, and the PCR products were detected by 2% agarose gel electrophoresis. The AxyPrepDNA gel recovery kit (Axygen Biosciences, CA, United States) was used to cut the gel for the recovery of PCR products.
The purified amplicon was quantified with QuantiFluor ™ -ST (Promega, United States) and adjusted to the same concentration for each sample. According to standard procedures, the Miseq library was constructed and the Miseq PE 300 second-generation high-throughput sequencing platform was used for paired-end (2 × 300) sequencing (Allwegene Technology Inc., Beijing, China). The obtained paired-end sequence data were used for filtering processing and data accusation. Subsequent data analysis and comparison were performed by using Quantitative Insights Into microbial Ecology (QIIME) software package, R language and Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) algorithm and platform.
Statistical Analysis
Data were expressed as means ± SEM (Standard Error of Mean) and analyzed with Graphpad Prism 6 software (GraphPad FIGURE 1 | Effect of GlcN on pathophysiologic symptoms of diabetic mice induced by HFD-fed. After the mice were treated with NCD, HFD, and/or GlcN (1 mg/ml, in drinking water) for 5 months, the body weight (A) and fasting blood glucose (B) of mice were monitored. For the IGTT experiment, the blood glucose at different time points among experimental groups was tested (C), and the area under the glucose tolerance curve was calculated (D). Also, the visceral indices of major organs were detected, including heart, spleen, pancreas, BAT, liver, WAT, and brain (E,F). Data were shown as means ± SEM (n 5). *p < 0.05, **p < 0.01 indicates the difference between HFD group and NCD group; # p < 0.05, ## p < 0.01 indicates the difference between HFD + GlcN group and HFD group.
Frontiers in Pharmacology | www.frontiersin.org June 2021 | Volume 12 | Article 694107 Software, Inc., United States). The difference comparison and statistical analysis between groups were performed by a two-way ANOVA with Tukey-Kramer test. The LEfSe analysis on gut microbiota abundance and composition was based on Kruskal-Wallis and Wilcoxon tests, and the linear discriminant analysis (LDA) score threshold was 4.0-5.0. The comparison in Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway was drawn with STAMP software. Statistical significance was considered at a p-value < 0.05.
Effect of Glucosamine on Physiological Indices of High-Fat Diet-Fed Mice
After GlcN treatment for five months, the body weight and fasting blood glucose of mice in the HFD group were significantly higher than those in the NCD group (p < 0.05) ( Figures 1A,B), proving that the diabetic mouse model was successfully constructed. Noticeably, GlcN treatment significantly reduced the blood glucose of HFD-fed mice (p < 0.05) ( Figure 1B), indicating the improvement of GlcN on hyperglycemia in diabetic mice. As compared with the mice of the NCD group, the intolerance to glucose in the HFD group was proved by IGTT experiment, which was abrogated after GlcN treatment ( Figure 1C). This was further confirmed by integrating the area under the curves of glucose tolerance among groups, which was significantly decreased by GlcN treatment (p < 0.05, vs. HFD group) ( Figure 1D). The mouse organ weight usually reflects the changes in their physiological conditions. Therefore, the heart, spleen, pancreas, BAT, liver, WAT, and brain of mice among experimental groups were weighed, and the visceral indices were calculated. It was found that the decrease in pancreatic weight and the increase in white adipose tissue weight in HFD-fed mice were statistically reversed by GlcN treatment (p < 0.05) ( Figures 1E,F). It was indicated that GlcN improved the metabolic disorder of diabetic mice and especially suppressed the body's fat production and islet damage.
Sequencing Data Validation and Alpha Diversity Analysis of Gut Microbiota
The V3-V4 variable region sequences of gut microbiota in mice are classified according to their similarities, one of which is the Operational Taxonomic Units (OTU). By random sampling of these sequences, the Rarefaction Curve was constructed based on the number of reads sampled and the corresponding OTUs they represented ( Figure 2A). The Shannon-Wiener Curve was presented by the microbial diversity index of each sample at different sequencing depths ( Figure 2B). Both the Rarefaction Curve and Shannon-Wiener Curve tended to be flat, indicating FIGURE 2 | Sequencing data validation and diversity analysis of gut microbiota among experimental groups. After the treatment with NCD, HFD, and/or GlcN (1 mg/ml, in drinking water) for 5 months, all mice were sacrificed and the cecal contents were collected for 16S rDNA sequencing. Rarefaction curve (A) and Shannon-Wiener curve (B) were drawn by the OTU number sampled. Indices of Chao1 (C), observed_species (D), PD_whole_tree (E), and Shannon index (F) were used for the α diversity analysis. Data were shown as means ± SEM (n 5). **p < 0.01 indicates the difference between HFD group and NCD group; ## p < 0.01 indicates the difference between HFD + GlcN group and HFD group.
Frontiers in Pharmacology | www.frontiersin.org June 2021 | Volume 12 | Article 694107 that the amount of sequencing data was large enough to reflect the vast majority of microbial information in these samples. The analysis of α diversity in a single sample reflects the abundance and diversity of the microbial community. Specifically, the indices of Chao1 ( Figure 2C), observed_species ( Figure 2D), and PD_whole_tree ( Figure 2E) separately represent the species richness, number of observed OTUs, and pedigree diversity. The results show that there were no significant differences between the four experimental groups. As shown in Figure 2F, the Shannon index was significantly increased in the HFD group (p < 0.01, vs. NCD group), which was reduced by GlcN treatment (p < 0.01, vs. NCD group). It was suggested that GlcN treatment remarkably reversed the increase in intestinal bacteria diversity in HFD-fed mice.
Analysis on Gut Microbiota Composition Among Experimental Groups
Principal Component Analysis (PCA), based on the evolutionary distance between species, is usually used to reflect the differences of microbial community composition among biological samples. The more similar the sample composition, the closer the distance that was reflected in PCA. As shown in Figure 3A, the OTUs FIGURE 3 | Effect of GlcN treatment on gut microbiota composition of HFD-fed mice. After the treatment with NCD, HFD, and/or GlcN (1 mg/ml, in drinking water) for 5 months, all mice were sacrificed and the cecal contents were collected for 16S rDNA sequencing. The differences in gut microbial diversity and abundance among four experimental groups were shown by Principal Component Analysis (PCA) (A), histogram of phylum level (B), and heatmap of genus level (C).
Frontiers in Pharmacology | www.frontiersin.org June 2021 | Volume 12 | Article 694107 5 between four experimental groups were distinguished, indicating the significant changes in compositions of gut microbiota.
In addition, as shown in Figure 3B, the abundances of major phyla in the gut of HFD-fed mice were significantly changed. Among them, Firmicutes and Actinobacteria were statistically reduced (p < 0.05, vs. NCD group), whereas Bacteroidetes, Proteobacteria, and Deferribacteres were markedly increased (p < 0.05, vs. the NCD group). In contrast, GlcN treatment significantly reversed the changes of Actinobacteria and Proteobacteria in the HFD-fed mice (p < 0.05). Also, the abundances of several undefined microbes were found to be affected by HFD feeding (p < 0.05, vs. NCD group). More specifically, as shown in Figure 3C, there are significant changes in the composition and abundance of intestinal microbes between different experimental groups at the family level.
Bacteria Contributed to Gut Microbiota Variety Among Experimental Groups
Through linear discriminant analysis effect size (LEfSe) analysis, we identified the representative gut microbes in each Data were shown as means ± SEM (n 5). *p < 0.05, **p < 0.01 indicates the difference between HFD group and NCD group; # p < 0.05, ## p < 0.01 indicates the difference between HFD + GlcN group and HFD group.
Frontiers in Pharmacology | www.frontiersin.org June 2021 | Volume 12 | Article 694107 6 experimental group ( Figure 5). Based on the LDA scores ( Figure 5A) and cladogram assay ( Figure 5B), the dominant gut microbes in the HFD group include Lachnospiraceae, Desulfovibrionaceae, Porphyromonadaceae, Ruminococcaceae, and Peptostreptococcaceae at the family level, and Desulfovibrio, Romboutsia, and Ruminiclostridium_9 at the genus level. In contrast, the most significant gut microbes in the HFD + GlcN group were f_ Coriobacteriaceae and the corresponding Coriobacteriacea_UCG_002 and Coprococcus_1 at the genus level, showing the effect of GlcN treatment on the two types of microbes in HFD-fed mice.
Effect of Glucosamine on Metabolic Function of Gut Microbiota in High-Fat Diet-Fed Mice
The change in gut microbiota composition often means the alteration of their bio functions. Thus, we explored metabolic FIGURE 5 | Identification of most significantly contributed intestinal microbes among experimental groups using Linear discriminant analysis (LDA) and Linear discriminant analysis Effect Size (LEfSe). After the treatment with NCD, HFD, and/or GlcN (1 mg/ml, in drinking water) for 5 months, all mice were sacrificed and the cecal contents were collected for 16S rDNA sequencing. LEfSe analysis was performed based on the intestinal microbial abundances and compositions, and the results were displayed in the form of LDA value distribution histogram (A) and cladogram (B).
Frontiers in Pharmacology | www.frontiersin.org June 2021 | Volume 12 | Article 694107 7 pathways of the intestinal microbe community among groups by PICRUSt. As shown in Figure 6A, 22 KEGG pathways of HFD-fed mice were found to be changed as compared to those of mice in the NCD group. Among these pathways, glucolipid metabolism was the most involved ones, including amino sugar and nucleic acid sugar metabolism, energy metabolism, glycolysis/gluconeogenesis, fructose and mannose metabolism, pyruvate metabolism, glutathione metabolism, fatty acid metabolism, lipid metabolism, glycan synthesis and metabolism, carbohydrate digestion and absorption. Other changed pathways were mainly associated with biosynthesis and signaling transduction. After GlcN treatment, nine KEGG were improved compared with the HFD group such as amino sugar and nucleic acid sugar FIGURE 6 | Metabolic function changes of gut microbiota among experimental groups. After the treatment with NCD, HFD, and/or GlcN (1 mg/ml, in drinking water) for 5 months, all mice were sacrificed and the cecal contents were collected for 16S rDNA sequencing. By using PICRUSt analysis, the metabolic alteration of KEGG pathways in the HFD group (A) and the reversal effect of GlcN treatment (B) were analyzed.
Suppressive Effect of Glucosamine on Damage to Colon Tissues of High-Fat Diet-Fed Mice
Next, we investigated the protection of GlcN against HFDinduced damage to colon tissues in mice. As indicated in Figure 7A by H&E staining, the mucin of intestinal barrier of HFD-fed mice was significantly reduced, which was improved after GlcN treatment, indicating the protective effect of GlcN on the integrity of colon tissues in HFD-fed mice. Further experiment confirmed that GlcN treatment inhibited the mRNA expressions of MCP-1 (p < 0.01, vs. HFD group), IL-6 and, IL-1β (p < 0.01, vs. HFD group) to a large extent ( Figures 7B-D).
Inhibitory Effect of Glucosamine on Inflammation in White Adipose Tissues and Blood of High Fat Diet-Fed Mice
WAT is involved in a variety of physiological or pathological processes such as insulin sensitivity, inflammation, and glucose and lipid metabolism. By H&E staining, it was observed that HFD-fed mice had severe adipose hyperplasia ( Figure 8A), while GlcN treatment significantly reduced the adipocyte size of WATs (p < 0.01, vs. HFD group) ( Figure 8B). To further confirm the beneficial effect of GlcN on WATs, the mRNA levels of several pro-inflammatory cytokines were measured by qRT-PCR. The result shows that the expressions of PPARγ ( Figure 8C), MCP-1 ( Figure 8D), and CD11c ( Figure 8E) in the HFD grwhy usoup were significantly increased (p < 0.05, vs. NCD group), which were inhibited by GlcN (p < 0.05, vs. NCD group). Also, GlcN remarkably reduced the number of white blood cells ( Figure 8F) and lymphocytes ( Figure 8G) in the blood of HFD-fed mice. Besides, the treatment of GlcN and/or HFD had no effect on the number of red blood cells ( Figure 8H).
Correlation Analysis Between Gut Microbiota and Physiochemical Indices in High Fat Diet-Fed Mice
The correlation of intestinal microbes changed significantly at the genus level and the representative physiological indicators of mice were analyzed by Spearman correlation and displayed in the form of a heat map (Figure 9). The result confirmed that some beneficial bacteria including Bifidobacterium, Akkermansia, Lactobacillus, and Allobaculum were negatively correlated with the level of MCP-1 in WAT, the adipose size, the lymphocyte FIGURE 7 | GlcN suppressed inflammatory responses in colon tissues of HFD-fed mice. After the treatment with NCD, HFD, and/or GlcN (1 mg/ml, in drinking water) for 5 months, all mice were sacrificed. The colon tissues were stained by H&E and photographed under a microscope (magnification of ×100, ×200 and ×400) (A). The expressions of MCP-1 (B), IL-6 (C), and IL-1β (D) in colon at mRNA level were detected by qRT-PCR. Data were shown as means ± SEM (n 5). *p < 0.05 indicates the difference between HFD group and NCD group; ## p < 0.01 indicates the difference between HFD + GlcN group and HFD group.
Frontiers in Pharmacology | www.frontiersin.org June 2021 | Volume 12 | Article 694107 9 number, and the liver triglyceride (Liver-TG) content in blood. Other bacteria with reduced abundances after GlcN treatment, including Oscillibacter, Roseburia, Desulfovibrio, Intestinimonas, and Blautia, were positively correlated with the levels of MCP-1 and CD11c in WAT, the adipose size, the numbers of white blood cells and lymphocytes, and the Liver-TG content in blood, implying that the changes of gut microbes affected the inflammatory responses of host.
DISCUSSION
In this study, we documented that in the diabetic model with dysregulated glucolipid metabolism established by HFD feeding, GlcN supplementation (1 mg/ml in drinking water) could reverse the imbalance of intestinal flora, intestinal barrier damage, and inflammatory responses of colon tissues and blood, accompanied by the improvement of physiological indices such as blood sugar and glucose tolerance in HFD-fed mice ( Figure 10).
Obesity, as a chronic relapsed disease process, is an important driving force for diabetes and many other diseases (Johansson and Hansson, 2016;Menini et al., 2020;Murray et al., 2020). Among them, diet is the primary contributor, especially foods with high energy density such as high-fat and high-sugar diets (Bray et al., 2017). In this study, we adopted a high-fat diet of 45 kcal% fat to construct diabetic mouse model with glucose and lipid metabolism disorder. This method has been generally applied to metabolism-related pharmacological studies (Karim et al., 2018;Li et al., 2019). After the HFD-fed mice were intervened with GlcN in drinking water, we found that GlcN significantly reduced the fasting blood glucose and improved glucose tolerance of HFD-fed mice. In addition, GlcN decreased the weight of WATs of HFD-fed mice, suggesting the inhibitory effect on adipogenesis (Figure 1). This result was partly FIGURE 8 | GlcN inhibited inflammatory responses in white adipose tissues (WATs) and blood of HFD-fed mice. After the treatment with NCD, HFD, and/or GlcN (1 mg/ml, in drinking water) for 5 months, all mice were sacrificed. WATs were morphologically observed by H&E staining (A) and the size of adipocytes was statistically analyzed (B). The mRNA levels of pro-inflammatory and adipose differentiation-related cytokines were determined by RT-PCR, including PPARγ (C), MCP-1 (D), and CD11c (E). The number of white blood cells (F), lymphocytes (G), and red blood cells (H) in the blood of each experimental group were also measured. Data were shown as means ± SEM (n 5). *p < 0.05, **p < 0.01 indicates the difference between HFD group and NCD group; # p < 0.05, ## p < 0.01 indicates the difference between HFD + GlcN group and HFD group.
Frontiers in Pharmacology | www.frontiersin.org June 2021 | Volume 12 | Article 694107 consistent with previous studies, in which GlcN improved insulin resistance and glucose intolerance caused by HFD, but increased the body weight and fat weight of normal mice (Hwang et al., 2015). This divergence may be due to the difference in physiological conditions of mice and GlcN dosage used.
In recent years, a line of evidences has demonstrated that diabetes is closely related to the imbalance of intestinal microbes (Arora et al., 2021). As early as 2007, it was reported that sterile mice were less likely to be obese than ordinary mice even if they were fed with HFD (Backhed et al., 2007). In contrast, germ-free mice receiving gut microbes from obese mice exhibited an obese phenotype, suggesting the involvement of intestinal floral in the occurrence of obesity (Clarke et al., 2014). Specifically, the ratio of Firmicutes to Bacteroides in mice fed with HFD was reduced by 50-60% (Yan et al., 2017). This result was in accordance with the phylum changes in our 16s sequencing data ( Figure 3B). At the genus level, we found that GlcN could evidently restore the altered abundances of Bifidobacterium, Akkermansia, Lactobacillus, Allobaculum, Roseburia, Desulfovibrio, Intestinimonas, Blautia, and Oscillibacter in the gut of HFDfed mice (Figure 4). Among them, Bifidobacterium and Lactobacillus were recognized as physiologically beneficial bacteria, which displayed positive effects such as regulating intestinal flora, enhancing immunity, improving intestinal function, and alleviating insulin resistance in diabetic mice (Hsieh et al., 2018). As a new generation of probiotics, the abundance of Akkermansia was reduced in obesity, diabetes mellitus and other diseases, followed by the destroyed intestinal barrier, increased plasma endotoxin level, and chronic inflammation (Everard et al., 2013). And the supplementation with Akkermansia could significantly prevent mice from obesity caused by HFD and reduced blood sugar in diabetic mice (Plovier et al., 2017;Depommier et al., 2019). In addition, Allobaculum was found to be the main reason for the suppression of glucose digestion in host, and its abundance was also reduced in HFD-fed mice (Everard et al., 2014;Herrmann et al., 2017). On the contrary, our results show that GlcN inhibited the growth of harmful bacteria in HFD-fed mice, FIGURE 9 | Correlation between gut microbiota and physiochemical indices among experimental groups. After the treatment with NCD, HFD, and/or GlcN (1 mg/ ml, in drinking water) for 5 months, all mice were sacrificed and the cecal contents were collected for 16S rDNA sequencing. The correlation between changed gut microbiota at the genus level and physiochemical indices was displayed in a heat map. The orange-white-blue grid corresponded to the R-value of 0.5-0-(−0.5). Orange color, positive correlation; blue color, negative correlation. Data were shown as means ± SEM (n 5). *p < 0.05, **p < 0.01. including the pro-inflammatory bacteria Desulfovibrio and the alcoholic fatty liver-related bacteria Blautia (Lennon et al., 2014;Zhang-Sun et al., 2015;Shen et al., 2017). The results meant that GlcN could regulate the intestinal microbe imbalance in HFD-fed mice. As far as we know, there is no report about the regulatory effect of GlcN on intestinal microbes. Of course, rodents and humans have different diets, and there are also differences in the microbiota in the intestines, which is also our limitation. In subsequent studies, it is necessary to recruit volunteers to conduct human experiments to verify our results. By using PICRUSt analysis to predict the functional changes of gut microbiota due to their altered compositions and abundances, we observed that GlcN mainly interfered with the glucolipid metabolism and biosynthesis pathways of intestinal floral in HFD-fed mice ( Figures 6A,B). Among these pathways, glycolysis/gluconeogenesis, lipid biosynthesis protein, and energy metabolism are closely associated with the development of diabetes mellitus (Kosanam et al., 2014;Batista et al., 2021;Wang et al., 2021). As an effective arthritis inhibitor, GlcN has been proved to suppress the activation of nucleotide-binding oligomerization domain-like receptor containing pyrin domain 3 (NLRP3) inflammasome in mouse and human macrophages. After oral administration, it could down-regulate the concentrations of pro-inflammatory cytokines (IL-1β, IL-6, MCP-1, and TNF-α) and thus displayed antiinflammatory activity (Chiu et al., 2019). Indeed, our study also documented that GlcN statistically abrogated the inflammatory attack in colon and fat tissues of HFD-fed mice (Figure 7; Figure 8). Based on previous reports, the damage to the intestinal barrier will not only cause the entrance of harmful bacteria into the blood, but also increase the endotoxin level in the circulatory system. These pathological stimuli lead to organ inflammation, in vivo oxidative stress, and fat accumulation in the development of diabetes mellitus. Since white adipose tissue, muscle, and liver are major tissues responsible for glucose and lipid metabolism, their damages by pathological intestinal bacteria or bacterial products will directly induce the dysfunction of glucose metabolism in the body. Considering that the imbalance of intestinal microbes will result in the destruction of the intestinal barrier and subsequent inflammatory responses leading to the disorder of glucose and lipid metabolism (Cani et al., 2007;Cani et al., 2008;Rouland et al., 2021), we speculate that GlcN may inhibit the inflammatory responses in HFD-fed mice by reversing the imbalance of gut microbiota. This was confirmed by the correlation analysis, in which the most significantly changed intestinal bacteria were positively or negatively related to the occurrence of inflammation in the colon and fat tissues of HFDfed mice (Figure 9).
In conclusion, our studies proved that GlcN could regulate the composition and function of gut microbiota in HFD-fed mice, suppressed the inflammatory responses of colon and fat tissues, and thus improved physiological indices like glucose tolerance. Our results provide a theoretical basis for the potential application of GlcN to glucolipid metabolism disorder through the regulation of gut microbiota.
DATA AVAILABILITY STATEMENT
The data presented in the study are deposited in the (online) repository, accession number (NCBI PRJNA731000).
ETHICS STATEMENT
The animal study was reviewed and approved by the Animal Ethics Experiment Committee of Institute of Process Engineering, Chinese Academy of Sciences (Approval number: IPEAECA2018051).
AUTHOR CONTRIBUTIONS
YD and HL designed the study. XY, JZ, LR, SJ, and CF were responsible for the acquisition of data. HL, XY, and JZ were the major contributors in drafting and revising the manuscript. All authors read and approved the final manuscript. | 7,103.6 | 2021-06-03T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Transcriptomics of Environmental Enrichment Reveals a Role for Retinoic Acid Signaling in Addiction
There exists much variability in susceptibility/resilience to addiction in humans. The environmental enrichment paradigm is a rat model of resilience to addiction-like behavior, and understanding the molecular mechanisms underlying this protective phenotype may lead to novel targets for pharmacotherapeutics to treat cocaine addiction. We investigated the differential regulation of transcript levels using RNA sequencing of the rat nucleus accumbens after environmental enrichment/isolation and cocaine/saline self-administration. Ingenuity Pathways Analysis and Gene Set Enrichment Analysis of 14,309 transcripts demonstrated that many biofunctions and pathways were differentially regulated. New functional pathways were also identified for cocaine modulation (e.g., Rho GTPase signaling) and environmental enrichment (e.g., signaling of EIF2, mTOR, ephrin). However, one novel pathway stood out above the others, the retinoic acid (RA) signaling pathway. The RA signaling pathway was identified as one likely mediator of the protective enrichment addiction phenotype, an interesting result given that nine RA signaling-related genes are expressed selectively and at high levels in the nucleus accumbens shell (NAcSh). Subsequent knockdown of Cyp26b1 (an RA degradation enzyme) in the NAcSh of rats confirmed this role by increasing cocaine self-administration as well as cocaine seeking. These results provide a comprehensive account of enrichment effects on the transcriptome and identify RA signaling as a contributing factor for cocaine addiction.
INTRODUCTION
Some people experimenting with cocaine become addicted at first exposure, while others are resistant, even after many exposures. These important individual differences in vulnerability to addiction are a function of the interaction between genes and the environment (McGue et al., 1996). Genetic factors that play an essential role in individual differences in susceptibility to drugs of abuse are well studied; however, environmental influences on gene expression is an area in need of further study (Thiriet et al., 2008).
Environmental enrichment is a non-drug, non-surgical, non-genetic manipulation producing a protective addiction phenotype in rodent models (Bardo et al., 2001;Green et al., 2002Green et al., , 2010Chauvet et al., 2009;Thiel et al., 2009Thiel et al., , 2011Solinas et al., 2010;Alvers et al., 2012;Chauvet et al., 2012;Nader et al., 2014). In the enriched condition (EC), animals are group-housed with access to children's plastic toys which are changed and rearranged daily, while those in the isolated condition (IC) are single-housed without disturbance. In drug self-administration studies, EC rats self-administer less cocaine than IC rats in the acquisition, maintenance, extinction, and reinstatement phases of cocaine self-administration (Green et al., 2010). Understanding the mechanisms underlying the protective phenotype of environmental enrichment may help uncover novel pharmacotherapeutic targets for prevention and treatment of addiction.
The nucleus accumbens (NAc) is an essential brain region for reward (among several regions) and altering the expression of genes in the NAc shell affects cocaine self-administration in rats (Green et al., 2010;Larson et al., 2011;Zhang et al., 2014). First we utilize quantitative RNA sequencing to analyze expression of 14,309 transcripts in the NAc of EC and IC rats selfadministering cocaine or saline. Next, a transcriptomic analysis of topographical gene expression was performed to identify genes expressed selectively in the mouse NAc shell, using tools and data from the Allen Brain Atlas 1 . The convergence of RNA-seq and topographical gene expression analyses pointed clearly to retinoic acid (RA) signaling as the most promising pathway to target.
As the active metabolite of vitamin A, RA acts as an essential molecule in multiple biological processes, such as embryonic development (Rhinn and Dolle, 2012), immune response (Mielke et al., 2013), cell proliferation and differentiation (Chen et al., 2014), and maintenance of the nervous system (Maden, 2007). There is increasing evidence that RA plays an important role in the adult brain (Maden, 2007). Although no study has yet reported on the role of RA in addiction-related behavior per se, there is some evidence that double null mutants of the RA receptor β (RARβ) with the retinoid X receptor RXRβ or RXRγ decrease dopamine D2 receptor expression selectively in the shell of the NAc (Krezel et al., 1998). These rats displayed decreased cocainestimulated locomotor activity; however, they also had severe decrements in the rotarod task and spontaneous locomotor activity.
Retinoic acid is highly concentrated in the brain, including the striatum (Kane and Napoli, 2010). RA is synthesized in the cytoplasm in two steps: first, retinol is oxidized to retinaldehyde via retinol dehydrogenase (Radh a.k.a. Adh). Then, retinaldehyde is irreversibly converted to RA through retinaldehyde dehydrogenase (Raldh a.k.a. Aldh1a1-3). Excess RA is degraded by Cyp26b1 into polar metabolites (Maden, 2007;Chen et al., 2014). We show here that knockdown of Cyp26b1 via a novel adeno-associated viral vector increases cocaine selfadministration. 1 http://www.brain-map.org
Animals
For the RNA-seq study, male Sprague-Dawley rats (Harlan, Houston, TX, USA) arrived at 21 days of age and were enriched or isolated for 30 days before behavioral testing. EC rats (n = 20) were group-housed in a large metal cage (70 cm × cm 70 × 70 cm) with 14 hard children's plastic toys changed and rearranged daily. This density (245 cm 2 /rat) was higher than a majority of environmental enrichment studies, but was still more than double that required by NIH. EC rats were split into two cages after 50 days of age. IC rats (n = 20) were single housed in standard polycarbonate cages. These conditions produce a resistant (i.e., EC) and susceptible (i.e., IC) behavioral addiction phenotype (Green et al., 2002(Green et al., , 2003(Green et al., , 2010. While it is true that short term isolation is a stressor, enriched rats (even at lower density) show greater signs of chronic stress, even though these rats do not show outward signs of stress (Crofton et al., 2015). Rats remained in these homecage conditions throughout all behavioral tests, except during testing. For vector injection and behavioral tests, male Sprague-Dawley rats were obtained at 225-249 g. Rats were pair-housed and maintained in a controlled environment (temperature, 22 • C; relative humidity, 50%; and 12 h light/dark cycle, lights on 0600 h) in an Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC) approved colony and procedures were approved by the UTMB Institutional Animal Care and Use Committee and conform to the NIH Guide for the Care and Use of Laboratory Animals.
Intravenous Cocaine Self-Administration with Environmental Enrichment
Rats were anesthetized with ketamine (100 mg/kg, IP) and xylazine (10 mg/kg, IP), and implanted with an indwelling Silastic catheter (0.2 mm I.D.; Fisher Scientific, Pittsburgh, PA, USA) into the jugular vein. The catheter passed under the skin to exit on the rat's back. The catheters were infused with 0.1 ml of a sterile saline solution containing heparin (30.0 U/ml), ticarcillin (250,000 U/ml) and streptokinase (8000 IU/ml) daily, to prevent infection and maintain catheter patency throughout the duration of experiments.
One week after catheter surgery, rats were allowed to selfadminister 0.5 mg/kg/infusion cocaine (National Institute on Drug Abuse, Bethesda, MD, USA) or saline under a fixed ratio 1 (FR1) schedule for 2 h/day for 14 days. The session terminated when the rat received 30 infusions to eliminate cocaine intake differences between EC and IC rats. Tissue was harvested 3 h after the beginning of the last session ( Figure 1A).
Quantitative RNA Sequencing
Rat brains were harvested 3 h after the beginning of the last self-administration session and the left side of the NAc was dissected on an ice-cold platform for mRNA analysis. The right side was used for protein quantification (Lichti et al., 2014). The RNA was extracted and purified with the RNeasy mini kit (Qiagen, Valencia, CA, USA). cDNA libraries were created by reverse transcribing the RNA and creating the second strand. Blunt ends were phosphorylated and "a-tailed" so that adapters could be ligated to both ends. Adapters were individually "bar coded" and thus samples were not pooled despite having 4 samples per flow cell, yielding n = 7-8 for each condition (total N = 30). RNA was sequenced with a HiSeq 1000 system from Illumina. cDNA was amplified using "bridge" amplification. Base calls were made using fluorescently labeled nucleotides. More than 100 million reads with 50 bp (paired-end reads) were mapped for each rat and quality was checked with FastQC (v0.9.1) (Andrew, 2010). Reads were mapped to the rat reference genome (RN4) using Tophat2 (v2.0.4) (Kim et al., 2013) and Bowtie2 (v2.0.0.6) (Langmead and Salzberg, 2012) software packages. The R package EdgeR (v.3.0.8) (Anders and Huber, 2010;Robinson et al., 2010) was then used for analysis using the log-transformed "trimmed mean for M-values" (TMM) method for normalization and tag-wise dispersion using "count" data. A likelihood ratio F-test was used for generating p-values to compare EC vs. IC rats and cocaine vs. saline. Cross-validation of RNA-seq results was achieved by looking at correlation of expression between qPCR and RNA-seq for Fabp5 (Forward: 5 CTTGCACCTTGGGAGAGAAG 3 , Reverse: 5 CATCTTCCCGTCCTTCAGTT 3 ) and Hspa5 (Forward: 5 AACCAAGGATGCTGGCACTA 3 ; Reverse: 5 ATGACCCGC TGATCAAAGTC 3 ), with the qPCR being normalized to Reep5 (Forward: 5 GGTTCCTGCACGAGAAGAACT 3 ; Reverse: 5 GAGAGAGGCTCCATAACCGAA 3 ) (Supplementary Figure S6). The Fabp5 was chosen because it is in the RA pathway and selectively expressed in the NAc shell, both aspects central to this paper. Hspa5 (Bip) was chosen because it has extremely high expression levels, and because we have studied this transcript with cocaine in the past (Pavlovsky et al., 2013). Using qPCR to cross-validate the RNA-seq data is somewhat problematic. Our conclusion is that using a less accurate and less precise method to validate a more accurate and precise method is difficult at best. The problem encountered is that qPCR needs to be normalized to a control or "housekeeping" transcript. The variance from the transcript of interest gets compounded with the variance inherent in the housekeeping gene, increasing variance. Multiple transcripts were assessed for normalization and the end results were quite different depending solely upon which normalization gene was chosen. Many traditional "housekeeping" transcripts were regulated (e.g., gapdh) or trended toward regulation (e.g., β-actin) by enrichment and/or cocaine. In the end, Reep5 was chosen because it was a housekeeping gene not regulated by cocaine or enrichment. However, the variability of the Reep5 expression among samples washed out main effects.
Ingenuity Pathway Analysis (IPA)
In order to study the biological functions and pathways regulated by cocaine and environmental enrichment, transcripts significantly regulated (p < 0.05) were analyzed with IPA. Canonical Pathways, Upstream Regulators, Diseases and Biological Functions, and Networks were used to identify significantly regulated transcript sets. Because regulation of any given gene could be a statistical anomaly (i.e., false positive), bioinformatic analyses have been developed under the assumption that regulation important for function will occur in a coordinated fashion at multiple targets within a given pathway. Thus, the IPA analysis assesses over-representation of multiple targets within known pathways. For this analysis, the list of significantly regulated genes is further analyzed for significantly regulated pathways, thus minimizing the effect of individual false positives. Preliminary analysis of ribosomal proteins, tyrosine phosphatases, de-ubiquitinating enzymes and proteasomal proteins, all of which show highly coordinated regulation (see Supplementary Figures S4A,B) demonstrate that the p < 0.05 cutoff does not produce an overabundance of false positives.
Gene Set Enrichment Analyses (GSEA)
To complement the IPA analysis and avoid the problems of Type I vs. II error, normalized expression intensity (normalized by EdgeR using TMM) of all identified transcripts were analyzed by GSEA. The normalized enrichment score (NES) indicates the degree to which this gene set is overrepresented at the top or bottom of the ranked list of genes in the expression dataset (Subramanian et al., 2005). In IPA analyses, a p-value cutoff is required to decide whether transcription is significantly regulated or not. The problem of setting a cutoff is that high stringency introduces false negative results (i.e., high Type II error), whereas low stringent cutoffs introduce false positive results (i.e., high Type I error). To complement the IPA analysis and avoid the problem of Type I vs. II error, normalized expression intensity of all identified transcripts were analyzed by GSEA. In the GSEA analysis, all transcripts are ranked by a signal2noise metric (the difference of means scaled by the standard deviation), and the significance of a given set of genes is determined using a runningsum statistic to determine rank-order over-representation. Thus, this analysis produces a statistic at the gene set level without the need for a p-value cutoff. Gene sets were from the Broad Institute's set (v.4.0) of curated gene sets [C2.all], gene ontology gene sets [C5.all] and transcription factor target gene sets [C3.tft]. In GSEA, the Enrichment Score (ES) for the gene set is the degree to which this gene set is overrepresented (i.e., "enriched") at the top or bottom of the ranked list of genes in the expression dataset. NES indicates the enrichment score for the gene set after it has been normalized to adjust for size of the gene set.
Quantitative Proteomics
A secondary analysis of our previously published liquid chromatography tandem mass spectrometry (LC-MS/MS) protein data from these same rats (Lichti et al., 2014) was used to corroborate RNA results where appropriate.
Adeno-Associated Virus knockdown of Cyp26b1
In order to knock down CYP26b1 expression, five 24-nucleotide sequences were identified within the CYP26b1 mRNA sequence (Ensembl transcript ID: ENSRNOT00000020505) using the criteria previously described (Hommel et al., 2003;Benzon et al., 2014) (Supplementary Table S1). The oligonucleotide sequences were synthesized and the annealed hairpin oligonucleotides were cloned into pAAV-shRNA plasmids (pAAV-Cyp26b1shRNA). Hairpin expression from these plasmids was driven by the mouse U6 promoter using a pol-III mechanism. In addition to the hairpin, enhanced green fluorescent protein (eGFP) was expressed from a separate expression cassette driven by a pol-II promoter (CMV).
In order to determine the most effective hairpin, all five hairpins were screened in vitro. Since HEK293 does not express rat Cyp26b1, a Cyp26b1 overexpression plasmid was constructed. To create the Cyp26b1 overexpression plasmid to test knockdown efficiency, the rat CYP26b1 gene sequence was amplified from rat genome cDNA using polymerase chain reaction (forward primer: TAGGAATTCCTCCTGGGTTTCTTCGAGGG; Reverse: TAG GTCGACATCCAAGAGGGTGGGAGTCA) and cloned into the pAAV-IRES-hrGFP plasmid (Agilent Technologies, Santa Clara, CA, USA). The various pAAV-Cyp26b1-shRNA plasmids or pAAV-Control shRNA plasmid was co-transfected with pAAV-Cyp26b1-IRES-hrGFP plasmid into HEK-293 cells using FuGENE R 6 Transfection Reagent (Promega)/Lipofectamine 2000 (Life Technologies, Grand Island, NY, USA).
The cells were harvested 24-48 h later, followed by RNA extraction and reverse transcription to cDNA. The RNA was extracted using RNeasy Mini Kit (Cat No. 74104). Contaminating DNA was removed (TURBO DNA-Free, Life Technologies, Carlsbad, CA, USA) and 5 µg total RNA was reverse transcribed into cDNA (SuperScript III First Strand Synthesis: Invitrogen catalog # 18080051). Relative knockdown was measured with Real-time PCR (SYBR Green: Applied Biosystems, Foster City, CA, USA) on an Applied Biosystems 7500 fast thermocycler with Cyp26b1 qPCR primers (forward: CCAGCAGTTTGTGGAGAATG; Reverse: GTCCAGGGCGTCTGAGTAGT). The results were normalized to Gapdh (forward: AACGACCCCTTCATTGAC; reverse: TCCACGACATACTCAGCAC). All primers were validated and analyzed for specificity and linearity prior to experiments (Alibhai et al., 2007).
In vivo Knock Down of Cyp26b1
An AAV2-based vector that expresses Cyp26b1 shRNA and eGFP or a non-targeted hairpin control vector (n = 10-11) was injected bilaterally into the rat NAc shell (1 µl/side over 10 min) using coordinates AP = 1.7, L = 2.2, D = −6.7. To validate the knockdown efficiency of AAV in vivo, AAV-control shRNA or AAV-Cyp26b1 shRNA (n = 6) was injected in the nucleus accumbens shell (NAcSh) in rats. Two microliter of AAV was injected per side of the NAc to increase the number of infected neurons. NAc regions with eGFP fluorescence were collected and tissues from two rats were pooled together to increase the yield of protein concentration. The expression of CYP26B1 protein level was detected using western blot described above. For behavioral tests, an AAV2-based vector that expresses Cyp26b1 shRNA and eGFP, or control vector (n = 10-11 each) was injected bilaterally with 1 µl/side into the rat NAc. Pair-housed rats were used instead of isolated rats for this study to increase relevance to the scientific community by demonstrating the effects of Cyp26b1 independent of the EC/IC procedure. Behavioral tests started 5 weeks after stereotaxic surgery (Figure 7). Accurate placement was verified immunohistochemically after the conclusion of behavioral testing.
Cocaine Self-Administration
Acquisition One week after catheter surgery, all rats were placed in operant chambers (30 cm × 24 cm × 21 cm; Med-Associates, St. Albans, VT, USA) and allowed to self-administer 0.2 mg/kg/infusion unit dose of cocaine for 2 h per session for 5 days; then 0.5 mg/kg/infusion for 3 days on a fixed ratio (FR1) schedule. Each infusion was delivered intravenously in a volume of 0.1 ml over 5.8 s. The infusion was signaled by illumination of two cue lights for 20 s, which signaled a timeout period during which no further infusions could be attained. Fixed ratio dose response: Each rat was allowed to self-administer 0.5, 0.25, 0.125, 0.06, 0.03, 0.015, 0.0075, 0.00325 mg/kg/infusion cocaine in descending order on an FR1 schedule each day for five consecutive days. Rats self-administered each dose of cocaine for 30 min. Cue responding: Rats were subjected to forced abstinence in their home cages for 7 days. On the 8th day, rats were placed in the operant chamber and allowed to self-administer saline under an FR1 schedule for 1 h with cue light presentation contingent on bar pressing. Extinction: Stably responding rats underwent a within-session extinction procedure for 3 days. All rats were allowed to self-administer 0.5 mg/kg/infusion cocaine under an FR1 schedule for 1 h followed by extinction for 3 h. During the extinction period, lever responding resulted in cuelight illumination under an FR1 schedule, but the infusion pump did not deliver cocaine. Reinstatement: All rats received 0.5 mg/kg/infusion unit dose of cocaine under an FR1 schedule for 1 h followed by 3 h of extinction. Next, all rats received an IP injection of cocaine of one of five doses (0, 2.5, 5, 10, 20 mg/kg) in a random order for each rat across the five sessions, followed by 3 h reinstatement responding session.
Immunohistochemistry
For Figure 7D, the placement of AAV-Cyp26b1shRNA expression in vivo was validated by immunofluorescence staining with eGFP. The brains were extracted, post fixed, cryoprotected and sectioned into 40 µm slices containing the NAc on a sliding freezing microtome (Leica Biosystems, Richmond, IL, USA). The slices remained floating and were rinsed with 1xPBS prior to blocking with 3% normal donkey serum (Jackson ImmunoResearch, West Grove, PA, USA) with 0.3% triton. NAc slices were incubated with eGFP primary antibody overnight (1:500, chicken, Aves labs, Tigard, OR, USA) with 3% donkey serum, 0.3% triton in 1xPBS. After washing, slices were incubated with secondary Alexa 488 donkey anti chicken antibody (Jackson ImmunoResearch, West Grove, PA, USA) in 1xPBS. Finally, slices were mounted, dehydrated using ethanol and CitriSolv (Fischer Scientific, Waltham, MA, USA) and coverslipped with DPX (Fisher Scientific).
Statistical Analysis for Behavior
Two-factor analyses of variance (ANOVAs) and two-factor repeated-measures ANOVAs were performed to compare four treatment groups. Significance between only two conditions was analyzed using a Student's t-test. All t-test data passed the Shapiro-Wilk test of normality. All data are expressed as mean ± SEM. Statistical significance was set at p < 0.05.
Descriptive Statistics
Raw and processed RNA sequencing data can be found in the Gene Expression Omnibus database with the project number GSE88736.
After the primary data alignment and analysis, 14,309 transcripts were quantified as the result of RNA-seq. The first step of our analysis was to investigate the regulation of individual transcripts. Based on the likelihood F-tests, the Venn diagram shows 106 transcripts significantly regulated (p < 0.001) by cocaine, 683 transcripts significantly regulated by environmental enrichment and 64 transcripts significant for the interaction (Figure 1B). In addition, the Venn diagram also displays the number of transcripts common among the effects, which indicates the overlapping effects of cocaine and environmental enrichment. Note that there are more transcripts upregulated than downregulated by cocaine, while there are many more downregulated transcripts than upregulated by environmental enrichment (Figure 1C).
In the significantly regulated gene lists, we first looked at the top 50 upregulated and downregulated transcripts from cocaine and environmental enrichment. These transcripts give the highest confidence of regulation. For cocaine (Figure 1D), immediate early genes such as EGR4, NR4A3, FOS, EGR2 were induced, which agrees with previous publications (Hope et al., 1992;Berke et al., 1998;Werme et al., 2000;Guez-Barber et al., 2011). For enrichment, the top regulated transcripts include transcription factors, such as NR4A1, EGR3, ARC, EGR2 and NR4A3, etc. (Figure 1E). In our search for a therapeutic target, we moved beyond individual transcripts to explore molecular pathways.
Cocaine Effects on Transcription in the NAc
To investigate which biological functions and cellular pathways are regulated by cocaine, significantly regulated transcripts were analyzed with IPA. Figure 2A lists some top-ranked diseases and biological functions of interest for the cocaine main effect. Complete IPA data can be found in the supplemental information.
Data Validation
In large data sets such as RNA sequencing, some orthogonal data cross-validation is important to provide confidence in the validity of the results. One approach of validating the RNA-seq data is to compare results with previous cocaine studies. In the top-ranked canonical pathways (Figure 2B), Signaling by Rho GTPases is represented in Supplementary Figure S1 and was previously shown to be repressed by cocaine in the NAc (Kim et al., 2009;Gourley et al., 2011). The Endoplasmic Reticulum Stress pathway identified in the current results was confirmed previously (Pavlovsky et al., 2013). In the Upstream Regulator analysis, cocaine and Creb1 are predicted to be upstream regulators by the IPA analysis (Supplementary Figures S1B,C). Full results of the upstream regulator analysis are presented in table form in the supplemental information. Additionally, Depressive Disorder and Anxiety Disorders, which show comorbidity with cocaine abuse (Sonne et al., 1994;Morton, 1999), were highlighted (Supplementary Figure S1D).
Retinoic Acid Pathway
One novel pathway highlighted in the cocaine main effect is RA signaling. Specifically, Retinoic acid receptor (RAR) activation was identified as a regulated canonical pathway (p = 0.023). Further, in the upstream regulator analysis, RA (p = 9.78E-10, activation Z-score = −1.553; Figure 2C), RA receptor α (RARA; p = 1.69E-7, Z-score = 0.723; Figure 2D), and the RARγ agonist CD437 (p = 4.25E-4, Z-score = −3.15, data not shown) were predicted to be upstream regulators of the cocaine main effect, all suggesting that RA signaling is important for the effects of cocaine. Full results of the upstream regulator analysis are presented in table form in the supplemental information. A secondary proteomic analysis from our previously-published report (Lichti et al., 2014) confirms CD437 as an upstream regulator at the protein level (p = 7.70E-4, Z-score = −2.23; Figure 5).
Environmental Enrichment Effects on Transcription in the NAc
To explore the molecular mechanism of environmental enrichment, IPA and GSEA were used to analyze the biological functions and pathways. For the environmental enrichment main effect, many mental disorder-related functions and diseases were significantly regulated ( Figure 3A). The top-ranked canonical pathways involved Protein Translation-Related EIF2 Signaling, PKA Signaling, Mitochondrial Dysfunction, Kinase Signaling, etc. (Figure 3B). Complete IPA data can be found in the supplemental information.
Retinoic Acid Signaling Pathway
The RA signaling pathway was also significantly regulated by environmental enrichment (p = 9.77E-06) ( Figure 3B). Within the RA signaling pathway, the transcripts involved in RA synthesis and translocation are upregulated by environmental enrichment, such as retinol binding proteins 1 and 4 (Rbp1 and Rbp4), retinol dehydrogenase 10 (Rdh10), and cellular RA binding protein 2 (Crabp2), while the repressors of this pathway, such as the kinases Akt and Pkc are mainly downregulated ( Figure 3C). Additionally, 215 RA target genes are regulated by environmental enrichment, most being downregulated (p = 1.84E-5, activation Z-score = −2.655; Figure 3E). Further, the agonist of the RA receptor γ, CD-437, was predicted to be inhibited as an upstream regulator of enrichment (p = 3.63E-18, activation Z-score = −7.541; Figure 3D). A secondary analysis of protein data from these rats (Lichti et al., 2014) confirms CD437 as an upstream regulator (p = 4.83E-15, Z-score = −3.13; Figure 5), highlighting the importance of RA signaling for further study.
Other Functions and Pathways
One of the most striking environmental enrichment effects was the regulation of transcription. The Gene Ontology Figure S3C; p = 1.06E-07, activation z-score = 1.420). In addition, Esrra (Supplementary Figure S3D; p = 2.11E-3, activation z-score = 2.668) was also identified as an activated upstream regulator. This result agrees with our prior research that Esrra was identified as an upstream regulator with energy metabolism proteins regulated in the proteomic study from tissue from these same rats (Lichti et al., 2014). Another transcription factor that was identified as upstream regulator was Srf (Supplementary Figure S3E; p = 2.34E-5, activation z-score = 0.101).
In addition to protein synthesis, mRNA for the Protein Ubiquitination pathway (p = 3.63E-09) was also significantly differentially regulated by environmental enrichment (Supplementary Figure S4B). These results were confirmed by the GSEA of the Gene Ontology gene set for Proteasome Complex (NES = 1.50, p = 0.024). In the protein ubiquitination process, target polyubiquitinated proteins undergo either degradation by the proteasome or de-ubiquitination by de-ubiquitinating enzymes (DUBs). Our results show an increase in transcription of ubiquitin C (UBC) and many proteasomal subunits with a coordinated decrease in transcription of DUBs (Supplementary Figures S4B,C), indicating that the enrichment condition likely enhances protein degradation through ubiquitination, an effect in agreement with our prior investigations of the proteomics of environmental enrichment (Fan et al., 2013b;Lichti et al., 2014). In addition to the pathway analysis, the role of UBC is highlighted in the Network analysis by IPA (Supplementary Figure S4D, network score = 35). Transcription of 21 out of 35 UBC target genes was upregulated by environmental enrichment. Sumoylation and ubiquitination have an important crosstalk in determining protein fate (Gill, 2004;Ulrich, 2012;Sriramachandran and Dohmen, 2014). SUMO proteins 1, 2, and 3 are the major hubs in another network determined by IPA (Supplementary Figure S4D), with 29 of 44 transcripts downregulated.
Cocaine X Environment Interaction
The transcription regulated by the interaction indicates that EC and IC rats respond differently to cocaine (Figure 4). Studying this differential response helps to identify the molecular mechanisms of the protective EC phenotype. For the interaction, Drug Dependence (p = 3.38E-6; Supplementary Figure S5A) and Release of Dopamine (p = 5.41E-7; Supplementary Figure S5B) were top-regulated diseases and biological functions in IPA. Regulated transcripts in Drug Dependence were dominated by ion channels and G-protein coupled receptors (GPCRs). Release of Dopamine was also dominated by GPCRs.
Retinoic Acid Signaling Pathway
Retinoic acid receptor (RAR) activation was also identified in the Canonical Pathway analysis (p = 5.13E-05) in the interaction of Cocaine and Enrichment (Figure 4B). Some essential genes in this pathway showed significant interaction at the mRNA level, such as retinol binding protein (Rbp4), retinol dehydrogenase (Rdh10), retinoid X receptor (Rxr), etc. (Figure 4D). Additionally, RA was identified as an upstream regulator (p = 1.6E-2; Figure 4C). These results indicate that the activity of the RA signaling pathway and the transcripts of RA target genes are differentially regulated by cocaine in EC and IC rats, and are therefore a promising avenue for developing novel addiction therapeutics.
Other Functions and Pathways
Protein kinases play a significant role in post-translational modification to activate or inhibit target proteins through phosphorylation. For the interaction, Activation of Protein Kinases was also identified as differently regulated (p = 1.81E-6; Supplementary Figure S5C). This prediction is based on the regulation of kinase and kinase related transcripts, including 6 different mitogen-activated protein kinases (MAPKs; Supplementary Figure S5D). In addition to the expression of kinases in general, Protein Kinase A Signaling (p = 1.6E-05) ranked 12th among the regulated canonical pathways. Our results also show that corticosterone was identified as an upstream regulator in the NAc (p = 8.66E-4; Supplementary Figure S5F). This result is not surprising because it has been found that EC rats have blunted induction of corticosterone induced by psychostimulants (Stairs et al., 2011;Crofton et al., 2015). One important function in the NAc that responded differently to cocaine in EC and IC rats was Transport of Ca 2+ (p = 3.67E-5; Supplementary Figure S5G). Angiotensinogen (AGT), ATPase (ATP2B4) and voltage-dependent calcium channel (CACNA1G) lead to activation of transport of Ca 2+ , while parathyroid hormone-like hormone (PTHLH) and arginine vasopressin (AVP) lead to inhibition (Supplementary Figure S5G). In addition to the above functions, NMDA receptor downstream transcripts (p = 2.72E-6; Supplementary Figure S5E) also responded differently to cocaine in EC and IC rats.
Validation of Quantitative RNA Sequencing
To confirm the validity of the quantitative RNA sequencing technique, real-time PCR was used to quantify the mRNA expression of Fabp5 and Hspa5 from the same rats. mRNA fold change results from RNA sequencing and qPCR of Fabp5 (R 2 = 0.5155, p < 0.0001) and Hspa5 (R 2 = 0.4301, p = 0.0003) was compared for every rat. These results indicate qPCR and RNA sequencing results are well correlated in addition to the orthogonal validation of comparing the current results against previous cocaine and enrichment findings.
Regional Enhancement (NAc shell) of Retinoic Acid-Related Genes
A topographic transcriptomic analysis for genes with selectively enhanced gene expression in the NAc shell from the Allen Brain Atlas identified 178 transcripts with selective expression in the NAc shell ≥ 1.25 fold over surrounding regions. An IPA analysis of these 178 genes revealed the central RA pathway as being significant with Stimulated by RA 6 (Stra6), Retinoic acid receptor β (Rarb), Fatty (McCaffery and Drager, 1994;Zetterstrom, 1999). These nine NAc shell-enhanced genes are depicted in the pathway for Figure 6H.
RA Signaling in NAc Shell Increases Cocaine Self-Administration
Two strategies for changing concentrations of RA are to either alter synthesis or degradation. However, because there are several different subtypes of Rdhs and Raldhs, knocking down any one could be compensated by the other RA synthases. Therefore, we decided to alter RA concentration by knocking down the degradation enzyme, Cyp26b1, since RA signaling was regulated by cocaine, enrichment, and their interaction, and many genes showed selectively enhanced expression in the NAc shell. With decreased expression of Cyp26b1, RA subsequently builds up in neurons, enhancing RA downstream signaling (Kim et al., 2014). To knock down the expression of Cyp26b1, an shRNA targeting the Cyp26b1 coding sequence was designed and knockdown efficiency was examined in vitro and in vivo. Compared with a non-targeted control shRNA, Cyp26b1 shRNA significantly decreased expression of Cyp26b1 in HEK293 cells in both mRNA and protein ( Figure 7A) and in rat NAc shell at the protein level ( Figure 7B). Figure 7C shows the schematic diagram of the experimental timeline for behavioral tests. For behavioral testing, AAV-Cyp26b1shRNA or control vector was injected into the NAc shell of rats ( Figure 7D, atlas comparison Figure 7E). For acquisition of cocaine self-administration, results demonstrate significant acquisition across sessions [F(4,80) = 3.855, p = 0.006; Figure 7F] with a trend for increased acquisition in Cyp26b1 shRNA rats [interaction F(4,80) = 2.142, p = 0.083]. A ttest showed that Cyp26b1 shRNA rats responded significantly more for Sessions 3 and 4. For maintenance responding (Figure 7G) Figure 7H; t(20) = −2.572, p = 0.018]. Finally, in a withinsession extinction procedure (Figure 7I), rats with Cyp26b1 knockdown exhibited increased responding compared to control rats, with significant main effects of Session [F(2,36) = 15.317, p < 0.001] and the Vector main effect is at the threshold of the p-value cutoff for significance [F(1,18) = 4.433, p = 0.05], indicating that knockdown of Cyp26b1 in the NAc shell enhances drug-seeking behavior. In cocaine-induced reinstatement, high variance prevented detection of a difference in reinstatement between the two groups of rats at any dose.
DISCUSSION
These studies highlight mechanisms of the protective addiction phenotype of environmental enrichment and identify novel targets that play a role in regulating addiction-related behavior. Among the novel molecules and pathways identified, the RA signaling pathway was predicted to play an important role in the differential response to cocaine in EC and IC rats. Separately, a topographic transcriptomic analysis identified RArelated genes and RA target genes as being selectively expressed in the NAc shell, further highlighting the likely importance of RA in addiction-related behavior. These results generated a hypothesis-driven experiment that confirmed the role of RA in addiction-related behavior.
Cocaine Transcriptomic Effects
In the upstream analysis of cocaine-regulated transcripts, cocaine itself and CREB1, an important mediator of the effects of psychostimulants (Carlezon et al., 1998;Pliakas et al., 2001), ranked at the top of the list as upstream regulators, strongly supporting that cocaine-regulated transcription seen here agrees with previous studies. Our prior research demonstrated that enriched rats have less phospho-CREB in the NAc and that decreasing CREB function in the accumbens shell produces a behavioral phenotype identical to that of environmental enrichment (Bowling and Bardo, 1994;Green et al., 2002Green et al., , 2010, an interesting behavioral phenotype marked by increased sensitivity to the rewarding effects of stimulants (as measured by CPP) coupled with decreased self-administration (Pliakas et al., 2001;Larson et al., 2011).
Environmental Enrichment and Transcription
Compared to the cocaine main effect, there were approximately 5X more transcripts significantly regulated by environmental enrichment, revealing that environment has a much more extensive impact on gene expression than cocaine exposure. Transcription factors were the most regulated gene sets by environmental enrichment. Another impressive difference between EC and IC rats is the regulation of transcripts involved with EIF2 signaling. Even though Eif2 itself is not regulated at the mRNA level, decreased upstream inhibitors of Eif2 and upregulated downstream ribosomal subunits suggests regulation of the protein translation process. In addition to protein synthesis, the protein degradation system is also altered by enrichment. Prior research from this laboratory demonstrated that expression of ubiquitin target proteins is different in EC and IC rats (Fan et al., 2013a,b). Ubiquitination is also important in differential expression of proteins from the current rats (Lichti et al., 2014). The current mRNA data revealed increased transcription of ubiquitin and proteasomal subunits, but reduced mRNA expression for deubiquitinating enzymes in the NAc, possibly suggesting enhanced protein degradation in EC rats. Taken together, enhanced protein translation and degradation likely indicate more rapid protein turnover in EC rats compared with IC rats.
Retinoic Acid Signaling
Retinoic acid genes are selectively expressed in the NAc shell, as shown by a topographic transcriptomic analysis of the Allen Brain Atlas and the published literature (McCaffery and Drager, 1994;Zetterstrom, 1999). Given the importance of the NAc shell to addiction, the RA signaling pathway offers promising targets for novel therapeutic development for cocaine addiction. One previous report found that constitutive whole-body RARβ/RXRβ or RARβ/RXRγ double null mice had a selective decrease of dopaminergic D2 receptors in the shell of the NAc, with concomitant decrements in locomotor and rotarod performance (Krezel et al., 1998). Thus far, however, there have not been any other systematic studies aimed at understanding RA signaling and addiction.
The current report provides converging evidence from the Upstream Regulator analysis (predicting function) of environmental enrichment and the vector knockdown of Cyp26b1 suggesting that RA signaling activity in the NAc shell increases susceptibility to drug taking. Given that every core component of the RA signaling pathway involves protein interactions with small molecules (i.e., retinoids) this pathway is a prime candidate for the development of selective small molecule inhibitors as possible pharmacotherapeutics for cocaine addiction. One advantage of choosing targets in this pathway is that nine components of this pathway have some enhancement of expression in the NAc shell (Figure 6), providing some degree of regional selectivity and thereby decreasing the likelihood of unwanted side effects. These regionally-enhanced components include the binding proteins Stra6, Rbp1, and Fabp5, the synthesis enzymes Adh10, Aldh1a1, and Aldh1a3, the degradation enzyme Cyp26b1, and the RA receptors Rarβ and Rxrγ. Ongoing experiments are investigating which of these targets would be most suitable for pharmacotherapeutic development.
CONCLUSION
Environmental factors play a significant role in individual differences in responses to drugs of abuse. Although some transcription factors, such as FosB and CREB, have been reported to mediate the protective addiction phenotype of environmental enrichment (Green et al., 2010;Zhang et al., 2014), RNA sequencing technology has produced a broader view of transcriptomic responses of the NAc in EC and IC rats after cocaine. Taken together, the discovery-based transcriptomic analyses and hypothesis-driven behavioral tests have revealed RA signaling as a novel mechanism involved in regulating the responses to both cocaine and environmental enrichment, revealing a novel pharmacotherapeutic target for the effective treatment of drug addictions.
AUTHOR CONTRIBUTIONS
TG, YZ, EC, and FK participated in the design of the work. TG, YZ, EC, FK, MS, DL, and XF participated in the acquisition, analysis, or interpretation of data for the work. All authors participated in the final approval of the manuscript and the review. | 8,537.2 | 2016-11-16T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
A Case of Limbic Encephalitis: Antibody LGI1 Associated Encephalitis
Limbic encephalitis (LE) is a rarely encountered disease in modern clinical practice. It is basically autoimmune in nature, with its pathophysiology determined by a number of antibodies to neural surface proteins. However, 20% of cases are paraneoplastic and have a tumour source in the body which leads to limbic encephalitis secondarily. Studies have shown that VGKC complex (Voltage Gated Potassium Channels) is one of the common antibodies found in this disease and is represented by three such proteins out of which LGI1 is most prevalent in limbic encephalitis. This entity is characterized by monophasic presentation with acute or subacute onset memory loss, confusion, seizures and psychiatric symptoms. The presence of anti LGI1 antibodies in serum or CSF confirms the diagnosis. We hereby report a case of a 65 year old woman with 8 months history of peripheral neuropathy followed by memory impairment, focal seizures with behavioural and psychiatric changes. No tumour was found on imaging and the classic paraneoplastic panel was negative. However, she was found to be positive for Antibody LGI1 proving autoimmune basis of her illness which responded majorly to immuno-modulatory therapy in the form of high dose steroids.
Introduction
Limbic Encephalitis (LE) is an uncommon neurological disorder encountered rarely in day to day clinical practice. In Indian settings, a clear statistical data on its frequency of occurrence is not available but few case reports have been published here and there. It is characterized by a variety of clinical presentations and lack of symptom specificity including seizures, memory problems, irritability, depression, confusion and dementia leading to wide range of differential diagnosis [1]. Viral infections, inflammatory or autoimmune disorders (lupus, Sjogren's, Hashimoto thyroiditis and CNS vasculitis), toxic and metabolic encephalopathy, paraneoplastic syndromes and autoimmunity become the possible clinical etiologies. Classically, limbic encephalitis was mostly considered a paraneoplastic disorder [2,3] but many new antibodies are being discovered in patients of Limbic Encephalitis which are not associated with tumours, thereby arising possibility of autoimmune basis to this disease [1,4].
Limbic encephalitis is a rare disorder affecting the medial temporal lobe of the brain. It was first described by Breirly et al. in 1960s when they reported 3 cases of sub-acute encephalitis involving the limbic area [5]. In 1968 Corsellis et al. coined the term "limbic encephalitis" and also established the relationship between limbic encephalitis and systemic cancer [2]. Many neuronal antibodies have been associated with LE. These can be directed either against intracellular antigens (classic paraneoplastic), including Hu, CV2/CRMP5, Ma2 and amphiphysin or against cell membrane antigens, including VGKC (voltage-gated potassium channels), N-methyl-D-aspartate receptor (NMDA) and Glutamic Acid Decarboxylase receptors (GAD) expressed in the neuropil of hippocampus and cerebellum [6,7]. The former category is related to cancer (paraneoplastic limbic encephalitis), showing limited response to immunomodulatory therapy whereas the latter category is autoimmune in nature being less frequently associated with tumours and responding significantly better to immunomodulatory therapy. It has also been recognized that some patients presenting with limbic encephalitis with negative antibody screen in serum and CSF show full recovery after treatment with steroids or immunomodulatory therapy indicating an autoimmune etiology [8,9].
Here we report an interesting case of a 65 year old woman who presented with memory loss, repetitive episodes of focal seizures without loss of consciousness, with behavioural abnormalities. Imaging suggested a diagnosis of limbic encephalitis which was further confirmed with antibody LGI1 being positive. Further imaging to look for etiology revealed no evidence of any mass lesion, tumour or secondaries anywhere in the body. This case presented here is unique and scientifically relevant, as it intends to raise awareness of Limbic Encephalitis associated with autoimmune antibody, as a potentially reversible cause of a medical emergency.
Case Report
A 65 year old lady presented to the emergency department of this hospital in April 2016 with presentation of 3 episodes of generalized tonic clonic seizures one after the other in a span of half an hour leaving the patient in an unconscious state on arrival to hospital.
Since last 20 days, patient had been getting repeated attacks of focal seizures which used to begin in the form of localized twitching over face progressing over any one half of the face along with repeated jerky movement of head lasting for 5-10 seconds at once, followed by a state of postictal confusion and drowsiness which used to last for another 20 to 30 minutes. Sometimes the focal seizures used to involve one hand but were mostly noticed over face. These episodes used to initially occur 3 to 4 times in a day but their frequency gradually increased in due course of time so much so that on the day of presentation to this hospital, she was getting these attacks after every 20 minutes approximately with sustained postictal confusion and drowsiness until the next attack happened. Along with these recurrent seizure episodes, patient also developed memory impairment especially with disturbances in recent memory and inability to remember family members but with intact past memory. She had behavioural impairment in the form of speaking inappropriate words at unusual times, along with laughing and crying abruptly at odd hours. She used to forget if she had eaten food or if she had gone to toilet. She also sometimes used to start removing her clothes inappropriately at odd times.
For last 9 months, she used to also complain of burning sensation over both feet mainly over the sole of feet at all times of the day with no associated numbness or needle like sensations. This pointed to peripheral neuropathy with which she was suffering from in the last year with underlying cause not known. Patient had undergone cholecystectomy operation 8 years back in a private hospital due to cholelithiasis. No records of the surgery were available.
On admission, she was given loading dose of antiepileptic lorazepam and phenytoin to control seizure activity along with symptomatic treatment with close monitoring of vitals. She regained consciousness the next day but the seizures in the form of focal and complex partial seizures continued to occur at close intervals. Also in the intervening time she continued to remain confused and drowsy. Her MMSE (Mini Mental State Examination) at that time was 18/30 and behavioural abnormalities also persisted.
Serum sodium was found to be low initially (108 meq/ml) which was corrected with 3% NS infusion along with dietary modifications. It rose to normal levels (132 meq/ml) in 2 days and continued to remain in normal range thereafter.
Contrast enhanced MRI Brain was done which revealed hyper intense signal in T2W and T2W FLAIR (Fluid Attenuated inversion recovery) images in the right hippocampus ( Figure 1). No restricted diffusion or contrast enhancement was seen. Differentials included limbic encephalitis, herpetic encephalitis and sub-acute infarct. Considering the clinical profile of the patient, the possibility of limbic encephalitis was suggested. CSF was done which showed no significant abnormality. EEG was done which revealed no significant abnormality (Figure 2). Further to look for tumour, HRCT Thorax (High resolution computed tomography) and CT Abdomen were done where no evidence of any tumour growth was found (Figures 3 and 4).
Autoimmune panel revealed Positive Antibody
LGI1 which is part of VGKC. Patient was then started with immunomodulatory therapy in the form of high dose steroids (Injection Methylprednisolone 1 gm IV OD) along with sodium valproate and clonazepam. The frequency of seizures dropped down drastically and no more episodes were witnessed after 48 hours of starting this treatment. Even the memory disturbances and behavioural abnormalities got minimized significantly (MMSE rose to 24/30) and patient's general condition improved considerably. Steroids were later changed to oral alternatives and patient had been keeping well thereafter. After a week of discharge, PET-CT scan (Positron Emission Tomography) was done in order to completely rule out possibility of any tumour in the body, and it revealed no significant abnormality. Patient had responded well to immunomodulatory treatment and remains in follow up since then.
Discussion
Limbic encephalitis is a diagnostic challenge. It classically presents with a constellation of symptoms including memory disturbances, complex partial seizures along with behavioural abnormalities and sleep disturbances [2]. Etiology wise, autoimmune limbic encephalitis can either be associated with antibodies to intracellular neuronal antigens, which include all previously known paraneoplastic antigens, and the other associated with antibodies to cell membrane antigens, including the VGKC, NMDA receptor, GAD which have no evidence of any tumour in body [10].
The paraneoplastic ones are difficult to treat and may respond minimally only if the primary tumour source is removed but the autoimmune ones without any tumour presence in body respond considerably more to immunomodulatory treatment and their prognosis is comparatively better. Limbic encephalitis which is antibody LGI1 associated (part of VGKC) is many a times linked with development of hyponatremia. In a study conducted in the United States, 60% of the patients experienced hyponatremia which can be related to syndrome of inappropriate secretion of Antidiuretic Hormone by the LGI1 expression in the hypothalamus and the kidney [11].
In the above mentioned patient, she presented with a history of recurrent seizures along with memory disturbances and behavioural abnormalities. She had sleep disturbances and frequent vacant stare looks. She also had a past history of peripheral neuropathy in the form of burning sensation in both soles of feet. She was detected with hyponatremia at the time of admission even before anti-epileptic drugs were started. Hyponatremia is known to occur commonly in people with anti LGI1 associated limbic encephalitis [11]. However, hyponatremia is a nonspecific sign, because it is also caused by other medical conditions such as decreased salt intake, salt loss by vomiting and diarrhoea, or medical disorders including hypothyroidism or syndrome of inappropriate antidiuretic hormone secretion. Cognitive impairment and confusion, which are cardinal manifestations of LE, are also found in patients with hyponatremia. In our patient the symptoms of seizures along with memory impairment and behavioural abnormalities persisted even after correction of sodium levels to normal. So an alternative diagnosis had to be considered. CSF study was found to be normal ruling out any infective (viral, bacterial, tubercular) cause of her symptoms. MRI Brain revealed non enhancing altered signal intensity in the right hippocampus without any diffusion restriction and suggested the possibility of LE. Keeping this in mind, further work up was done to confirm the provisional diagnosis and to know its etiology, so that effective treatment could be planned.
To rule out paraneoplastic etiology, CT whole abdomen, HRCT thorax and PET CT of whole body was done which showed no evidence of any tumour, mass lesion or secondaries anywhere in body. Results of autoimmune workup revealed positive LGI1 antibody which is part of VGKC. That confirmed our diagnosis. Patient was then immediately started on immunomodulatory treatment in the form of Injection, Methyl Prednisolone 1 gm IV OD along with sodium valproate and clonazepam. Subsequently, patient showed major signs of recovery in the form of reduction in number of seizure episodes and memory improvement. Behavioural disturbances settled down and patient became fully oriented to time place and person. Henceforth, a final diagnosis of limbic encephalitis was made which was autoimmune in nature associated with antibody LGI1.
Anti-LGI1 LE usually involves the medial temporal area, which causes memory dysfunction and seizures. This disorder was recently identified as autoimmune encephalitis, which has distinctive clinical features, such as limbic dysfunction, seizures, and occasional hyponatremia. In Anti-LGI1 LE, MRI may show signal changes in medial temporal lobes and basal ganglia. However, a number of patients do not show abnormalities on MRI or PET, and evidence of inflammation, including pleocytosis or protein elevation, can be absent from CSF. Accordingly, typical clinical manifestations are necessary for prompt antibody diagnoses.
Corticosteroids, intravenous immunoglobulin and plasma exchange are most frequently used as therapeutic options. Other immune-suppressive agents like cyclophosphamide and rituximab can also be utilized as a therapeutic option [9,12]. A study conducted by Saidha et al. [13] also shows promising results with decrease in seizure frequency and improvement in behaviour memory testing in these patients with use of mycophenolate mofetil. Development of these therapeutic options highlights the need for early detection and aggressive management for patients with autoimmune limbic encephalitis. However in cases where evidence of tumour is found, efforts to remove it or treat it by means of surgery, radiotherapy or chemotherapy can be sought after. If this disease process is considered early, diagnosed promptly and treated appropriately, it can be reversed and the patient can be restored to their premorbid state.
Conclusion
Limbic encephalitis is a rare diagnosis but if detected timely and with underlying cause detected correctly, proper effective treatment is available which can remarkably benefit a patient. Classically thought to be of autoimmune etiology, more and more cases are being diagnosed having a paraneoplastic source as well. Henceforth, a thorough investigative work up is required to look for paraneoplastic evidence to disease. It is important that we consider autoimmune limbic encephalitis in our differential diagnosis in adults with encephalopathy, particularly if psychiatric symptoms are seen. Therapeutic responsiveness of this condition reiterates the importance of diagnosing a reversible neurologic pathology. With timely intervention, clinicians may be able to avoid permanent cognitive and behavioural damage. | 3,056.2 | 2016-01-01T00:00:00.000 | [
"Medicine",
"Biology",
"Psychology"
] |
Effects of shear during the cooling on the rheology and morphology of immiscible polymer blends
The aim of this work was the generation of a microfibrillar structure in immiscible polymer blends using a new technique. The blend polymer model is the emulsion formed by a mixture of polypropylene (PP) with polystyrene (PS) in the proportion of PP10/PS90. In the first case the pellets of polystyrene and polypropylene were blended on the twin-screw mini extruder in the classical manner with different shear rates. In the second case, the same blend was prepared in the same way followed by a dynamic cooling at different shear rates. The phase morphologies of PP in the blend were determined by Scanning Electron Microscopy on two directions (transversal and longitudinal direction to the flow). In the two cases, the dispersed phase size decreased with the increase of the shear rate in the extruder. An anomaly was registered in the classical method at 200 rpm, where the size of the dispersed phase increases with the increase of the shear rate. The dynamic cooling technique recorded smaller diameters (4 to 5 times) of the dispersed phase compared to the conventional technique. In addition, the reappearance of the microfilaments at 200rpm was observed. The rheological properties were determined by RS100 (Thermo Scientific Haake). Using this new technique, it was noticed that he elastic modulus increases with one decade compared to the classical method and the complex viscosity decreases with the increase of the shear rate. An anomaly was registered in the classical technique, where the dynamic viscosity at 200rpm increases with increasing the shear rate in the extruder.
Introduction
Blending of polymers is an effective way to obtain new materials with improved properties [1]. Since most of the polymers are immiscible, thus a decrease of specific properties is usually observed. The extrusion of an immiscible polymer blend in which the dispersed phase forms in situ reinforced nodules or fibers is the preferable way to achieve the highest mechanical properties [2]. In order to obtain such structure, so called microfibrillar composites were developed [3][4][5].
The morphology of the dispersed phase is the key parameter which governs the all properties of the materials. It is directly related to the processing conditions and concentration of the dispersed phase [6]. Various phenomena were related to the dynamics of droplet such as deformation, rupture, time- dependent behavior and coalescence. Several studies have been carried out in order to control the morphology of the blend polymer, such as the rapid cooling of the extruded in iced water technique [7] and, quenching the extrudate [8][9][10]. The major studies were conducted on specific devices [11,12]. Most properties of immiscible polymer blends depend essentially on the morphology development during the phase preparation [13]. The principal goal to structuring the polymer blends was to attain a possible finest morphology and good distribution of the dispersed phase in the matrix. During the preparation of the blend polymer, a great diversity of morphologies (droplets, filament and co-continuous phases) can be obtained [1]. The final size and shape of the dispersed phase are determined by the competition between coalescence and breakup. Coalescence is a process in which two or more particles collide and physically merge into one particle. Two classes of coalescence play critical roles in the development of morphology during the processing of immiscible polymer blends [14,15]. In the first class, the coalescence is caused by the thermal Brownian motion and, in the second class; the collisions are caused by hydrodynamic forces [16]. On the other hand the breakup of the dispersed phase during the extrusion of the polymer blends is mainly controlled by the interfacial tension, rheological properties and complex strain field of the extruder [17]. However, deformation and rupture of the droplets are governed by two factors: the viscosity ratio between dispersed phase and the matrix phase, and the capillary number which presents the ratio between the viscous and interfacial forces [18].
This investigation was performed on the extruder in order to approximate real conditions for the preparation of the polymer blend. The main goal of this study was to introduce a new technique for the preparation of the polymer blend. The classical technique consists in the blending at different shear rate followed by the solidification of the molten blend under air flow. However in this early stage, the static coalescence plays an important role. In order to suppress the static coalescence of the dispersed phase, the dynamic cooling technique was adopted to produce the finest system morphology and smallest droplet sizes. This technique was based on two parameters during the cooling stage (rotation speed and temperature). Due to the difference in sensitivity of the blend compounds, the temperature variation during extrusion can produce different ratios of viscosity in the blend and that gives an opportunity for manipulation finest droplet and fibrillar behavior in blends. The dynamic solidification of the dispersed phase due to the crystallization was shown to be a good way to improve the stability of the fibrillar structures with respect to the shear out during extrusion.
Materials
The materials used in this study were commercial grades. Standard polystyrene was supplied by BASF S.A (Germany). Its average molar mass was w = 215160g/mol with polydispersity coefficient I=2.35. Isotactic polypropylene (100-GA01) was produced by INEOS Company (Switzerland), with a melt flow index (MFI) of 0.9g/10min and density of 0.923 g/cm 3 .
Sample preparation
The principal factor to take into account was the difference in temperatures between the dispersed phase (melting temperature) and the matrix phase (glass transition temperature). All polymers were dried at least 18 h in a vacuum oven at 80°C. The polymer granulate was pre-mixed and injected in the twin screw mini extruder (Thermo Haake) at 200°C. In the conventional technique the samples were extruded at (30, 60, 90, 120, 150 and 200) rpm and 200°C, followed by cooling stage on air in the extruder. The dynamic cooling technique used for structuring the blend polymers is described as follows. The sample was blended for 3mn at 200°C and 60rpm, followed by a rapid change of the extrusion speed while keeping the same temperature. After 3min, the cooling phase was started, maintaining the speed extrusion to the point of crystallization of the dispersed phase (maximum torque of the extruder), in order to freeze the generated microstructures (suppress of the static coalescence).
Scanning Electron Microscopy
A Hitachi S-3340 Scanning Electron Microscopy (SEM), operating at 15KV accelerating voltage, was used to observe the blend morphology. The surfaces taken from cryofractured samples were observed by SEM in two directions to the flow. After drying procedure, the samples were coated with goldpalladium thick film. The adopted technique to detect the morphological evolution was based on an image of the sample on two-dimensional (2D) object, because the fibrillar morphology can be viewed as droplets when the fibrils are observed perpendicularly to the flow direction. For this reason, it was suggested [19,20] to examine the collection of images from two orthogonal planes [21]. An average of 50 to 100 particles per sample was examined.
Thermal properties
The thermal properties were carried out by Differential Scanning Calorimetry using a Q100 DSC (TA Instruments, USA), with samples of about 5 mg in aluminum pans under nitrogen atmosphere. The temperature and heat flow calibrations were performed using Indium (T m =156.6°C) at a heating and cooling rate of 10°C/min. The matrix (PS) presents a glace transition temperature of about 96°C and a melting and crystallization temperatures of the dispersed phase (PP) as 165°C and 112°C respectively. These parameters are valuable information, which enables us to get a large difference between the crystallization of the dispersed phase and the glass transition of the matrix temperatures.
Rheological properties
The rheological characterization of the pure and blends polymers was carried out using a stress controlled rheometer (RS 100) from Thermo Haake Rheostress, using parallel plate geometry (diameter 20 mm and a gap of 1 mm). Samples with 20 mm and a thickness of 1 mm were compression molded from the test material using hydraulic press (Carven-USA) at 200°C. For all measurements, the stress amplitude was fixed at 10Pa to insure the viscoelastic linearity of the response. All materials functions were determined in the frequency range from 0.001 to 10s -1 . Figure 1 shows the micrographs of PP10/PS90 blends prepared with different shear rates using static cooling technique. The micrographs on the left side were obtained on the perpendicular facet to the flow, on the right side the micrographs were photographed on the longitudinal facet. The sample sheared at 30rpm presents deformed droplets with a shape factor slightly greater than 1.
1. Static cooling technique blends morphology
The sample sheared at 60rpm presents a mixture of filaments and droplets oriented in the flow direction. The blend polymer sheared at high speed presents a circular shape of the dispersed phase without orientation; this is probably due to a static cooling stage. The dispersed phase has the necessary time to relax before the crystallization. It was found that the major reduction of the particle size of the dispersed phase occurs at an initial stage of less than 1 or 2min, during blending [13]. However, an anomaly was recorded at 200rpm, where the size of the dispersed phase increases with the increase of the extrusion speed. These observations are in good agreement with those obtained by [22,23]. It is may be due to the intensity of the extrusion which promotes dynamic calescence of the dispersed phase.
Dynamic cooling technique blends morphology
The morphology development obtained by dynamic cooling technique is shown Figure 2. When the sample was not cooled by dynamic technique, the dispersed phase was presented in the transversal side by circular coarseness form. In the longitudinal side, a fibrillar form of the dispersed phase was developed with length that exceeds 100µm. When the dynamic cooling technique was applied, the size of the dispersed phase decreases sharply. Reducing the size of the dispersed phase is inversely proportional to the increase of the extrusion speed. Fibrillar microstructure reappeared again in the speed extrusion of 200rpm with a shape factor of 20; this may be due to the extrusion intensity. The treatment of the micrographs (Image J software) of the blends prepared with the two techniques was realized in terms of the average diameter of the dispersed phase ( Table 1). The average diameter of the dispersed phase was decreased with increasing the extrusion speed in the two cases. The dynamic cooling technique develops diameters five times lower than in the classical technique. This may due to the suppression of the coalescence in the cooling stage.
Rheological properties 3.2.1. Static cooling technique
Frequency sweeps were performed at the same conditions (F=0.001 to 10 s -1 and 200°C). Figure 3-a and Figure 3-b present respectively loss modulus and complex viscosity in function of the shear frequency for the blends prepared with the static cooling technique. All values of storage modulus were situated in between of those of the components of the blends Figure 3-a. A less decrease was noticed when the extrusion speed increases. An enhanced elasticity at low frequency is obviously shown. The shoulder in G' of the blend with droplet represents an additional relaxation due to the deformation of the dispersed phase [24]. Molten polymers show constant viscosity (Newtonian behavior) only in the low frequency region. At higher frequency, the viscosity generally decreases with the increase of the shear rate, indicating a pseudoplastic behavior. The rheological properties of the blends were predominant by the matrix phase. An anomaly was observed in the evolution of the complex viscosity of the blend prepared at 200rpm, where the complex viscosity increases with the increasing of the extrusion speed. This may be explained by the increase of the size morphology of the dispersed phase at the same extrusion speed (Table 1) [22][23]. At low frequencies a negative deviation behavior of the complex viscosity was noticed, which probably originates in slip at the interface [25][26] due to poor or insufficient adhesion between the matrix and the dispersed phase. Figure (4-a) shows the evolution of storage modulus of PP10/PS90 blends prepared with dynamic cooling technique. All the storage moduli are situated in between the moduli of the components. The elasticity of the blends was predominant by the elasticity of the dispersed phase. At high frequencies the storage moduli of the blends were superposed on the moduli of both components. The anomaly shown in the first case (static cooling) was suppressed when the cooling dynamic technique is used. All values of complex viscosities Figure (4-b) were situated in between the components of the blends (PP and PS). Increasing the rotation speed in the extruder decreases slightly Newonian viscosity. The rheological properties of the polymer blends are predominated by those of the disperse phase, thus increasing the elasticity and viscosity of the blends. At low frequencies, a pronounced deviation on the elastic modulus was observed. It presents the additional relaxation of the dispersed phase [27]. Obviously, these rheological parameters deviating from linear rheological model seems a useful approach to assessing the critical point of phase separation in polymer blend [28].
Conclusion
In this study the morphology evolution in polymer blend composed by polypropylene (10%) and polystyrene (90%) was evaluated by two techniques, static and dynamic cooling. The new technique was based on the crystallization of the dispersed phase during the extrusion. The size of the droplets generated by the dynamic cooling technique is 4 to 5 times less than that generated by the conventional technique. The new technique minimizes the coalescence of the dispersed phase during the blend solidification. At high shearing, the cooling dynamic technique generated microfibrillar morphology with a high shape factor (20). In the two techniques, the reduction in domain size by the increase of the shear rate in the molten blends originates from the breakup of the droplets caused by the coalescence suppression effect. The storage modulus developed by dynamic cooling technique was higher compared to the conventional technique by one decade. At static cooling technique the elastic moduli and the complex viscosity were predominant by the matrix phase, at dynamic cooling technique; the rheological properties of the blends were predominant by the dispersed phase. The storage modulus in low frequencies region developed by the new technique present a pronounced deviation; this corresponds to an increase in elasticity and longer relaxation time compared to the matrix. The main result in this study is the important role played by the coalescence in the determining of the morphology of the blends. | 3,379.6 | 2014-08-22T00:00:00.000 | [
"Materials Science"
] |
Application of 99mTc-Labeled WL12 Peptides as a Tumor PD-L1-Targeted SPECT Imaging Agent: Kit Formulation, Preclinical Evaluation, and Study on the Influence of Coligands
With the development of PD-1/PD-L1 immune checkpoint inhibitor therapy, the ability to monitor PD-L1 expression in the tumor microenvironment is important for guiding therapy. This study was performed to develop a novel radiotracer with optimal pharmacokinetic properties to reflect PD-L1 expression in vivo via single-photon emission computed tomography (SPECT) imaging. [99mTc]Tc-HYNIC-WL12-tricine/M (M = TPPTS, PDA, ISONIC, 4-PSA) complexes with high radiochemical purity (>97%) and suitable molar activity (from 100.5 GBq/μmol to 300 GBq/μmol) were prepared through a kit preparation process. All 99mTc-labeled HYNIC-WL12 radiotracers displayed good in vitro stability for 4 h. The affinity and specificity of the four radiotracers for PD-L1 were demonstrated both in vitro and in vivo. The results of biodistribution studies displayed that the pharmacokinetics of the 99mTc-HYNIC-conjugated radiotracers were significantly influenced by the coligands of the radiotracers. Among them, [99mTc]Tc-HYNIC-WL12-tricine/ISONIC exhibited the optimal pharmacokinetic properties (t1/2α = 8.55 min, t1/2β = 54.05 min), including the fastest clearance in nontarget tissues, highest tumor-to-background contrast (e.g., tumor-to-muscle ratio, tumor-to-blood ratio: 40.42 ± 1.59, 14.72 ± 2.77 at 4 h p.i., respectively), and the lowest estimated radiation absorbed dose, highlighting its potential as a clinical SPECT imaging probe for tumor PD-L1 detection.
Introduction
Since immunotherapy began in 2011, immune checkpoint inhibitor (ICI) therapy based on the programmed death protein 1 (PD-1)/programmed death protein ligand 1 (PD-L1) signaling pathway has played an increasingly important role in cancer treatment [1].Recent clinical studies have demonstrated that PD-L1 expression has a significant influence on therapeutic efficacy, and positive responses to PD-1/PD-L1 ICI therapy are only possible in patients with tumors that contain high levels of expressed PD-L1 [2,3].The main method used in the clinic to assess PD-L1 expression in the tumor microenvironment is invasive biopsy in conjunction with immunohistochemistry (IHC).However, due to the high heterogeneity of PD-L1 expression within both primary tumors and metastases, the ability of IHC detection to accurately evaluate the PD-L1 expression status in realtime and predict treatment response is limited, especially for patients with metastatic diseases [4].Compared with IHC, nuclear medicine techniques allow real-time, noninvasive visualization of tumor PD-L1 expression in the whole body, which can overcome the shortcomings of IHC methods.
Currently, developing a novel WL12-based radiotracer with optimal excretion kinetics is highly desirable to obtain a high target-to-background ratio for PD-L1-positive tumor imaging.In this study, we focused on developing 99m Tc-labeled HYNIC-conjugated WL12 for SPECT imaging, which is expected to provide a simple, convenient, and inexpensive diagnostic tool for assessing the status of PD-L1 in cancer patients.99m Tc is the most widely used radionuclide due to its excellent nuclide properties, low cost, and high availability through a 99 Mo/ 99m Tc generator.Herein, 6-hydrazino nicotinamide (HYNIC) was used as the bifunctional chelating group to conjugate the -Orn of WL12 for the following reasons: (1) A high 99m Tc-labeling efficiency can be achieved at very low concentrations of HYNICconjugated biomolecules [44], which is beneficial for obtaining radiotracers with high molar activity through a kit preparation process while following the requirements of good manufacturing practice (GMP).(2) During the 99m Tc-HYNIC radiolabeling process, coligands are essential because they occupy the remaining sites of the 99m Tc coordination sphere to form 99m Tc-HYNIC complexes with good stability.Due to the significant effect of coligands on their physicochemical properties, such as hydrophilicity, charge, and stability [44][45][46][47][48], a strategy is convenient for optimizing the pharmacokinetic properties of 99m Tc-HYNIC complexes via the selection of coligands.In this study, lyophilized kits containing different coligands, tricine/M (M = triphenylphosphine-3,3 ′ ,3 ′′ -trisulfonic acid trisodium salt (TPPTS), isonicotinic acid (ISONIC), 3,5-pyridine dicarboxylic acid (PDA), or 4-pyridinesulfonic acid (4-PSA), respectively), were developed to prepare 99m Tc-WL12 complexes with high radiochemical purity and suitable molar activity.In vitro and in vivo evaluations of these radiotracers were performed, and the results were compared to develop a favorable PD-L1-targeted tumor imaging agent with optimal pharmacokinetic properties for SPECT imaging.
Peptide Synthesis
First, the peptide WL12 was synthesized via solid-phase synthesis.The HYNIC moiety was introduced into the -Orn of the WL12 peptide to produce the HYNIC-WL12 peptide (Scheme 1).Both the WL12 and HYNIC-WL12 peptides were purified by high-performance liquid chromatography (HPLC).The identities of the final peptides were confirmed by electrospray ionization mass spectrometry (ESI-MS).The chemical purity of each sample was analyzed by HPLC.The ESI-MS and HPLC results obtained for the final peptides are shown in the Supplemental Information (Figures S1-S4). in vivo evaluations of these radiotracers were performed, and the results were compared to develop a favorable PD-L1-targeted tumor imaging agent with optimal pharmacokinetic properties for SPECT imaging.
Peptide Synthesis
First, the peptide WL12 was synthesized via solid-phase synthesis.The HYNIC moiety was introduced into the -Orn of the WL12 peptide to produce the HYNIC-WL12 peptide (Scheme 1).Both the WL12 and HYNIC-WL12 peptides were purified by high-performance liquid chromatography (HPLC).The identities of the final peptides were confirmed by electrospray ionization mass spectrometry (ESI-MS).The chemical purity of each sample was analyzed by HPLC.The ESI-MS and HPLC results obtained for the final peptides are shown in the supplemental information (Figures S1-S4).
Radiochemistry
As shown in Scheme 1, the radiochemical synthesis of [ 99m Tc]Tc-HYNIC-WL12-tricine/M (M = TPPTS or PDA or ISONIC or 4-PSA, respectively) was performed by adding 99m Tc eluent into the kit containing the HYNIC-WL12 peptide and coligands and reacting the substances at 100 °C.[ 99m Tc]Tc-HYNIC-WL12-tricine/M (M = TPPTS or PDA or ISONIC or 4-PSA, respectively) was obtained with a high labeling yield (>98%) under the optimized reaction conditions.The molar activity (Am) of the obtained radiotracer ranged from 1.5 GBq/µmol to 300 GBq/µmol according to the added radioactivity.
The octanol-water partition coefficient (log D) of the four radiotracers was determined in a mixture of phosphate-buffered saline (PBS, 0.1 M, pH = 7.4) and n-octanol.As shown in Table 1, all four radiotracers were hydrophilic.Among them, [ 99m Tc]Tc-HYNIC-WL12-tricine/TPPTS was the most hydrophilic, with a Log D of −1.71 ± 0.09, followed by
Radiochemistry
As shown in Scheme 1, the radiochemical synthesis of [ 99m Tc]Tc-HYNIC-WL12tricine/M (M = TPPTS or PDA or ISONIC or 4-PSA, respectively) was performed by adding 99m Tc eluent into the kit containing the HYNIC-WL12 peptide and coligands and reacting the substances at 100 • C. [ 99m Tc]Tc-HYNIC-WL12-tricine/M (M = TPPTS or PDA or ISONIC or 4-PSA, respectively) was obtained with a high labeling yield (>98%) under the optimized reaction conditions.The molar activity (A m ) of the obtained radiotracer ranged from 1.5 GBq/µmol to 300 GBq/µmol according to the added radioactivity.
To evaluate the stability, the radiotracers were incubated in saline at room temperature or mouse serum at 37 • C for 4 h, and the [ 99m Tc]Tc-HYNIC-WL12-tricine/M samples were analyzed by radio-HPLC.As shown in Figure 1, the RCP of all the samples remained above 99%, which demonstrated the good stability of [ 99m Tc]Tc-HYNIC-WL12-tricine/M in vitro.To evaluate the stability, the radiotracers were incubated in saline at room temperature or mouse serum at 37 °C for 4 h, and the [ 99m Tc]Tc-HYNIC-WL12-tricine/M samples were analyzed by radio-HPLC.As shown in Figure 1, the RCP of all the samples remained above 99%, which demonstrated the good stability of [ 99m Tc]Tc-HYNIC-WL12-tricine/M in vitro.
Cellular Uptake and Blocking Assays
To evaluate the ability of [ 99m Tc]Tc-HYNIC-WL12-tricine/M (M = TPPTS, ISONIC, PDA, and 4-PSA) to target PD-L1, mouse colon cancer MC38 and human PD-L1 genetransfected MC38 cells (MC38-B7H1) were used.The PD-L1 expression levels in the MC38-B7H1 and MC38 cell lines were evaluated by flow cytometry.As shown in the supplemental information (Figure S6), PD-L1 expression was lower in MC38 cells than in MC38-B7H1 cells.Therefore, the affinity and specificity of 99m Tc-labeled WL12 radiotracers for PD-L1 were evaluated in the MC38-B7H1 and MC38 cell lines as positive and negative models, respectively.
In cellular uptake studies, the RCP of 99m Tc-labeled WL12 radiotracers was greater than 97%, with Am values ranging from 30 to 60 GBq/µmol.As shown in Figure 2A-D, the cellular uptake of the four 99m Tc-HYNIC-WL12 radiotracers in PD-L1-positive MC38-B7H1 cells was significantly greater than that in MC38 cells at each time point, indicating that the uptake of the radiotracers in the cells was dependent on PD-L1 expression.
Cellular Uptake and Blocking Assays
To evaluate the ability of [ 99m Tc]Tc-HYNIC-WL12-tricine/M (M = TPPTS, ISONIC, PDA, and 4-PSA) to target PD-L1, mouse colon cancer MC38 and human PD-L1 genetransfected MC38 cells (MC38-B7H1) were used.The PD-L1 expression levels in the MC38-B7H1 and MC38 cell lines were evaluated by flow cytometry.As shown in the Supplemental Information (Figure S6), PD-L1 expression was lower in MC38 cells than in MC38-B7H1 cells.Therefore, the affinity and specificity of 99m Tc-labeled WL12 radiotracers for PD-L1 were evaluated in the MC38-B7H1 and MC38 cell lines as positive and negative models, respectively.
In cellular uptake studies, the RCP of 99m Tc-labeled WL12 radiotracers was greater than 97%, with A m values ranging from 30 to 60 GBq/µmol.As shown in Figure 2A-D, the cellular uptake of the four 99m Tc-HYNIC-WL12 radiotracers in PD-L1-positive MC38-B7H1 cells was significantly greater than that in MC38 cells at each time point, indicating that the uptake of the radiotracers in the cells was dependent on PD-L1 expression.The PD-L1 specificity of 99m Tc-labeled WL12 radiotracers was further confirmed by blocking studies (Figure 2E-H).The uptake of the four radiotracers in MC38-B7H1 cells was clearly blocked by the WL12 peptide (** p < 0.01, *** p < 0.001).In the presence of 11 µM WL12, the cellular uptake decreased by approximately 94%, 68%, 82%, and 87% for coligands as tricine/M (M = TPPTS or PDA or ISONIC or 4-PSA), respectively.
Effect of Molar Activity on Biodistribution
To determine the suitable molar activity of the radiotracer for radiolabeling with a kit formulation, we first investigated the effect of excess ligands on the biological properties of [ 99m Tc]Tc-HYNIC-WL12-tricine/TPPTS using female C57BL/6N mice bearing MC38-B7H1 tumors, which were confirmed to be PD-L1-positive tumor models by immunohistochemistry (Supplementary Information, Figure S7).All animal experiments were approved by the Institutional Animal Care and Use Committee of Beijing Normal University and were carried out in accordance with the Principles of Laboratory Animal Care and the guidelines of the Ethics Committee.
Ex Vivo and In Vivo Studies 2.4.1. Effect of Molar Activity on Biodistribution
To determine the suitable molar activity of the radiotracer for radiolabeling with a kit formulation, we first investigated the effect of excess ligands on the biological properties of [ 99m Tc]Tc-HYNIC-WL12-tricine/TPPTS using female C57BL/6N mice bearing MC38-B7H1 tumors, which were confirmed to be PD-L1-positive tumor models by immunohistochemistry (Supplementary Information, Figure S7).All animal experiments were approved by the Institutional Animal Care and Use Committee of Beijing Normal University and were carried out in accordance with the Principles of Laboratory Animal Care and the guidelines of the Ethics Committee.
Dosimetry Estimation
Based on the biodistribution results, time-activity curve fitting and subsequent dose calculation were performed using OLINDA/EXM, version 1.1.The nonlinear curve fitting parameters were applied to derive the best curve fit for the residence time of activity in the source organ.The derived organ residence times were entered in the assumed human model data to derive the absorbed doses to all the organs and the whole-body effective dose, which were generated in mSv/MBq.As shown in Table 8, the effective doses of [ 99m Tc]Tc-HYNIC-WL12-tricine/M (M = TPPTS, PDA, ISONIC, or 4-PSA) were 2.90 × 10 −3 , 2.12 × 10 −3 , 2.24 × 10 −3 , and 2.53 × 10 −3 mSv/MBq, respectively.The high effective dose of [ 99m Tc]Tc-HYNIC-WL12-tricine/TPPTS was attributed to the high absorbed radiation dose in the kidneys.Among them, [ 99m Tc]Tc-HYNIC-WL12-tricine/ISONIC displayed the lowest organ doses for the liver and kidneys and the lowest effective dose for the whole body.
Discussion
A simple, efficient, and reproducible kit-based radiolabeling process is essential for the clinical application of 99m Tc-radiolabeled radiopharmaceuticals.In this study, a kit formulation was developed for the routine preparation of [ 99m Tc]Tc-HYNIC-WL12-tricine/M (M = TPPTS, PDA, ISONIC, and 4-PSA).During the process of optimizing the kit formulation, a high radiolabeling yield (>97%) of the radiotracers could be obtained with 5 µg of the cold peptide HYNIC-WL12 (the lowest amount tested, Figure S5).However, significant glass surface absorption of these radiotracers was also observed under these low levels of cold peptide.After all the [ 99m Tc]Tc-HYNIC-WL12-tricine/TPPTS solution was removed from the common glass vial, more than 80% of the radioactivity remained on the glass wall.The glass-surface absorption of the radiotracer could be effectively reduced by utilizing silanized glass vials and adding more HYNIC-WL12 peptide as a carrier.However, the addition of cold ligands decreased the molar activity of radiotracers.Generally, high molar radioactivity is needed for receptor-targeted probes due to the limited binding sites and low concentration of biomarkers (usually at the nanomolar level).To evaluate
Discussion
A simple, efficient, and reproducible kit-based radiolabeling process is essential for the clinical application of 99m Tc-radiolabeled radiopharmaceuticals.In this study, a kit formulation was developed for the routine preparation of [ 99m Tc]Tc-HYNIC-WL12-tricine/M (M = TPPTS, PDA, ISONIC, and 4-PSA).During the process of optimizing the kit formulation, a high radiolabeling yield (>97%) of the radiotracers could be obtained with 5 µg of the cold peptide HYNIC-WL12 (the lowest amount tested, Figure S5).However, significant glass surface absorption of these radiotracers was also observed under these low levels of cold peptide.After all the [ 99m Tc]Tc-HYNIC-WL12-tricine/TPPTS solution was removed from the common glass vial, more than 80% of the radioactivity remained on the glass wall.The glass-surface absorption of the radiotracer could be effectively reduced by utilizing silanized glass vials and adding more HYNIC-WL12 peptide as a carrier.However, the addition of cold ligands decreased the molar activity of radiotracers.Generally, high molar radioactivity is needed for receptor-targeted probes due to the limited binding sites and low concentration of biomarkers (usually at the nanomolar level).To evaluate the impact of excessive cold HYNIC-WL12 ligand, a comparative biodistribution experiment was conducted between [ 99m Tc]Tc-HYNIC-WL12-tricine/TPPTS injection with or without excess cold ligand.The results showed that both tumor uptake and tumor-to-background ratios were significantly reduced when excess cold HYNIC-WL12 was removed by further HPLC purification.This result suggested that excessive mass of cold HYNIC-WL12 exerts a positive effect on [ 99m Tc]Tc-HYNIC-WL12-tricine/TPPTS sensitivity in PD-L1-positive tumors.As shown in the biodistribution data of [ 99m Tc]Tc-HYNIC-WL12-tricine/TPPTS with different molar activities (Max: >3 TBq/µmol by HPLC purification, Min: 1.5 GBq/µmol), tumor uptake exhibited a bell-shaped trend with decreasing molar activity.A similar phenomenon was also found for several reported peptide-based radiotracers [49][50][51] and PD-L1-targeted radiolabeled antibodies [52,53].This was probably because cold HYNIC-WL12 could occupy nonspecific or PD-L1 binding sites in nontarget tissues [8,54], allowing more "free state" radiotracers to accumulate in tumors with high PD-L1 expression.We concluded that a radiotracer with an A m ranging from 100.5 GBq/µmol to 300 GBq/µmol yielded the best tumor uptake and tumor-to-background contrast.At the typical radiopharmaceutical dose (740-1110 MBq), a kit containing 15 µg of HYNIC-WL12 peptide in a silanized glass vial would be suitable for routine clinical 99m Tc radiolabeling.
The results of the IC 50 determination (as shown in Table S1) displayed that the introduction of the HYNIC moiety in the -Orn of WL12 has little influence on the affinity of HYNIC-WL12 for the PD-L1 protein.The results of in vitro cellular assays further demonstrated that four 99m Tc-labeled HYNIC-WL12 radiotracers bind to tumor cells in a PD-L1 expression-dependent manner.The cellular uptake of the four radiotracers in MC38-B7H1 cells (PD-L1-positive) was approximately 2.38-6.73-foldhigher than that in MC38 cells (PD-L1-negative), which could also be significantly blocked by the addition of the WL12 peptide (p < 0.01).The uptake of [ 99m Tc]Tc-HYNIC-WL12-tricine/M (M = TPPTS or PDA or ISONIC or 4-PSA) in MC38-B7H1 tumors was 18.22 ± 4.57, 4.61 ± 1.32, 6.63 ± 0.80, and 6.96 ± 1.15%ID/g at 2 h p.i., respectively, which was 3.11-6.93-foldgreater than that in MC38 tumors at the same time points (Figure 3, 2.63 ± 0.98, 1.48 ± 0.55, 1.70 ± 0.27, and 1.22 ± 0.19%ID/g, respectively).The difference in radioactive uptake between the two tumor models was consistent with the IHC staining results, in which PD-L1 expression in the MC38-B7H1 tumors was higher than that in the MC38 tumors (Figure S7).In addition, radioactive accumulation in MC38-B7H1 tumors was reduced by approximately 76.44-89.44% in the blocking group (Figure 3, p < 0.01).These results suggested that the tumor uptake of [ 99m Tc]Tc-HYNIC-WL12-tricine/M was PD-L1-specific and associated with the expression level of PD-L1.
General Information
99m Tc was obtained from a 99 Mo/ 99m Tc generator (Guangzhou Diqi Trading Co., Ltd., Guangzhou, China) and eluted with saline.All chemical reagents and solvents performed using the Student's t-test for unpaired data to determine the significance of differences.Differences at the 95% confidence level (p < 0.05) were considered statistically significant.
Funding:
This work was supported by the National Natural Science Foundation of China [21976019] and the National HighLevel Hospital Clinical Research Funding [2023-PUMCH-E-007].Institutional Review Board Statement: Animal studies were carried out in accordance with the principles of laboratory animal care and the guidelines of the Ethics Committee of Beijing Normal University (permit no.BNUCC-EAW-2023-002). Informed Consent Statement: Not applicable.Data Availability Statement: Data are contained within the article. | 3,969 | 2024-07-01T00:00:00.000 | [
"Medicine",
"Chemistry"
] |
Phospholipid translocation captured in a bifunctional membrane protein MprF
As a large family of membrane proteins crucial for bacterial physiology and virulence, the Multiple Peptide Resistance Factors (MprFs) utilize two separate domains to synthesize and translocate aminoacyl phospholipids to the outer leaflets of bacterial membranes. The function of MprFs enables Staphylococcus aureus and other pathogenic bacteria to acquire resistance to daptomycin and cationic antimicrobial peptides. Here we present cryo-electron microscopy structures of MprF homodimer from Rhizobium tropici (RtMprF) at two different states in complex with lysyl-phosphatidylglycerol (LysPG). RtMprF contains a membrane-embedded lipid-flippase domain with two deep cavities opening toward the inner and outer leaflets of the membrane respectively. Intriguingly, a hook-shaped LysPG molecule is trapped inside the inner cavity with its head group bent toward the outer cavity which hosts a second phospholipid-binding site. Moreover, RtMprF exhibits multiple conformational states with the synthase domain adopting distinct positions relative to the flippase domain. Our results provide a detailed framework for understanding the mechanisms of MprF-mediated modification and translocation of phospholipids.
A minoacyl phospholipids, such as lysylphosphatidylglycerol (LysPG), are an important class of lipids with crucial biological functions in antibiotic resistance, pathogenicity, stress response, and motility of bacterial cells 1 . They are widely distributed in microbes such as Staphylococcus aureus, Mycobacterium tuberculosis, and many other pathogenic bacteria [2][3][4] . An integral membrane protein known as Multiple Peptide Resistance Factor (MprF) catalyzes biosynthesis of aminoacyl phosphatidylglycerol (aaPG) by using phosphatidylglycerol and aminoacyl-tRNA as substrates [5][6][7] . MprF orthologs from different species can modify phosphatidylglycerol (PG) or cardiolipin with distinct aminoacyl groups including lysyl, alanyl, arginyl, and ornithyl groups 1,5,8 . When mprF was knocked out in S. aureus, the bacteria became highly susceptible to daptomycin and cationic antimicrobial peptides (CAMPs) including defensins from human neutrophil and bacteriocins 9,10 . Daptomycin is a cyclic lipopeptide antibiotic of last resort for treating infections caused by methicillin-resistant and vancomycinresistant S. aureus (MRSA and VRSA) as well as other multidrug-resistant gram-positive bacteria 11,12 . Numerous mutations of mprF have been identified in the daptomycinresistant (DAP-R) S. aureus and among them, several gain-offunction mutations were verified as the causes of DAP-R phenotype [13][14][15][16][17][18] .
MprF is a bifunctional protein with two separate domains and the antimicrobial peptide resistance of S. aureus requires the presence of both domains 5,9 . While the cytoplasmic domain of MprF functions as an aaPG synthase 19 , the membrane-spanning domain serves as a phospholipid flippase mediating translocation of aaPG from the inner leaflet to the outer leaflet of the membrane 9 . Occurrence of LysPG in the outer leaflet of bacterial membrane might help to repel CAMPs from reaching the membrane surface through electrostatic repulsion 20 , modulate the peptide-membrane interactions 21 or inhibit the formation of membrane leaks induced by the CAMPs 22 . As a widespread virulence factor causing CAMP and antibiotic resistance in pathogenic bacteria, MprF is considered as a promising target for development of anti-infective strategies against drug-resistant bacteria 23,24 . Although crystal structures of the synthase domain of MprF have been solved recently and provided preliminary insights into its substrate-binding sites 19 , little is known about the mechanisms of aaPG translocation, interaction of MprF with antibiotics or coupling between the two domains, mainly due to lack of the full-length MprF structure. Here we present the cryoelectron microscopy (cryo-EM) structures of MprF from Rhizobium tropici (RtMprF) at two different states, unraveling notable features related to LysPG recognition and translocation as well as antibiotic resistance.
Results
Overall structure and oligomeric state of RtMprF. Recombinant RtMprF protein with a hexahistidine tag fused to its carboxylterminal region was expressed in E. coli cells, and the protein was purified through immobilized metal affinity chromatography in solutions with either n-dodecyl-β-D-maltoside (β-DDM) or glycodiosgenin (GDN) (see "Methods" for more details). For singleparticle cryo-EM analysis, the purified RtMprF protein was further reconstituted with PG into lipid nanodiscs, a nanoscale complex system consisting of a small patch of lipid bilayer and the target protein surrounded by engineered membrane-scaffold protein 25 ( Supplementary Figs. 1a and 2a). Two-dimensional (2D) and three-dimensional (3D) classes of the single-particle images indicate that RtMprF exists mainly as homodimers in nanodiscs ( Supplementary Figs. 1b, c and 2b-d). The cryo-EM maps for RtMprF(DDM)-nanodiscs and RtMprF(GDN)-nanodiscs (full-length RtMprF protein purified in β-DDM/GDN and reconstituted in nanodiscs) were refined to 3.7 and 2.96 Å resolution, respectively (Supplementary Figs. 1d-f and 2e-g). The cryo-EM map of RtMprF(GDN)-nanodiscs exhibits well-defined features allowing construction of a structural model with ∼94.4% amino acid residues of the full-length RtMprF protein and identification of four lipid molecules per monomer (Supplementary Fig. 3 and Table 1). The structure of RtMprF(DDM)-nanodiscs contains three lipid molecules and represents a state different from that of RtMprF(GDN)-nanodiscs as discussed below. The crystal structure of the catalytic domain of RtMprF in the C-terminal region has been solved at 2.0 Å resolution ( Supplementary Fig. 4), and serves as the initial model for building the corresponding region in the cryo-EM structures of the full-length RtMprF.
While the MprF protein from S. aureus (SaMprF) may oligomerize into homodimers or homotetramers 26 , RtMprF in nanodiscs mainly exists as an arch-shaped homodimer with the C2 symmetry axis running through the dimerization interface ( Fig. 1a-d). The function of RtMprF is related to polymyxin B (a lipopeptide antibiotic) resistance, acid tolerance, nodulation competitiveness of R. tropici under low pH conditions 27,28 . To analyze the oligomeric state of RtMprF on the membrane, the E. coli membrane with recombinant RtMprF protein was prepared and incubated with a bifunctional amine-reactive crosslinking reagent (disuccinimidyl suberate, DSS). After crosslinking, the products were solubilized in β-DDM solution, separated through sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and detected through western blot by using the anti-His-Tag antibody. The crosslinking result demonstrates that the dimeric form of RtMprF protein is present in the membrane (Fig. 1e). Dimerization of RtMprF is mainly mediated by the transmembrane domain and a total surface area of 12080.9 Å 2 is buried in the RtMprF dimer with 4263.3-Å 2 interface area between two adjacent monomers, indicating the dimeric state of RtMprF is stable in lipid nanodiscs, according to the result of quantitative analysis through PISA (proteins, interfaces, structures, and assemblies) program 29 . Curiously, four lipid molecules, namely PG1, PG2, and their symmetry-related molecules PG1′ and PG2′, are located at the monomer-monomer interface (Fig. 1f). They collectively contribute to dimerization of RtMprF by forming polar and hydrophobic interactions with two adjacent monomers simultaneously. While the protein-protein contact within the dimer contributes merely 629.5-Å 2 interface area, the four PG molecules form 1646.3 and 1643.1-Å 2 interface area with the two adjacent monomers, respectively, suggesting that these interfacial lipid molecules have crucial roles in stabilizing the dimeric state of RtMprF. In RtMprF(GDN)-nanodisc, the hydrophobic group of a GDN molecule occupies the binding site of the 2-acyl chain of PG2 molecule observed in RtMprF (DDM)-nanodisc, while the other PG molecule (PG3) at the peripheral region of the dimerization interface is located nearby GDN ( Supplementary Fig. 3c).
The carboxyl-proximal region of RtMprF forms a water-soluble synthase domain active in synthesizing aaPG on the cytoplasmic Fig. 1 Overall structure of RtMprF(DDM)-nanodic homodimer at 3.7-Å resolution. Cryo-EM densities of RtMprF dimer embedded in a nanodisc viewed along membrane plane (a) and along membrane normal from periplasmic side (b). Color codes: light green and light blue, two adjacent monomers of RtMprF dimer; yellow, lipid molecules; gray, nanodisc scaffold, uninterpreted lipid and other densities from adjacent RtMprF dimer. Cartoon models of RtMprF dimer viewed along membrane plane (c) and along membrane normal from periplasmic side (d). The four phospholipid molecules at the dimer interface are highlighted as sphere models and the LysPG inside monomers are shown as stick models. PG phosphatidylglycerol. e Western blot of the crosslinked products of RtMprF protein in the membrane. The asterisk indicates the position of RtMprF monomer, while the arrowhead labels the position of RtMprF dimer. DSS was used for the crosslinking experiment. The experiment was repeated independently three times with similar results. f The interfacial tubular void space at the dimer interface accommodating the four acyl chains from the PG molecules. The sectional view of the surface model of monomer A is shown and monomer B is omitted for clarity. PG molecules are shown as stick models. side 8 . It contains the binding sites for aminoacyl-tRNA and PG according to the previous work on the synthase domain structures of MprF homologs from Pseudomonas aeruginosa and Bacillus licheniformis (PaMprF and BlMprF) 19 . The synthase domain of RtMprF (Pro539-Gly860) superposes well with those of PaMprF and BlMprF (root-mean-square deviation of α-carbons at 0.977 and 1.297 Å, respectively). They share similar tandem repeats of the General Control Nonderepressible 5 (a histone acetyltransferase of a transcriptional regulatory complex) related N-acetyltransferase (GNAT) folds (GNAT folds 1 and 2) ( Supplementary Fig. 4). The GNAT fold 1 of the synthase domain is covalently linked to TM14 in the flippase domain through a flexible loop (residues 531-539, invisible in the map). Meanwhile, it associates closely with the flippase domain through Cytoplasm TM14 TM10 TM11 90°T M12 TM13 TM9 TM1 TM8 TM2 TM5 TM4 H2 H4 H3 B2 B1 TM3b TM3a H1 TM6 TM7a TM7b LysPG1 LysPG1 LysPG2 Subdomain 2 non-covalent interactions (Fig. 2d). First, amino acid residues from α3 (Asp604 and Glu611), α4 (Asp635), and β5 (Arg639) regions of the GNAT fold 1 in RtMprF form close interactions (salt bridges and hydrogen bond) with Arg456 from the TM10-TM11 loop, Arg136 from TM3b, and Thr145 from TM4 (Fig. 2e). Their interactions contribute to the formation of Interface 1 between the synthase domain and the flippase domain. Second, the loop region before α10 of GNAT fold 2 contacts with the β-hairpin loop between TM5 and TM6 from the cytoplasmic surface of Subdomain 1 of the flippase domain. This interface (Interface 2) is mainly stabilized by three pairs of hydrogen bonds (Fig. 2f). Among the amino acid residues involved in the interdomain interactions, Arg456 of RtMprF is highly conserved in other MprF homologs ( Supplementary Fig. 7).
The flippase domain of RtMprF harbors two internal lipidbinding sites within membrane-embedded cavities. Remarkably, the flippase domain of RtMprF contains two deep cavities located on the cytoplasmic and periplasmic sides of the membrane, respectively (Cavities C and P, Fig. 3a). Cavity C opens to the inner leaflet of the lipid bilayer through a lateral portal measuring 6-8-Å wide (Fig. 3b). Meanwhile, the cavity penetrates deep into a central region of Subdomain 1 near the estimated middle plane of lipid bilayer. The wall of Cavity C is mainly shaped by the transmembrane helices in Subdomain 1, namely TM2, TM3a-3b, TM4, TM5, TM6, TM7a-7b, and TM8. On the other side, Cavity P is surrounded by TM1, H1, TM7b, and TM8 from Subdomain 1 as well as TM9, H3, TM10, TM11, TM12, and TM13 from Subdomain 2. While the internal pocket of Cavity P also extends deep into the central region close to the tip of Cavity C, it has a lateral portal on the other side opening 6-8-Å wide toward the outer leaflet of the lipid bilayer (Fig. 3a). In the central region, the two cavities are separated from each other by a barrier around Arg304 on TM8 (Fig. 3c). Arg304 forms a salt bridge with Glu280 and is hydrogen-bonded to Ala274 and Gly275 from TM7a-7b loop region. The hydrogen bonds between Arg304 and the carbonyl groups of Gly275 and Ala274 might serve to stabilize the side chain of Arg304 in a favorable orientation for establishing ionic interaction with the side chain of Glu280. Strikingly, one LysPG molecule each (LysPG1 and LysPG2) is trapped in Cavities C and P of the RtMprF(GDN)-nanodisc structure (Fig. 3a). The cryo-EM density of LysPG1 is welldefined and matches well with the model, while the putative LysPG2 shows well-resolved density for the fatty-acyl chains and relatively weak density for the head group. The density of LysPG1 exhibits three arms with similar shape and length ( Supplementary Fig. 3b). The first arm is buried inside Cavity C and surrounded by polar amino acid residues, such as Asn117, Asp234, Ser238, and Arg304. The second arm is sandwiched between TM7a and TM4, and surrounded by hydrophobic residues. The density of the third arm is the weakest among the three and it is located at the outmost region exposed to the hydrophobic area of lipid bilayer. Although the local resolution of the three individual arms may appear insufficient for distinguishing the phospho-[3-lysyl (1-glycerol)] head group and two fatty-acyl groups, interpretation of the lipid molecule is assisted by considering the compatibility of the individual groups with their local environments. As a result, the first arm is assigned as the phospho-[3-lysyl(1glycerol)] head group and the other two arms most likely belong to the fatty-acyl chains of LysPG molecule. The model is further verified through mutagenesis and biochemical analysis (described below). As shown in Fig. 3b, the LysPG1 molecule in Cavity C has the characteristic hook-shaped polar head group inserted deeply toward the center of Subdomain 1, while the hydrophobic fattyacyl chains of LysPG1 extend outwardly to the inner leaflet of lipid bilayer. The fatty 2-acyl chains of LysPG1 crawl upward along the external surface of RtMprF and form hydrophobic interactions with non-polar amino acid residues from TM7a and TM4. The head group of LysPG1 bends upward to a position near the middle plane of lipid bilayer, instead of pointing downward to the cytoplasmic surface. In comparison, PG molecules at the dimerization interface adopt the head-group-down inward-facing configuration common to bulk phospholipids in the inner leaflet (Fig. 1f).
The lysyl group of LysPG1 molecule binds to Asp234 and Tyr307 through its side-chain ε-amino group (Fig. 3d). The αamino group of LysPG is sandwiched between Ala274 and Tyr303, and located merely ∼4.4 Å from the barrier site (Arg304) between Cavities C and P. Between the α-amino group of LysPG and Ala274, there is a water molecule serving as a bridge connecting them through hydrogen bonds ( Supplementary Fig. 3d). Asp234 of RtMprF is conserved in the homolog from Pseudomonas aeruginosa and is replaced by a similar residue (Glu, also an acidic residue favorable for lysyl group binding) in some other species. Besides, Tyr303, Arg304, and Tyr307 in RtMprF are part of the YRXXY motif highly conserved among various MprF homologs ( Supplementary Fig. 7). When they are individually mutated to alanine in RtMprF, the amount of LysPG co-purified with the D234A and R304A mutant protein samples is reduced significantly compared to the wild type (Fig. 3e, f), indicating that these two charged residues are crucial for LysPG binding. The results are consistent with the current model of LysPG1 molecule with its lysyl head group buried inside Cavity C. Besides, the head-group glycerol of LysPG1 is hydrogen-bonded to Asn117 and forms van der Waals contact with Phe155. Thereby, the lipid-binding site in Cavity C functions to stabilize the head group of LysPG1 in the upward position through specific interactions. The characteristic hook-like shape of LysPG1 in Cavity C indicates that flipping of LysPG may begin at the initial stage of translocation process on the inner leaflet side instead of the outer leaflet side.
Unlike LysPG1, LysPG2 located in Cavity P has an extended conformation ( Supplementary Fig. 3b). Its head group is positioned on the periplasmic surface and both fatty-acyl chains extend deep into the cavity toward the center of Subdomain 1 (Fig. 3a). There are actually three well-resolved fatty-acyl chain densities inside Cavity P of the RtMprF(GDN)-nanodisc structure ( Supplementary Fig. 8a). Among the three acyl chains, the two long ones join each other at the head-group region near periplasmic surface and belong to a phospholipid molecule tentatively assigned as LysPG2 in the model. The third acyl chain is much shorter than the other two, likely belonging to a detergent molecule. While the density for the two fatty-acyl chains of the phospholipid molecule are fairly strong and clear, the head-group density is relatively weak. When the map contour level is lowered, the density corresponding to the lysyl group becomes visible and appears to be connected to the glycerol group ( Supplementary Fig. 8b). Therefore, the lipid density feature in Cavity P is interpreted as a LysPG molecule with highly flexible lysyl group. Alternatively, a PG molecule may also occupy the site. The backbone glycerol-3-phosphate group of LysPG2 may bind to adjacent residues through hydrogen bonds, whereas the fatty-acyl chains form van der Waals and hydrophobic interactions with nearby residues including Arg304, Phe276, and several other hydrophobic residues (Fig. 3g). Such a well-resolved lipid feature found in Cavity P is only present in RtMprF(GDN)-nanodisc but not in RtMprF(DDM)-nanodisc.
In the RtMprF(DDM)-nanodisc structure, Cavity C is also occupied by a LysPG molecule similar to the one observed in RtMprF(GDN)-nanodiscs ( Supplementary Fig. 3e, b). In contrast, Cavity P only contains some detergent-like density features much weaker than that of LysPG2 in RtMprF(GDN)-nanodiscs. Thinlayer chromatography (TLC) and mass spectrometry analysis results indicate that the RtMprF protein sample does contain LysPG in the lipids co-purified along with the protein (Supplementary Fig. 9a, b). As the E. coli cell does not produce endogenous LysPG by itself 6,20 , the LysPG molecules bound to RtMprF should be its own product. Through the TLC experiments, the stoichiometry of LysPG co-purified with RtMprF (DDM) protein sample is estimated to be ∼1.2 LysPG molecules per RtMprF monomer ( Supplementary Fig. 9c). The strong lipid density in Cavity C indicates that it may take up one LysPG molecule, whereas Cavity P in RtMprF(DDM)-nanodiscs is likely occupied by detergent or lipid molecule at very low occupancy. In comparison, the LysPG:protein stoichiometry of the RtMprF (GDN) sample is ∼2.6 ( Supplementary Fig. 9f, g), much higher than the LysPG:protein stoichiometry of the RtMprF(DDM) sample. Such difference may account for the strong phospholipid density in Cavity P of RtMprF(GDN)-nanodisc due to higher occupancy of LysPG, consistent with the interpretation of the lipid as LysPG2.
Evident conformational differences exist between RtMprF (DDM)-nanodisc and RtMprF(GDN)-nanodisc structures in the regions around Cavity P, despite that their overall structures are similar (Fig. 3h). By superposing them, it is apparent that TM12 moves 2.2 Å closer to TM11 and TM14 moves 4.2 Å closer to TM9 upon binding of LysPG2 in Cavity P. Besides, the amino acid residues involved in binding the fatty-acyl chains of LysPG2 are also adjusted slightly. Previously, it was found that a truncation mutant of SaMprF lacking the bulk region of Subdomain 1 is inefficient in translocating LysPG to the outer leaflet and failed to confer CAMP resistance, whereas the production of LysPG was unaffected 9 . As Subdomain 1 is involved in the formation of lipid-binding sites in both Cavities C and P, the absence of Subdomain 1 will abolish the lipidtranslocating function of MprF mainly because the mutant protein cannot either bind LysPG from the inner leaflet or host it on the outer leaflet side.
The lipid-binding site in Cavity C of RtMprF can accept different aaPGs. In the flippase domain of RtMprF, the amino acid residues (D234, Y303, R304, Y307) involved in binding the head group of LysPG directly or indirectly are identical or highly similar among various homologs ( Supplementary Fig. 7). While MprF homologs from R. tropici and S. aureus (and many others) catalyze biosynthesis of LysPG, the one from P. aeruginosa and one of the two homologs from C. perfringens (CpMprF2) produce alanyl-phosphatidylglycerol (AlaPG) instead of LysPG 7,34 . To find out whether the flippase domain of RtMprF can accept AlaPG or not, we have constructed an MprF chimera (RtPaMprF) by fusing the flippase domain of RtMprF with the synthase domain of PaMprF (Fig. 4a). The chimeric RtPaMprF protein was expressed in E. coli and can be purified in sufficient amount for lipid extraction and analysis. The lipid analysis result indicates that the RtPaMprF is active in synthesizing AlaPG but not LysPG, and AlaPG could be co-purified along with the RtPaMprF protein ( Fig. 4b). Mutation of the four key residues involved in binding LysPG results in significant decrease of the amount of AlaPG copurified with the protein (Fig. 4c). Therefore, it is apparent that the preference for different aaPG is mainly conferred by the synthase domain of RtMprF, presumably by the selective binding of different aminoacyl-tRNA in its active site.
For the flippase domain of RtMprF, the substrate specificity appears to be broad, allowing it to accept either LysPG or AlaPG. Similarly, the flippase domains of MprF homologs from S. aureus and C. perfringens also exhibit relaxed substrate specificities 35 . The head groups of LysPG and AlaPG most likely share the same binding pocket in Cavity C of the flippase domain. Binding of AlaPG in the pocket is more sensitive (than LysPG) to the mutation of the four key residues involved in aaPG binding, as the alanyl group contains a small side chain and may require the presence of the bulky side chains of Tyr303 and Tyr307 in the flippase domain of RtMprF for its binding through van der Waals interactions.
The LysPG-binding site in Cavity C is involved in regulation of LysPG synthesis and translocation. What is the functional role of the LysPG-binding site in Cavity C of RtMprF protein? Do the mutations of LysPG-binding sites affect the overall level of LysPG production and the flippase function under in vivo condition? To answer the questions, we have analyzed the overall LysPG levels of E. coli cells expressing D234A, Y303A, R304A, and Y307A mutants of RtMprF and compared them to the cells expressing wild-type protein. Remarkably, the cells expressing the mutant A cross-sectional view of the electrostatic potential surface model is shown. Blue, positive potential; white, neutral; red, negative potential. The LysPG1 molecule (shown as stick model in yellow) has its head group buried deep in Cavity C. In contrast, the LysPG2 molecule has its head group positioned outside, while two long fatty-acyl chains are buried deep in Cavity P. The cryo-EM densities of the LysPG molecules in Cavities C and P are shown as green meshes (contoured at 3.0 σ level). b A zoom-in-view of LysPG1 molecule in Cavity C showing its fatty-acyl chains extending into membrane region through the lateral portal. c The contribution of Arg304-Glu280 ionic pair as the barrier between Cavity C and Cavity P. Arg304, Glu280, Ala274, and Gly275 are shown as stick models. d Interactions between the head group of LysPG1 and adjacent amino acid residues. e Analysis of LysPG co-purified with wild-type RtMprF and mutants through the TLC experiment. The same amount of WT or mutant protein was used for extraction of lipid samples for TLC. The plate was stained by ninhydrin (an amino group-specific dye). PE phosphatidylethanolamine. The lipid spots are identified according to the standard samples of LysPG, PE, and other phospholipids shown in Supplementary Fig. 9d. f Quantification of the relative amount of LysPG co-purified with four RtMprF mutants in comparison with the wild type (WT). The error bars indicate the standard errors of the mean values (n = 3). p = 0.000388 between WT and D234A, ***; p = 0.2153 between WT and Y303A, ns not significant; p = 0.000105 between WT and R304A, ***; p = 0.1283 between WT and Y307A, ns (twosided unpaired t-test between WT and various mutants). The experiment was repeated independently twice with similar results. g Interactions of LysPG2 with nearby amino acid residues. The residues in van der Waals contacts or hydrophobic interactions with LysPG2 are shown as silver stick models. The images of a-d and g represent the structure of RtMprF(GDN)-nanodics at 2.96 Å with two internal LysPG molecules. h Superposition of the structures of RtMprF at two different states. Color codes: blue, RtMprF(DDM)-nanodiscs; red, RtMprF(GDN)-nanodiscs. The arrows indicate the putative motion of TM12 and TM14 when the protein switches from one state to the other. In c, d, and g, the numbers labeled nearby the green dashes indicate the distances (Å) between two adjacent groups. proteins have higher level of LysPG (per unit protein) than those with the wild-type protein (Fig. 4d, e), while the protein expression levels are slightly variable among the wild type and mutants (Fig. 4e). Among the four mutants, the cells expressing R304A mutant exhibit the highest level of total LysPG, while those with Y307A mutant have lower level of total LysPG than the other three mutants but are still 1.5 times of those with wild-type protein. Similarly, the levels of LysPG on the outer leaflet of membrane (accessible by the fluorescamine dye) appear to be also higher for the mutants than for the wild type (Fig. 4f, g). Fluorescamine is a membrane-impermeable dye mainly reacting with the LysPG exposed on cell surface (on the outer leaflet) and was previously used for characterizing the in vivo flippase function of SaMprF in S. aureus 9 . To further analyze the functional role of Glu280 in RtMprF, three point mutants, namely E280A, E280K, and E280Q, were generated. While the protein expression level of E280A mutant is too low to be detected in western blot, the E280K and E280Q mutants can be expressed, but at much lower level than the wild type ( Fig. 4i). Nevertheless, the E280K and E280Q mutants exhibit more than 30 times increase in the total LysPG level per unit protein relative to the wild type, despite that the two mutant proteins express at much lower levels than the wild type (Fig. 4h, i). Similarly, the amounts of fluorescamine-labeled LysPG per unit protein also increase significantly in the two mutants (Fig. 4j, k).
Moreover, the proportion of LysPG in total phospholipids extracted from cells expressing the six mutants mentioned above are about 1.5-5 times of the LysPG/total phospholipids ratio in cells expressing wild-type RtMprF ( Supplementary Fig. 9d, e). Therefore, the mutations on the LysPG-binding sites in Cavity C or the Glu280-Arg304 ionic pair of RtMprF have dramatic stimulating effect on its synthase function, presumably by removing the potential inhibitory effect imposed by LysPG bound to Cavity C. Besides, the mutations may also enhance its flippase function leading to increased level of LysPG on the outer leaflet of the membrane.
RtMprF exhibits variable conformations with the synthase domain rearranged relative to the flippase domain. MprFs utilize aminoacyl-tRNA from cytosol and PG from membrane as the substrates for aminoacyl phospholipid synthesis 7 . The potential binding site of lysyl-tRNA in the synthase domain of RtMprF is located at the cleft between GNAT folds 1 and 2 ( Supplementary Fig. 10a, b), according to the previous studies on an alanyl transferase FemX involved in peptidoglycan biosynthesis (in complex with an aminoacyl-tRNA analog) and BlMprF in complex with L-lysine amide (LYN) 19,36 . Apparently, the active site of the synthase domain is separated from the LysPG-binding sites in the flippase domain by a large distance at 47.0 (within the same subunit) or 71.4 Å (between adjacent subunits). Such a large gap is apparently unfavorable for direct translocation of lipid molecules (PG or LysPG) between the two domains. To overcome the problem, RtMprF may go through large conformational changes and rearrange the synthase domain to a position close to membrane surface in order to acquire PG from the membrane and release LysPG back to the membrane.
In addition to the major class (class A) of symmetrical RtMprF dimer, there are minor classes (classes B-D) exhibiting asymmetric conformations (Supplementary Fig. 10c). Within the asymmetric RtMprF dimers, one monomer rearranges the synthase domain more dramatically than the adjacent one. When the flippase domains of classes B-D are superposed with that of class-A symmetric dimer, it becomes apparent that the synthase domains adopt distinct positions in the minor classes (Supplementary Fig. 10d-f). For class A, the long axis of the synthase domain forms 115.2°angle with the membrane plane. In comparison, the corresponding ones of classes B-D form much smaller angles (12.8-32.4°) with the membrane plane (Supplementary Fig. 10c). In these cases, the synthase domains may rotate from the upright position to nearly horizontal positions and detach from the cytoplasmic surfaces of the flippase domains. Consequently, the restraints restricting its movement are greatly reduced so that the synthase domain can readily move across long distance through Brownian motion. The inter-domain flexibility of RtMprF, as reflected by the variable positions of the synthase domain, may allow it to change conformation so as to acquire hydrophobic substrate (PG) from the membrane and to release LysPG back to the membrane for translocation by the flippase domain.
Discussion
Single-nucleotide polymorphisms of SaMprF are frequently found to be associated with daptomycin resistance of S. aureus, Fig. 4 Substrate selectivity and functional role of the LysPG-binding site in Cavity C of RtMprF. a Cartoon diagram of the RtPaMprF chimera constructed by fusing the flippase domain from RtMprF (residues 1-541, violet) and the synthase domain from PaMprF (residues 554-881, orange). b Analysis of the lipid samples extracted from the cell expressing the RtPaMprF chimera and purified protein samples through the TLC method. The lipid extracted from cells carrying empty vector (pET21b) and pET21b-RtMprF construct is loaded as controls. The lipid spots on the TLC plate were stained by ninhydrin. c Relative amount of AlaPG co-purified with RtPaMprF protein and various mutants. p = 0.000093 between WT and D234A, ****; p = 0.001696 between WT and Y303A, **; p = 0.0001 between WT and R304A, ****; p = 0.00019 between WT and Y307A, ***. whereas the action mechanism of SaMprF in causing daptomycin resistance remains largely unclear 11,13 . A recent work reported that T345A single-nucleotide polymorphism of SaMprF can cause daptomycin resistance in S. aureus reproducibly, while the mutation did not affect LysPG synthesis or translocation 14 . To analyze the locations of DAP-R-related mutations on SaMprF 14,37 , a structural model of SaMprF is generated through the comparative protein structure modeling method 38 . The mutation sites are mainly located in the flippase domain, while only three of them are in the synthase domain ( Supplementary Fig. 11a, b). Among the 24 sites in the flippase domain of SaMprF, eighteen are located in Subdomain 2 and the remaining six are in Subdomain 1. Interestingly, TM9 in Subdomain 2 contains a region of high-frequency DAP-R mutations including two gain-of-function mutations (T345A and V351E) responsible for significantly enhanced DAP-R phenotype 14 ( Supplementary Fig. 11c). It was proposed that the mutations may modulate specific interactions of SaMprF with the antibiotic molecule instead of affecting LysPG synthesis or translocation 14 .
Computational docking analysis suggests that daptomycin molecule can fit well in Cavity P of SaMprF ( Supplementary Fig. 11d). The daptomycin molecule inserts its N-terminal decanoyl fatty-acyl group and tryptophan side chain in two deep pockets of Cavity P (Supplementary Fig. 11e). Several DAP-R mutations are located near the entrance of Cavity P (Supplementary Fig. 11f) or on the wall of its internal pockets (Supplementary Fig. 11g). The mutations may influence the interactions between daptomycin and SaMprF by altering the overall shape and surface property of Cavity P. While the earlier work 14 and our hypothetical model both suggest that daptomycin may interact with SaMprF, more biochemical and other evidences are needed to verify their interaction and to address the question about whether SaMprF can directly translocate daptomycin or not. The structural model serves as a preliminary framework to guide further experiments in order to reveal the mechanism of daptomycin resistance caused by SaMprF variants. Moreover, the model might also be useful for discovery and development of SaMprF inhibitors as anti-infective agents. In practice, the SaMprF model could potentially be used in the structure-based virtual screening of antibiotic drugs for treating MRSA or VRSA infections.
As shown in the structure and crosslinking experiment (Fig. 1), RtMprF protein forms homodimer and larger oligomers, consistent with previous biochemical study on SaMprF 26 . There might be potential functional advantages for the RtMprF proteins to form homo-oligomers (dimer or tetramer). First, the dimerization interface of RtMprF homodimer contains four PG molecules, which may also serve as substrate for the synthase domain. Upon dimerization, the lipid substrate (PG) of RtMprF may be enriched in the milieu to promote synthesis of aaPG. Second, the two monomers may help each other in translocating aaPG if it is released to the membrane after being produced by the synthase domain. Dimerization or tetramerization of MprF may help to concentrate multiple flippase domains in a local region enriched with LysPG so that the translocation of LysPG can occur more efficiently as free diffusion of LysPG in the membrane is a relatively slow process and translocation of lipid molecule across the membrane is also a rate-limiting process 39 .
In bacterial cells, MprFs carry out dual functions by synthesizing aaPG and translocating it to the outer leaflet of bacterial membrane 9 . Although the synthase domain of MprF alone can catalyze synthesis of LysPG by itself under in vitro conditions, efficient production of LysPG in vivo requires the presence of both domains, and it was suggested that the transmembrane domain of MprF may help to position the catalytic domain in the cytosol during aaPG synthesis 8 . The cryo-EM structure of RtMprF in complex with LysPG reveals the close relationship between the two domains and apparently, the membraneembedded flippase domain does serve to position the catalytic domain in the cytosol by forming specific interactions with it (Fig. 2d). As shown in Fig. 2e, f, the catalytic domain interacts with amino acid residues from the cytoplasmic surfaces of Subdomains 2 and 1 mainly through salt bridges and hydrogen bonds. Such specific interactions enable the catalytic domain to approach membrane surface and acquire the lipid substrate (PG) from the membrane more efficiently than the catalytic domain expressed alone (not attached to the Subdomain 2). Although the in vitro activity assay reveals that the purified catalytic domain of MprF is active in the absence of transmembrane domain 8,19 , the function of isolated catalytic domain of MprF relies on delivering of lipid substrate through detergent (Triton X-100) micelles in the solution, a condition which does not exist in the in vivo environments. Therefore, the catalytic domain expressed alone in E. coli is not functional in producing of aaPG 8 , mainly because it is inefficient in acquiring the lipid substrate from the membrane through random diffusion process.
The characteristic hook-shaped LysPG1 and its binding site in Cavity C observed in the structures shed light on the mechanism of lipid selectivity in the flippase domain of MprF. The aminoacyl head groups of LysPG or AlaPG are specifically recognized by amino acid residues on the wall of Cavity C as observed in the RtMprF structure (Fig. 3d). The electronegative surface within Cavity C (Fig. 3b) is favorable for the binding of positively charged aminoacyl group of LysPG or AlaPG, but unfavorable for the anionic phospholipids (such as PG and cardiolipin (CL)) to bind. Moreover, the head groups of phosphatidylethanolamine (PE), phosphatidylserine (PS), and PG are much smaller in size and lacks the lysyl or alanyl group when compared with LysPG and AlaPG. Even if PE, PS, or PG from the membrane can enter Cavity C in MprF, they cannot form stable interactions with nearby residues. As for CL, it contains four fatty-acyl chains and is too big to fit in Cavity C of MprF. Therefore, the unique phospholipid-binding site in Cavity C of MprF likely selects LysPG or AlaPG against the other phospholipids through compatible shape, size, and surface charge property.
Near the interface between the two domains of RtMprF, LysPG is found within the inner cavity (Cavity C) of the flippase domain instead of binding to the active site of the synthase domain. Therefore, the RtMprF(DDM)-nanodisc structure may represent an intermediate state of the RtMprF protein after LysPG is synthesized and before it is translocated to the outer leaflet. How does MprF coordinate the two domains to facilitate synthesis of LysPG at the interface between cytosol and membrane surface (on the intracellular side) under physiological conditions? After LysPG is synthesized, how is it translocated by MprF from the inner leaflet to the outer leaflet of the membrane? To address the questions, here we propose a mechanistic model to account for the LysPG synthesis and translocation process mediated by MprF basing on the structural and biochemical analysis results of RtMprF (Fig. 5). For MprF to acquire the substrate PG molecules from membrane, the synthase domain may need to approach the cytoplasmic surface of the membrane in a nearly horizontal position (State 1). After LysPG is synthesized and tRNA is released, the synthase domain returns to the upright position (State 2) and LysPG in the inner leaflet of membrane diffuses into the binding site in Cavity C (State 3, as observed in RtMprF (DDM)-nanodisc structure with one LysPG bound). Diffusion of LysPG into Cavity C and its translocation is likely driven by the electrostatic repulsive force from the positively charged cytoplasmic surface near the entrance of Cavity C. Subsequently putative channel connecting the two cavities (State 4). A second molecule of LysPG may further enter Cavity C, so that Cavities C and P are both loaded with LysPG as observed in the RtMprF (GDN)-nanodisc structure (State 5). On the outer leaflet side, the LysPG molecule in Cavity P might be attracted by the negatively charged periplasmic surface around the lateral portal so that it can be released to the outer leaflet of the membrane. The LysPG translocation process may cycle among States 3-5 till the LysPG pool in the inner leaflet side is depleted. Afterwards, MprF returns to the initial apo-state (State 0) for the next round of LysPG synthesis-translocation process.
In the RtMprF(GDN)-nanodisc structure, there is an additional lipid-like density nearby LysPG1 (Supplementary Fig. 12). The density is sandwiched by LysPG1 and the β-hairpin loop between TM5 and TM6. As the molecule is located at the entrance of Cavity C, it may serve as a potential site for LysPG loading after it is synthesized and released from the synthase domain. Alternatively, in case that a PG molecule is captured at this site, it may serve as the substrate for the synthase domain if it could diffuse laterally to the active site of the synthase domain at a horizontal position close to the membrane surface. The third possibility is that it may belong to the bulk lipid (such as PE or elses) from the membrane, serving to stabilize LysPG1 in Cavity C and prevent unloading of LysPG1 on the cytoplasmic side by blocking the cytoplasmic portal of Cavity C.
While the head group of LysPG1 is blocked from entering Cavity P by the putative gate around the Arg304-Glu280 ionic pair in RtMprF (Figs. 3c and 5, State 3), the remaining parts of LysPG1 (including the glycerol-3-phosphate backbone, two fattyacyl chains and the head-group glycerol group) are prevented from entering Cavity P by the steric hindrance effects of amino acid residues from TM7a, TM7b, TM3a, TM3b, as well as the TM7a-TM7b and TM3a-TM3b loops. Among them, His271 on TM7a-7b loop region serves to block the 1-acyl chain of LysPG from entering the cleft between the TM7a-TM7b and TM3a-TM3b loops. Such a narrow cleft is closed at a contact point where Pro273 and Ala274 (in the TM7a-TM7b loop region) form van der Waals interaction and hydrogen bond with Phe121 and Thr118 in the TM3a-TM3b loop region, respectively. Pro273 of RtMprF is highly conserved in other MprF homologs ( Supplementary Fig. 7) and it corresponds to Pro247 in SaMprF. The P247A mutant of SaMprF not only affects Lys-PG production, but also increases daptomycin susceptibility of S. aureus, suggesting a decreased flippase activity of the mutant compared to the wild type 26 . For the PG group of LysPG1 to pass through the cleft and enter Cavity P, either TM7a-TM7b motif or TM3a-TM3b motif needs to move away from the central region so that a larger portal can form between them so as to facilitate translocation of LysPG1 into Cavity P.
Expression of a construct with Subdomain 1 (residues 1-320) of SaMprF separately can support LysPG translocation to some degree, when the synthase domain (328-840, including Subdomain 2 and the C-terminal catalytic domain) is co-expressed along with the Subdomain 1 in the same cell 26 . Through a bacterial two-hybrid assay, Ernst et al. discovered that Subdomain 1 interacts with Subdomain 2 (also termed Syn-h in the literature) and the C-terminal catalytic domain (Syn-cyt) 26 , and these interactions have been unraveled in detail through our structural analysis on RtMprF (Fig. 2d, e). It was also demonstrated that Subdomain 1 is essential for the flippase activity of SaMprF and the optimal flippase function may rely on the interaction of Subdomain 1 with the synthase domain (Syn-h-cyt) 14,26 . As shown in the structure of RtMprF (Fig. 3), Subdomain 1 harbors most of the LysPG-binding site in Cavity C and part of Cavity P. As Cavity P may serve to host LysPG temporarily and facilitate release of LysPG to the outer leaflet as proposed in our mechanistic model, the integrity of its local structure is probably essential for the optimal flippase activity of MprF. Consistently, when Subdomain 1 was fused with two extra transmembrane helices from Subdomain 2 (or Syn-h) and co-expressed along with the synthase domain, it exhibited higher flippase activity than the one without the extra transmembrane helices 26 .
Does the aaPG translocation process mediated by MprF requires energy? Unlike the P-type ATPase lipid flippase (Drs2p-Cdc50p) or ATP-Binding Cassette transporter (MsbA), MprF does not contain any ATPase or ATP-binding cassette domain ( Supplementary Fig. 6). It is not a primary active transporter and does not utilize the energy of ATP to transport aaPG. Recently, a bioinformatics study suggests that MprF may belong to the Major Facilitator Superfamily (MFS) basing on the observation that the transmembrane region of MprF contains a MFS-like domain 40 . The MFS members mediate facilitated diffusion or cation (H + or Na + )-dependent transport and solute:solute antiport of substrate molecules 41 . For the proton-dependent oligopeptide transporters of the MFS superfamily, they share a conserved ExxER/K motif in the transmembrane domain and two pair of salt bridges serving to stabilize the transporter at different conformational states 42 . In RtMprF, only one pair of salt bridge is found in the membraneembedded region, namely the Glu280-Arg304 pair. In the previous work by Ernst et al. 26 , an alanine mutation was introduced to the Asp254 site of SaMprF (corresponding to Glu280 in RtMprF), and the S. aureus strains with the D254A mutant exhibited significantly increased daptomycin susceptibility (suggesting a reduced flippase activity of SaMprF) compared to the wild type. Moreover, the R279A mutation (corresponding to Arg304 in RtMprF) also leads to similar loss-of-function effect. For RtMprF, while the E280A mutant does not express in E. coli (presumably due to high toxicity of the target protein), the E280K and E280Q mutants (Fig. 4h-k) as well as the R304A mutant ( Fig. 4d-g) can be expressed in E. coli and exhibit enhanced level of both synthase and flippase activities relative to the wild type. The distinct functional phenotypes between R304A of RtMprF and R279A of SaMprF may be due to the differences in protein expression level and host species.
As the Glu280-Arg304 pair is located at the gate between the two cavities, protonation of Glu280 may lead to weakening of Glu280-Arg304 interaction and trigger further conformational changes nearby to open the gate and allow the substrate (LysPG) to go through. In this case, the proton gradient across the membrane might be exploited by MprF to stimulate or drive the translocation of LysPG molecule.
As LysPG is initially synthesized on the cytoplasmic side and inserted to the inner leaflet of the membrane, the concentration of LysPG should be higher in the inner leaflet side than [LysPG] in the outer leaflet. Moreover, there is a LysPG hydrolase found in some bacterial periplasma serving to hydrolyze LysPG and lower its concentration in the outer leaflet constantly 43,44 . Therefore, there should be a concentration gradient of LysPG between the inner leaflet and the outer leaflet of the membrane at the initial stage. In this case, MprF likely serves as a uniporter to facilitate diffusion of LysPG down its concentration gradient and the process might not need to consume energy, but it could probably be stimulated by protonation of Glu280. Alternatively, it may function like a lipid scramblase, such as TMEM16, which harvest the energy of the phospholipid gradient and may utilize a hydrophilic cavity on the surface of membrane-embedded domain for transport of lipids 45 . For future works, analyzing the flippase activity of MprF through the liposome-based transport assay under asymmetric (and symmetric) pH conditions across the membrane will be helpful in addressing the question about whether the flippase activity of MprF is regulated by proton/ion gradient or not.
Methods
Protein expression and purification. The mprF gene from R. tropici (RtMprF) was cloned into the pET21b vector and the recombinant plasmid was used to transform E. coli C41(DE3) cells. The cells were cultured in Terrific Broth media containing 50-μg mL −1 ampicillin, and protein expression was induced with 0.5-mM isopropyl β-D-thiogalactopyranoside (IPTG) overnight at 16°C after OD 600 reached 1.0. For purification of RtMprF, cells were harvested through centrifugation, resuspended in a buffer containing 25-mM Tris-HCl (pH 8.0) and 300-mM NaCl and then incubated with 1% lysozyme at 4°C for 30 min. The suspension was sonicated and centrifuged at 11,000 g (JL-25.50 rotor, Beckman) for 30 min. The supernatant was ultracentrifuged at 158,000 g (Type 45 Ti rotor, Beckman) to collect membrane pellets. The pellets were resuspended in a buffer with 25-mM Tris-HCl (pH 8.0), 300-mM NaCl, and 1.5% β-DDM (Anatrace) and incubated at 4°C for 30 min. After centrifugation at 40,000 g for 30 min, the supernatant was loaded onto a Ni-NTA column and the protein was purified at 4°C by a step-wise elution method with three different buffers containing 25-mM Tris-HCl (pH 7.5), 300-mM NaCl, 20, 50, or 300-mM imidazole, and 0.05% β-DDM for MprF(DDM) sample or 0.02% GDN for MprF(GDN) sample. The fraction eluted in the buffer with 300-mM imidazole was pooled and concentrated to 10 mg mL −1 in 100-kDa molecular weight cutoff (MWCO) concentrator (Millipore) for further experiments.
To construct the RtPaMprF chimera, the cDNA sequence of RtMprF synthase domain (residues 542-869) on the pET21b-RtMprF vector was replaced by the coding sequence of PaMprF synthase domain (residues 554-881, AlaPG synthase) through two PCR reactions. In detail, the region encoding PaMprF synthase domain was amplified through the first PCR reaction by using a pair of primers with sequences of 5′-CCGGCAACGAAGCGGCCGGAGCCTGTCAGCGCGGAA GAGCTG-3′ and 5′-GTGGTGGTGCTCGAGTGCGGCCGCAAGCTTGCGTTTC ACCAA-3′. The DNA product contains 30-bp nucleotides in the 5′-and 3′terminal regions matching with the regions upstream and downstream of the coding sequence of RtMprF synthase domain on the vector pET21b, respectively. The replacement was further accomplished through a second PCR reaction by using the DNA product of the first round PCR (with the cDNA encoding PaMprF synthase domain) as primers and the pET21b-RtMprF vector as template. The second PCR reaction adopts the protocol of the Quick Change method. After digestion of the template with DpnI, the product of the second round of PCR was used for transformation of DH5α E. coli competent cells for plasmid amplification. After transformation and antibiotics resistance screening, the clone with target plasmid was selected and verified through DNA sequencing. The protocols for RtPaMprF protein expression and purification were the same as the one used for wild-type RtMprF.
The DNA encoding the synthase domain of RtMprF (RtMprF-SD, residues 542-862) was cloned into the pET21b vector and transformed into the E. coli BL21 (DE3) cells for protein expression. For purification of the recombinant RtMprF-SD protein expressed in BL21(DE3) cells, the cell pellets harvested through centrifugation were resuspended in a buffer with 25-mM Tris-HCl (pH 8.0) and 700-mM NaCl. After the cells were lysed through sonication, the suspension was centrifuged at 40,000 g (JL-25.50 rotor, Beckman) for 30 min. The protein was purified by using the Ni-NTA column through the step-wise elution protocol with buffers containing 25-mM Tris-HCl (pH 7.5), 700-mM NaCl and 20, 50, or 300-mM imidazole. The fraction eluted in the buffer with 300-mM imidazole was pooled and concentrated to 15 mg mL −1 in 30-kDa MWCO concentrator. The protein was further purified through gel filtration on a Superdex 200 Increase 10/ 300 GL column (GE Healthcare) in a buffer with 25-mM Tris-HCl (pH 7.5) and 700-mM NaCl. The major peak fractions were collected and concentrated to 15 mg mL −1 for crystallization. The Se-Met protein was purified through the same procedure as the one used for native protein purification except that 2-mM DTT and 0.2-mM EDTA are added to the buffers. The detailed information about cell strains, plasmids, and primers used for cloning and protein expression is included in Supplementary Table 2.
Crystallization of RtMprF-SD and structure determination. The protein crystals of the RtMprF-SD were obtained through the hanging-drop vapor diffusion method at 16°C with a well solution containing 0.1-M NaAc (pH 6.0) and 16% PEG3350. The Se-Met derivative crystals were grown with the well solution containing 0.1-M NaAc (pH 6.0) and 10% PEG3350. Both the native and the derivative crystals were cryo-protected in a solution with 0.1-M NaAc (pH 6.0), 10% PEG3350, and 20% glycerol before being flash-frozen in liquid nitrogen. The native and Se-Met derivative datasets were collected at 0.96000 and 0.97909-Å wavelength, respectively, on BL1A and NW12A beamlines in the Photon Factory (Tsukuba, Japan) using the UGUI v2 software, and were processed by using the HKL2000 program 46 . The phases were solved through the single-wavelength anomalous method using Phenix AutoSol program 47 . Model building and structure refinement were accomplished by using Coot 48 Reconstitution of RtMprF in nanodiscs. The purified RtMprF protein was incorporated into lipid nanodiscs with a molar ratio of RtMprF protein:membranescaffold-protein 1E3D1 (MSP1E3D1):POPG at 1:2:100. The mixture was incubated at 4°C for 1 h on a sample rotator. Reconstitution was initiated by removing detergent through addition of Bio-beads (Bio-Rad) to the sample and incubation at 4°C overnight with constant rotation. In the next day, the old Bio-beads were replaced by fresh Bio-beads and the sample was further incubated for 2 h. Subsequently, Bio-beads were removed from the sample and the nanodisc reconstitution mixture was incubated with 0.25-mL Ni-NTA resin for 1 h at 4°C to enrich nanodiscs with the target protein and remove the empty ones. The resin was washed with five column volumes of wash buffer (20-mM Tris-HCl pH 7.5, 300-mM NaCl, and 20-mM imidazole) followed by four column volumes of elution buffer (20-mM Tris-HCl pH 7.5, 300-mM NaCl, and 300-mM imidazole). The eluted RtMprF protein in nanodiscs was further purified by loading the sample onto a Superdex 200 increase 10/300 GL size-exclusion column (GE Healthcare Life Sciences) and eluting it in the gel-filtration buffer with 20-mM Tris-HCl (pH 7.5) and 300-mM NaCl.
The grids containing RtMprF nanodisc samples were imaged with a 200-kV Talos Arctica microscope equipped with a Gatan K2 Summit direct detector camera using the SerialEM acquisition software (ver. 3.6.2). An energy filter with slit width of 20 eV was used during data collection at a nominal magnification of ×130,000, resulting in a super-resolution pixel size of 0.5 Å (physical pixel size of 1.0 Å). Movies (32 frames per movie file) were captured with a defocus value at a range of −1.5 to −2.0 μm in the super-resolution mode using a dose rate of~9.6 e − pixel −1 s −1 over 5.2 s yielding a cumulative dose of~50 e − Å −2 .
Image processing. For RtMprF(DDM)-nanodiscs, a total of 2921 cryo-EM movies were aligned with dose-weighting using MotionCor2 program 49 with 5 by 5 patches and a B-factor of 250. Micrograph contrast transfer function (CTF) estimations were performed by using CTFFIND 4.1.15 program 50 . Particle picking, 2D classification, and ab initio 3D reference generation were performed in cryoSPARC v2.9 program 51 . After manual inspection of the micrographs, 2549 were selected and~100 particles were picked manually from the micrograph and sorted into 2D classes. The best classes were selected and used as references for subsequent autopicking procedure. After the process, 887,196 particles were auto-picked and extracted using a box size of 200 pixels. 2D classification was performed to remove ice spots, contaminants, and aggregates, yielding 529,371 particles. The particles were exported from cryoSPARC v2.9 using the UCSF pyem v0.5 script (https://doi. org/10.5281/zenodo.3576630) and re-extracted in RELION-3 program 52 from the original micrographs for 3D classification. Consequently, 247,321 particles were selected for further refinement. Per-particle CTF refinement, with estimation of the beam tilt and Bayesian polishing, was performed in RELION-3. Particles with resolution lower than 4-Å resolution were discarded, and the refinement with C2 symmetry imposed resulted in a 3.7-Å cryo-EM density map from a major class of 160,417 particles. For the minor classes of asymmetric shapes, four classes with tilted synthase domains are subject to a second round of 3D classification and the three classes with particle numbers over 10,000 are chosen for auto-refine with C1 symmetry individually. To improve the local map quality around LysPG in Cavity C, a mask covering Subdomain 1 was generated by Chimera and applied for local refinement in Relion 3. The local refinement procedure with the mask and solvent-flattened Fourier shell correlations yielded a reconstruction for Subdomain 1 at 3.4 Å.
For RtMprF(GDN)-nanodiscs, a total of 2579 cryo-EM movies were aligned with dose-weighting using MotionCor2 program with 5 by 5 patches and a B-factor of 250. Micrograph CTF estimations were performed by using CTFFIND 4.1.15 program. Particle picking, 2D classification, and ab initio 3D reference generation were performed in cryoSPARC v2.9 program. After manual inspection of the micrographs, 2317 were selected and~200 particles were picked manually from the micrograph and sorted into 2D classes. The best classes were selected and used as references for subsequent autopicking procedure. After the process, 1232,621 particles were auto-picked and extracted using a box size of 200 pixels. 2D classification was performed to remove ice spots, contaminants, and aggregates, yielding 343,117 particles. The particles were exported from cryoSPARC v2.9 using the UCSF pyem v0.5 script and re-extracted in RELION-3 program from the original micrographs for 3D classification. Consequently, 276,824 particles were selected, removed duplicates, and re-extracted with a box size at 320 instead of 200 pixels for further refinement. Per-particle CTF refinement, with estimation of the beam tilt and Bayesian polishing, was performed in RELION-3. A tight mask for TM domain was generated by Chimera and RELION-3, followed by 3D classification by skipping alignment. Finally, 144,479 particles were selected and the refinement with C2 symmetry imposed resulted in a 2.96-Å cryo-EM density map.
Model building and refinement. The structural model of the flippase domain of RtMprF was built manually in Coot program 48 , guided mainly by the cryo-EM map. The secondary structure prediction from PSIPRED Server 53 (ver. 4.0) and the transmembrane helix prediction from TMHMM Server 54 (ver. 2.0) were used as references during model building. While most of the transmembrane helices are identified in the map and the models are registered with amino acid sequences, the density for TM14 is too weak for MprF(DDM)-nanodiscs and it is tentatively interpreted with a poly-alanine α-helix model. For the synthase domain, the crystal structure was docked manually into the corresponding region of the cryo-EM map of the full-length RtMprF, subject to rigid body refinement and local adjustment in Coot, and then merged with the flippase domain. The structural model of RtMprF was refined against the cryo-EM map by using phenix.real_space_refine program followed by manual adjustment in Coot. The program refinement and manual adjustment were carried out iteratively till the model-to-map fitting is optimal and the model geometric parameters are within reasonable range (Supplementary Table 1). The final model covers 793 or 820 out of 869 amino acid residues of the full-length RtMprF protein for MprF(DDM)-nanodiscs or MprF(GDN)-nanodiscs, respectively, while several regions in the loops or near the amino-and carboxyltermini Crosslinking of RtMprF. The oligomeric state of the RtMprF protein was analyzed through chemical crosslinking experiment by using the membrane preparation from the E. coli cells expressing the full-length RtMprF. The cells were resuspended in a buffer consisting of 20-mM HEPES (pH 7.5) and 300-mM NaCl (buffer A). After the cells were lysed by passing through a high-pressure homogenizer (ATS Engineering), the cell debris was removed through low-speed centrifugation at 11,000 g for 15 min and the membrane fraction was further collected through ultracentrifugation at 100,000 g for 30 min at 4°C. The membrane pellets were resuspended in buffer A and sonicated with 1 s on, 5 s off for 2 min to homogenize the sample. The membrane suspension was aliquoted and then treated with disuccinimidyl suberate (DSS) at 0-5-mM final concentration for 1 h at 30°C with constant mixing on a shaker. The reactions were quenched by adding 100-mM Tris-HCl (pH 7.5). The crosslinked samples were solubilized by adding 1% β-DDM for 1 h in the shaker. Subsequently, the samples were centrifugated at 18,000 g for 10 min and the supernatant was mixed with 5 × SDS-PAGE loading buffer, and then loaded on the SDS-PAGE gel for electrophoresis. The protein bands on the gel were transferred to polyvinylidene difluoride membrane and then detected through western blot by using the Anti-His Mouse monoclonal antibody (1:80,000 dilution) and Goat anti-Mouse IgG (H + L)-HRP (1:20,000 dilution). After being developed with the western lightning Ultra ECL horseradish peroxidase substrate (Perkin-Elmer), the blots were imaged on a chemiluminescence CCD system (ChemiScope 3500 mini imager, Clinx Science Instruments).
TLC and mass spectrometry. The lipids from E. coli membrane expressing recombinant RtMprF/RtPaMprF or from the purified RtMprF protein samples were extracted according to Bligh and Dyer procedure 55 . In detail, 12-mL chloroform:methanol (1:2, v:v) mixture was added to 3.2-mL sample and mixed well through vortex. Subsequently, 4-mL chloroform was added to the sample and vortexed again to mix. Finally, 4-mL water was added to the sample and vortexed well. The mixture was centrifuged at 100 g for 5 min to get a two-phase system with aqueous phase at the top and organic phase at the bottom. The bottom phase was washed twice with an aqueous upper phase solution (freshly made by mixing chloroform, methanol, and water at 2:2:1.8 (v:v:v) ratio and centrifuging the mixture). Finally, the bottom phase was recovered, dried under vacuum, and dissolved in 100-μL chloroform. The lipid samples were separated on the HPTLC silica gel 60 F254 plates (Merck) in a mobile phase of chloroform:methanol:water mixture (65:25:4). Lipid spots were visualized through staining with iodine or ninhydrin. For separation of AlaPG and PE, a mobile phase of chloroform: methanol:acetic acid:water (80:12:15:4) mixture was used.
The liquid chromatography-mass spectrometry (LC-MS)/MS analysis was performed by using a Thermo Scientific Dionex Ultimate 3000 LC system coupled to a TripleTOF 5600 quadrupole time-of-flight tandem mass spectrometer. An ACQUITY UPLC C18 reversed-phase column (1.7 μm, 2.1 × 100 mm, Waters) was used in LC. Mobile phase A consisted of methanol/acetonitrile/aqueous 15-mM ammonium acetate (1:1:1, vol/vol/vol), and mobile phase B consisted of 80% 2propanol and 20% methanol containing 5-mM ammonium acetate. The LC process was operated at a flow rate of 250 μL min −1 with a linear gradient as follows: 10% B was held constantly for 1 min and then increased linearly to 60% B over 5 min, further to 100% B over 12 min and finally held at 100% B for 2 min. The conditions for MS were set with the following parameters: electrospray voltages, +5500 V (positive ion mode) and −4400 V (negative ion mode); declustering potential, 100 V; GS1 and GS2, 60 psi. The collision-induced dissociation tandem mass spectra were obtained with collision energy of +35 V in the positive ion mode or −35 V in the negative ion mode. Nitrogen was used as the collision gas.
For quantification of lipid:protein molar ratio, the lipids were extracted from 47.5 nmol of purified RtMprF(DDM) protein according to Bligh and Dyer procedure and were dissolved in 100-μL chloroform. Two microliter of the lipid solution were applied on the HPTLC silica gel 60 F254 plates (Merck) and separated in the solvent of chloroform:methanol:water mixture (65:25:4). As the standard samples, 0.25, 0.5, 1.0, 2.0, 4.0-nmol LysPG (Avanti) were also loaded on the same plate. For quantification of LysPG:protein stoichiometry in RtMprF (GDN), 17.8-nmol protein was used and extracted in 40-μl chloroform. The standard samples of 1.0, 2.0, 4.0, and 8.0-nmol LysPG were loaded on the plate as references. Lipid spots were visualized through the iodine staining procedure. It is noteworthy that the data obtained through iodine staining generates a linear standard curve better than those stained with ninhydrin. For quantifying the relative amount of LysPG/AlaPG co-purified with RtMprF/RtPaMprF mutants, lipids extracted from same amount of purified RtMprF/RtPaMprF mutant protein were separated on HPTLC silica gel 60 F254 plates (Merck) and stained by ninhydrin. The protein concentration was measured through the bicinchoninic acid method (TransGen Biotech, Beijing). The mobile phase chloroform:methanol: water mixture (65:25:4) and chloroform:methanol:acetic acid:water (80:12:15:4) were utilized to separate lipids extracted from RtMprF and RtPaMprF mutant protein, respectively. The image of the iodine-or ninhydrin-stained TLC plate was processed by Image J and analyzed by the GraphPad program. For the data presented in Figs. 3e, f, 4c and Supplementary Fig. 9b, f, three aliquots of the sample of the same type are loaded on the TLC plates and the measurements of the three parallel spots are used for statistical analysis.
Relative quantification of total LysPG and fluorescamine-labeled LysPG. The cells were cultured in Terrific Broth media containing 50-μg mL −1 ampicillin, and protein expression was induced with 0.5-mM IPTG for 2 h at 37°C after OD 600 reached 1.0. After the cells were harvested through centrifugation, the pellets were washed and suspended in a solution containing 25-mM HEPES-Na (pH 8.5) and 300-mM NaCl (Buffer B), and then adjusted to a concentration of 4 × 10 9 cell mL −1 .
For total lipid extraction, 5-mL cell suspension was centrifuged and resuspended in 1.2-mL Buffer B. Subsequently, 4.5-mL chloroform:methanol (1:2, v:v) mixture was added to the suspension and vortexed for 30 s. Afterwards, 1.5-mL chloroform and 1.5-mL HEPES buffer were added to the sample sequentially and vortexed for 10 s after each step. After the mixture was centrifuged at 100 g for 5 min, the bottom phase was recovered, dried under vacuum, and dissolved in 150-μL chloroform. To separate LysPG from other lipids, 4.5-μL total lipid samples were loaded on the HPTLC plate and the plate was developed in a solvent of chloroform:methanol: water (65:25:4) mixture, dried, and then stained with ninhydrin.
For detection of LysPG in the outer leaflet of the membrane, 50-μL fluorescamine (50-mM stock solution in DMSO) was added to 5-mL cell suspension and incubated at room temperature for 20 min. Afterwards, 500-μL Tris-HCl (pH = 8.0, 1 M) was added to the mixture and incubated for 5 min to stop the reaction. Finally, the cells were collected through centrifugation, washed once in 5-mL Buffer B and resuspended in 1.2-mL Buffer B. The following procedures of lipid extraction and TLC experiments were the same with the above protocols used for the total lipid extraction sample, except that the lipid spots were visualized under UV light after they were separated on HPTLC plates. Image analysis was accomplished by using Image J and GraphPad program. For the data presented in Fig. 4d-k, the aliquots of three repeats of distinct samples are loaded on the TLC plates and the measurements of the three parallel spots are used for statistical analysis. For Supplementary Fig. 9e, the data presented are mean values of three independent repeats of TLC experiments with distinct samples.
Computational modeling analysis. The model of SaMprF is constructed through the Modeller 9.23 program 38 by using the cryo-EM structure of RtMprF as the template and the amino acid sequence alignment data of the two homologs as the other input. Virtual docking of daptomycin molecule on SaMprF was carried out through the Autodock Vina program (v1.1.2) 56 by providing the homologous model of SaMprF and the structure of daptomycin downloaded from the Protein Data Bank [PDB code: 1T5M (https://www.rcsb.org/structure/1T5M)]. A cubic box with 60 × 60 × 60 grid points (in the x, y, and z dimensions) and 0.375-Å spacing was applied to define the search region on the SaMprF model during the Autodock analysis.
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
Data supporting the findings of this manuscript are available from the corresponding author upon reasonable request. A reporting summary for this Article is available as a Supplementary | 15,006.6 | 2021-01-21T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Effect of Inflation, Rupiah Exchange, Dow Jones Index, Nasdaq Index, and S & P500 Index Against Combined Stock Price Index
This research aims to determine empirical evidence of the influence of inflation, Rupiah exchange rate, Dow Jones index, Nasdaq index, A & P500 index on the Composite Stock Price Index. The research uses quantitative methods and uses classical assumption tests, multiple regression analysis, and hypothesis testing. The data used is secondary data during the period January 2019 – November 2023, resulting in a total of 59 samples from each variable using the purposive sampling method. The research results show that inflation has an effect on the Composite Stock Price Index, the Rupiah exchange rate has no effect on the Composite Stock Price Index, the Dow Jones Index has an effect on the Stock Price Index Combined, the Nasdaq Index influences the Composite Stock Price Index, the S & P500 Index influences the Composite Stock Price Index, and Inflation, the Rupiah Exchange Rate, the Dow Jones Index, the Nasdaq Index, and the S & P500 Index simultaneously influence the Composite Stock Price Index.
INTRODUCTION
Capital markets have a very important role in almost all countries because they contribute strategically to a country's economic resilience.This directly influences investors' interest in investing in participating in the capital market, especially in stock investment.Investment is when a person or company invests resources such as money, time or labor in an asset or project with the hope of gaining greater returns or profits in the future.
The Indonesian Central Securities Depository (KSEI) even recorded the number of investors as of July 2023 with the age group under 30 years at 57.26%, the age group 31 to 40 years at 23.18%, the age group 41 to 50 years at 11.29%, the age group 51 to 60 years was 5.41%, and the age group over 60 years was 2.87%.This proves that the younger generation is more sensitive to investing.The main purpose of investing is to generate income, increase capital, or protect the value of assets from inflation.There are various forms of investment, one of which is financial assets such as shares, bonds, mutual funds, certificates of deposit and other instruments.Investors purchase these assets with the hope of obtaining a return on their investment through dividend income, interest, or an increase in the value of the asset.It is important to remember that investment always involves certain risks and the results cannot always be predicted, therefore the choice of investment type and investment strategy must be adjusted to the financial goals, risk tolerance and financial situation of the individual or company.A common strategy to reduce investment risk is diversification, that is, spreading investments across different types of assets.Oil Value, and Global Indices such as the Dow Jones Index, Nasdaq Index, S&P500 Index, and Hang Seng Index.The stock markets of developed countries such as America, Hong Kong, and others have a significant impact on the movement of the Indonesian stock market because a strong country's economy has a big influence on the economy of a weak country.In other words, the stock index of developed countries can influence the stock index of developing countries.
Inflation is an economic phenomenon that is feared in almost all countries.This occurs when the prices of goods and services generally increase continuously over a certain period of time, usually within one year [1].The inflation rate is described as the percentage increase in average prices over the period and is an important indicator in assessing the health of a country's economy [2].Low and stable inflation is generally seen as positive because it contributes to price stability and allows consumers and businesses to plan their finances better.However, if the inflation rate is too high it can reduce people's purchasing power and affect overall economic stability [3].
According to [4], inflation is the annual percentage increase in the general price level as measured by the consumer price index or other price index.[5], [6], [7], and [8] explain that inflation is a general and continuous tendency to increase prices.An increase in the price of one or two goods is not considered inflation unless the increase is widespread and affects most other goods.Increasing inflation can cause a decrease in purchasing power, meaning that the value of money can only be used to purchase goods and services in smaller quantities.When inflation rises, stock prices and returns generally fall, which means stocks with dividends also experience price declines.In this situation, investors can take advantage of this opportunity by buying shares at a cheaper price.
The Dow Jones Industrial Average (DJIA) is a stock index created by Charles Dow, editor of the Wall Street Journal and founder of Dow Jones & Company.The Dow Jones Industrial Average (DJIA) index was founded by Charles Dow in 1896 as a way to measure the performance of the industrial components of the American stock market.The Dow Jones Index is the oldest index on the United States market that is still operated and one of the benchmarks for the health of the United States stock market.The Dow Jones Index currently consists of the 30 largest and most widely listed companies in the United States.Changes in the stock price of the Dow Jones index result in dividend distribution and stock splits affecting the index value.Apart from that, the Dow Jones index has an influence on global markets because the United States is the benchmark for the world economy, for example the dollar is still the main means of payment used throughout the world.
Nasdaq is the abbreviation for National Association of Securities Dealers Automated
Quotations.Nasdaq is a stock index that reflects the performance of the technology market in the United States.Nasdaq is one of the leading stock indexes in the United States and changes in stock prices that occur in it have a big influence on the global stock market because the Nasdaq index is very popular in various media and investors.The Nasdaq index is very influential on the global market because until now the United States is still the benchmark for the world economy because the United States Dollar (USD) is still the main means of payment used throughout the world.The Nasdaq Index is one of the world's most famous stock market indexes that reflects the performance of a large number of companies listed on the Nasdaq Stock Exchange, especially technology companies, software, Internet companies and related sectors.
When mentioning "Nasdaq", what is meant is the Nasdaq Composite Index (abbreviated as Nasdaq Composite Index or Nasdaq Composite), which includes three thousand companies listed on the Nasdaq Stock Exchange.Apart from that, there are also other indexes that focus on certain sectors, such as the Nasdaq-100 which consists of the 100 largest companies traded on Nasdaq and focuses on technology and related sectors.This index is used as a tool to evaluate stock market performance in the United States, particularly with regard to technology companies and other innovative sectors.
An overview of [9] previous research entitled The Effect of Inflation, Rupiah Exchange Rate, Dow Jones Index, Nasdaq Index on the Composite Stock Price Index.Likewise, other research from [10] and [11] with the same title, Analysis of the Nasdaq Index on the IHSG, stated that it had a simultaneous effect that the results of the research conducted produced an influence on the Nasdaq Index on the Composite Stock Price Index.Monitoring the movements of these indices provides insight into changes in market sentiment and the performance of companies in the index portfolio.
Information regarding the value of the Nasdaq index can be found on various financial sites or stock trading platforms.
METHODS
To collect the data needed for this research, the author used movement data on the Composite Stock Price Index (IHSG), Inflation, Rupiah Exchange Rate, Dow Jones Index, NASDAQ Index, and S & P500 Index obtained from the website www.investing.com,which is managed by Investing M.S Fusion Media Ltd. 7 Florinis Str.Greg Tower, 2nd Floor 1065 Nicosia, Cyprus.The selection of data sources is based on the availability of data that is relevant to the research, as well as the belief that the data obtained from this source has a high level of accuracy.The research and data collection process was carried out over a three month period, namely September to December 2023.
The variables studied are independent variables.The aim is to identify and analyze how inflation, the rupiah exchange rate, the Dow Jones index, the NASDAQ index and the S&P500 index can influence other variables such as the movement of the Composite Stock Price Index (IHSG).Therefore, inflation, the rupiah exchange rate, the Dow Jones index, the NASDAQ index and the S & P500 index are considered as variables that influence the dependent variable related to the Composite Stock Price Index in this research.
Population and samples taken via web investing for the period January 2019 to November 2023, totaling 354 samples.This amount is based on each of the six variables that have been determined, namely Inflation, Rupiah Exchange Rate, Dow Jones Index, NASDAQ Index, S & P500 Index and Composite Stock Price Index.
The analysis used in this research is multiple regression analysis because it involves more than two independent variables because if there is only one independent variable then simple regression analysis is used.However, in the regression analysis test, there are four regression assumption tests, including the normality test, multicolosity test, heroscedasticity test, and auto correlation test.
Normality Test
According to [12], the normality test is used to assess whether the variable data studied is normally distributed or not.The normality test has several methods that can be used, such as the normal P-Plot graph or statistical analysis such as the Kolmogorov -Smirnov test.Analysis using the normal P-Plot graph makes decisions based on the distribution of data along the diagonal.If the data spreads diagonally and follows its direction, it can be concluded that the data is normally distributed.Meanwhile, the Kolmogorov-Smirnov test states that data is normally distributed if the significance value is greater than 5% or 0.05 and if the significance value is less than 5% or 0.05 then the data does not follow a normal distribution.
Multicollinearity Test
Multicollinearity is a linear relationship between independent variables.According to [13] states that the multicolosity test is to test whether there is a high or perfect correlation between independent variables in a regression model.This can cause large errors so that when testing the coefficients, the t-count becomes smaller than the t-table.The multicollinearity test can be carried out using the pairwise correlation method, if the correlation coefficient of each independent variable is smaller than 0.90 (> 0.90) then multicollinearity does not occur and vice versa.The multicolosity test is carried out to ensure that there is no multicolosity in the regression model created, so that the results of the regression analysis obtained are accurate.
Heroscedasticity Test
According to [14], the heteroscedasticity test is a statistical test carried out on a regression model to test whether the variance of the residuals of an observation is not the same compared to other observations.Residual is the difference between the observed or observed value and the predicted value.If the residual variation from one observation to another is constant, then this is called homoscedasticity.On the other hand, if the variations in the residuals are different, it is called heteroscedasticity.The heteroscedasticity test can be carried out using a scatterplot graph or the predicted value of the dependent variable called SRESID with residual error ZPRED.The basis for decision making in scatterplot graphic analysis is that if there is no particular pattern and it does not extend above or below zero on the Y axis then it can be concluded that heteroscedasticity does not occur.Apart from that, the heteroscedasticity test can be tested using the glacier test method.The basis for decision making in the glacier test analysis is that if the significance value is greater than 0.05 (>0.05), it can be concluded that there are no symptoms of heteroscedasticity and vice versa.
Auto Correlation Test
According to [15], the autocorrelation test is used to test whether there is a correlation between confounding errors in period t and confounding errors in period t-1 (previous) in the linear regression model.The Durbin-Watson test is one of the most commonly used methods to test autocorrelation.Autocorrelation testing is carried out to ensure that the regression model created is free from autocorrelation so that the resulting regression analysis results can be reliable and accurate.
Multiple regression analysis is a statistical method used to understand how to explain the relationship between a dependent variable (response variable) and two or more independent variables simultaneously.The main goal of multiple regression analysis is to develop a mathematical model that can explain the complex relationships between these variables.The regression coefficient can be written in the form of a mathematical equation as follows: In statistical regression analysis, there are three tests that are generally used to test hypotheses regarding the relationship between variables in the regression model.These three tests are:
T test (partial regression coefficient test)
The t test is used to test whether the regression coefficient of each independent variable in the regression model is individually significant or not.This helps us understand the relative contribution of each independent variable to the dependent variable.
F test (simultaneous regression coefficient test)
The F test tests whether at least one of the independent variables in the overall regression model has a significant influence on the dependent variable.This tests the validity of the entire regression model.
Coefficient of Determination
The coefficient of determination (Adjusted R-squared) measures how well the regression model explains variations in the dependent variable.Provides information regarding the percentage of variation in the dependent variable that can be explained by the independent variables in the model.
RESULTS AND DISCUSSION
Secondary data obtained through the official Bank Indonesia website is accessed via www.investing.com.Inflation data, Rupiah exchange rate data, Dow Jones index data, Nasdaq index data and the Composite Stock Price Index studied in this research are for the period January 2019 to November 2023 with 59 samples along with the analysis results as below: The lowest point of inflation was in September 2019 with a value of -0.27 percent (deflation), while the highest inflation rate occurred in September 2022 with a value of 1.17 percent.The average inflation rate during the research period was 0.2341 percent per month with a standard deviation of 0.27828 percent.This means that the average inflation rate during the research period ranged from -0.04418 percent to 0.51238 percent per month.
The Rupiah exchange rate was at its lowest in January 2020 with a value of IDR 13,650 per USD, while the highest value occurred in March 2020 with a value of IDR 16,300 per USD.Overall, the average rupiah exchange rate during the research period was IDR 14,601.22 per USD with a standard deviation of IDR 553,708.This shows that the rupiah exchange rate has changed (fluctuated) by an average of IDR 553,708 per USD from an average value of IDR 14,601.22.
The Dow Jones Index had its lowest value in March 2020 with a value of IDR 21,917 per share and the highest value was achieved in December 2021 with a value of IDR 36,338 per share.The average price per share of the Dow Jones index during the research period was IDR 30,762.53 with a standard deviation of IDR 3,858,010.This shows that the Dow Jones index has changed (fluctuated) by an average of IDR 3,858,010 per share from an average value of IDR 30,762.53 The Nasdaq index had the lowest value in January 2020 with a value of IDR 7,282 per share and the highest value was achieved in December 2021 with a value of IDR 15,645 per share.The average price per share of the Nasdaq index during the research period was IDR 11,538.12 with a standard deviation of IDR 2,504,094.This shows that the Nasdaq index experienced changes (fluctuated) by an average of IDR 2,504,094 per share from an average value of IDR 11,538.12.
The S & P500 Index had its lowest value in March 2020 with a value of IDR 2,585 per share and the highest value was achieved in December 2021 with a value of IDR 4,766 per share.The average price per share of the S&P500 index during the research period was IDR 3,749.78 with a standard deviation of IDR 630,402.This shows that the S & P500 index has changed (fluctuated) by an average of IDR 630,402 per share from an average value of IDR 3,749.78The classical assumption test is used to measure the relationship between two or more variables, as well as to determine the direction of the relationship between the dependent and independent variables.Classical analysis has four tests, namely normality test, multicollinearity test, heteroscedasticity test, and autocorrelation test.
Normality Test
The Normality Test is carried out to measure whether the dependent variable and independent variables in the regression model have a normal distribution or not.This test method is carried out by analysis of Histogram graphs, Normal P-Plot graphs, and One-Sample Kolmogorov-Smirnov statistical tests.Histogram test results using SPSS produce the following output: Based on the Normal P-plot graphic output in the figure, it can be seen that this research data shows a pattern that matches the normal distribution pattern.This can be seen from the data distribution that follows the diagonal line.Therefore, it can be concluded that the data in this study has normal distribution.To strengthen the results of the normality test previously explained, the researcher carried out a 62 statistical test using the Kolmogorov-Smirnov test and produced the following output.:Based on table 4.9 above, it can be seen that the correlation coefficient value for the inflation variable is -0.246, the Rupiah exchange rate is 0.003, the Dow Jones Index is 0.480, the Nasdaq Index is -0.024 and the S&P500 Index is -0.878.Because the results of all variables are less than 0.90 (< 0.90), this means that there are no symptoms of multicollinearity.Based on the output in Figure 4.3 above, it can be concluded that heteroscedasticity does not occur because the points are spread above and below the number 0 on the Y axis.However, this research is strengthened by carrying out heteroscedasticity testing using the Gleser Test method with decision making based on the significance value of each variable, if the value is significant is greater than 0.05 (> 0.05) then there are no symptoms of heteroscedasticity and vice versa if the variable significance value is smaller than 0.05 (< 0.05) then it can be concluded that heteroscedasticity occurs.The test results of the Glazer Test use SPSS and produce the following
Factors
that can significantly influence stock investment activities and the movement of the Composite Stock Price Index (IHSG) on the Indonesian Stock Exchange come from external and internal factors.Internal factors are influenced by company performance, but external factors are influenced by macroeconomic factors.Macroeconomic factors that can influence the value of the Composite Stock Price Index (IHSG) are Inflation, Rupiah Exchange Rate, World Gold Value, World
Table 1 .
Descriptive Statistical Analysis Source: SPSS Data Processing Results 25.0 Version | 4,397.4 | 2024-03-26T00:00:00.000 | [
"Economics"
] |
Visual Object Tracking Based on Cross-Modality Gaussian-Bernoulli Deep Boltzmann Machines with RGB-D Sensors
Visual object tracking technology is one of the key issues in computer vision. In this paper, we propose a visual object tracking algorithm based on cross-modality featuredeep learning using Gaussian-Bernoulli deep Boltzmann machines (DBM) with RGB-D sensors. First, a cross-modality featurelearning network based on aGaussian-Bernoulli DBM is constructed, which can extract cross-modality features of the samples in RGB-D video data. Second, the cross-modality features of the samples are input into the logistic regression classifier, andthe observation likelihood model is established according to the confidence score of the classifier. Finally, the object tracking results over RGB-D data are obtained using aBayesian maximum a posteriori (MAP) probability estimation algorithm. The experimental results show that the proposed method has strong robustness to abnormal changes (e.g., occlusion, rotation, illumination change, etc.). The algorithm can steadily track multiple targets and has higher accuracy.
Introduction
Visual object tracking is one of the key research topics in the field of computer vision. In recent years, it has had a wide range of applications, such as robot navigation, intelligent video surveillance, andvideo measurement [1][2][3][4]. Despite many research efforts, visual object tracking is still regarded as a challenging problem due to changes in object appearance, occlusions, complex motion, illumination variation and background clutter [5].
A typical visual object tracking algorithm often includes three major components: a state transition model, an observation likelihood model and a search strategy. A state transition model is used to model the temporal consistency of the states of a moving object, whereas an observation likelihood model describes the object and observations based on visual representations. Undoubtedly, feature representation is the most important factor in visual object tracking. Most of existing RGB-D trackers [6][7][8] tend to use hand-crafted features to represent target objects, such as Harr-like features [9], histogram of oriented gradients (HOG) [10], and local binary patterns (LBP) [11]. Hand-crafted features aim to describe some pre-defined image patterns, but they cannot capture thecomplex and specific characteristics of target objects. Hand-crafted features may lead to the loss of unrecoverable information which is suitable for tracking in different scenarios. With the rapiddevelopment of computation power and the emergence of large-scale visual data, deep learning has received much attention and had a promising performance in computer vision tasks, e.g., object tracking [12], object detection [13], and image classification [14]. Wang et al. proposed a so-called deep learning tracker (DLT) for robust visual tracking [15]. DLT trackers learn generic features from auxiliary natural images offline. ADLT tracker cannot obtain deep features with temporal invariance, which is important for visual object tracking. In [16], the authors proposed a video tracking algorithm using learned hierarchical features in which the hierarchical features are learned via a two-layer convolutional neural network. Ding et al. [17] proposed a new tracking-learning-data architecture to transfer a generic object tracker to a blur invariant object tracker without deblurring image sequences. One of the research focuses of this paper is how to use deep learning effectively to extract the features of the target objects in RGB-D data.
To the best of our knowledge, the existing visual tracking methods using deep learning follow a similar procedure, which tracks objects in 2D sequences. Object tracking is performed over 2D video sequences in most early research works like TLD tracker [18], MIL tracker [19] and VTD tracker [20]. With the great popularity of affordable depth sensors, such as Kinect, Asus Xtion, and PrimeSense, an explosive growth of RGB-D data that can be used nowadays has been seen. Reliable depth images can provide valuable information to improve tracking performance. In [21], the author establishesa unified benchmark dataset of 100 RGB-D videos, which provide a foundation for further research in both RGB and RGB-D tracking. One of theresearch focuses of this paper is how to fuse RGB information and depth information effectively to improve the performance of visual object tracking in RGB-D data.
To overcome the problems in the existing methods, we propose a visual object tracking algorithm based on cross-modality feature learning using Gaussian-Bernoulli deep Boltzmann machines (DBM) over RGB-D data. A cross-modality deep learning framework is usedto learn a robust tracker forRGB-D data. The cross-modality features of the samples are input into the logistic regression classifier, andthe observation likelihood model is established according to the confidence score of the classifier. We obtain the object tracking results over RGB-D data using aBayesian maximum a posteriori probability estimation algorithm. Experimental results show that such a cross-modality learning can improve the tracking performance.
The main contributions of this paper can be summarized as follows: • We present a cross-modality Gaussian-Bernoulli deep Boltzmann machine (DBM) to learn the cross-modality features of target objects in RGB-D data. The proposed cross-modality Gaussian-Bernoulli DBM is constructed with two single-modality Gaussian-Bernoulli DBMs by adding an additional layer of binary hidden units on top of them, which can fuse RGB information and depth information effectively. • A unified RGB-D tracking framework based on Bayesian MAP is proposed, in which the robust appearance description with cross-modality features deep learning, temporal continuity is fully considered in the state transition model.
•
Extensive experiments are conducted to compare our tracker with several state-of-the-art methods on the recent benchmark dataset [21]. From experimental results, we can see that the proposed tracker performs favorably against the compared state-of-the-art trackers.
The remainder of the paper is organized as follows. First, feature learning over RGB-D data with cross-modality deep Boltzmann machines is described in the next section. Then we introduce our tracking framework in Section 3. The implementation of our proposed method is presented in Section 4. Experimental results and analysis are demonstrated in Section 5, and finally we draw conclusions in Section 6.
Boltzmann Machine
TheBoltzmann machine (BM) was proposed by Hinton and Sejnowski [22]. A Boltzmann machine is a feedback neural network consisting of fully connected coupled random neurons. The connections between neurons are symmetric, and there is no self-feedback. The outputs of neurons only have two states (active and inactive) which are expressed by 0 and 1, respectively. A set of visible units v ∈ {0, 1} D and a set of hidden units h ∈ {0, 1} F are included in BM (as shown in Figure 1). The visible units and hidden units are composed ofthe visible nodes and hidden nodes, and D and F represent the number of visible nodes and hidden layer nodes, respectively.
Boltzmann Machine
TheBoltzmann machine (BM) was proposed by Hinton and Sejnowski [22]. A Boltzmann machine is a feedback neural network consisting of fully connected coupled random neurons. The connections between neurons are symmetric, and there is no self-feedback. The outputs of neurons only have two states (active and inactive) which are expressed by 0 and 1, respectively. A set of visible and a set of hidden units are included in BM (as shown in Figure 1). The visible units and hidden units are composed ofthe visible nodes and hidden nodes, and D and F represent the number of visible nodes and hidden layer nodes, respectively. We formulate the energy function over the state { , } v h as: are the model parameters: , , W L R represent the symmetric interaction terms of visible nodes to hidden nodes, visible nodes to visible nodes, and hidden nodes to hidden nodes. The diagonal elements of L and R are set to 0. B and A are the threshold values of the visible layer and the hidden layer.
The model defines a probability distribution over a visible vector v as: is called the partition function, and * P is an unnormalized probability.
The following formulations give the conditional distributions over hidden and visible units: is the logistic function.
Restricted Boltzmann Machine
Setting both 0 L and 0 R in Equation (1), we will recover the model of a restricted Boltzmann machine (RBM), as shown in Figure 2. We formulate the energy function over the state {v, h} as: where Ψ = {W, L, R, B, A} are the model parameters: W, L, R represent the symmetric interaction terms of visible nodes to hidden nodes, visible nodes to visible nodes, and hidden nodes to hidden nodes. The diagonal elements of L and R are set to 0. B and A are the threshold values of the visible layer and the hidden layer. The model defines a probability distribution over a visible vector v as: where is called the partition function, and P * is an unnormalized probability.
The following formulations give the conditional distributions over hidden and visible units: where σ(x) = 1/(1 + exp(−x)) is the logistic function.
Restricted Boltzmann Machine
Setting both L = 0 and R = 0 in Equation (1), we will recover the model of a restricted Boltzmann machine (RBM), as shown in Figure 2. A restricted Boltzmann machine(RBM) is a generative stochastic artificial neural networkthat can learn a probability distribution over its set of inputs. It is an undirected graphical model with each visible unit only connected to each hidden unit. The energy function over the visible and hidden units. where where the normalizing factor ( ) Z denotes the partition function.
Gaussian-Bernoulli Restricted Boltzmann Machines
When inputs are real-valued images, we formulate the energy function of the Gaussian-Bernoulli RBM over the state { , } v h as follows [23]:
Gaussian-Bernoulli Deep Boltzmann Machine
A deep Boltzmann machine (DBM) [24] contains a set of visible units , and a sequence of layers of hidden units Connections only exist between hidden units in adjacent layers. We illustrate a two-layer Gaussian-Bernoulli deep Boltzmann machine, consisting of learning a stack of modified Gaussian-Bernoulli RBMs (see Figure 3). A restricted Boltzmann machine(RBM) is a generative stochastic artificial neural networkthat can learn a probability distribution over its set of inputs. It is an undirected graphical model with each visible unit only connected to each hidden unit. The energy function over the visible and hidden units.
where the normalizing factor Z(Ψ) denotes the partition function.
Gaussian-Bernoulli Restricted Boltzmann Machines
When inputs are real-valued images, we formulate the energy function of the Gaussian-Bernoulli RBM over the state {v, h} as follows [23]: where Ψ = {a, b, W, σ} are the model parameters, b i and a j are biases corresponding to visible and hidden variables, respectively, W ij is the matrix of weights connecting visible and hidden nodes, and σ i is the standard deviation associated with a Gaussian visible variable v i .
Gaussian-Bernoulli Deep Boltzmann Machine
A deep Boltzmann machine (DBM) [24] contains a set of visible units v ∈ {0, 1} D , and a sequence of layers of hidden units Connections only exist between hidden units in adjacent layers. We illustrate a two-layer Gaussian-Bernoulli deep Boltzmann machine, consisting of learning a stack of modified Gaussian-Bernoulli RBMs (see Figure 3). Sensors 2017, 17, 121 5 of 17 The energy function of the joint configuration are the model parameters, and denote the set of hidden units. The probability distribution over a visible vector v can be modelled as:
Feature Learning UsingCross-Modality Deep Boltzmann Machines over RGB-D Data
ABoltzmann machine (BM) is an effective tool in representing probability distribution over its inputs. Deep Boltzmann Machines (DBMs) have been successfully used in many application domains, e.g., topic modelling, classification, dimensionality reduction, feature learning, etc. According to the task, DBMs can be trained in either unsupervised or supervised ways. In this paper, we propose the cross-modality DBMs for feature learning in visual tracking over RGB-D data. In this section, we first describe how to establish cross-modality DBMs, review BMs, RBMs and Gaussian-Bernoulli restricted Boltzmann machines, then go over them in detail.
Multimodal deep learning was proposed forvideo and audio [25,26]. In RGB-D data, we can also learn deep features over multiple modalities (RGB modality and depth modality). The proposed cross-modality DBM is constructed with two single-modality Gaussian-Bernoulli DBMs by adding an additional layer of binary hidden units on top of them (see Figure 4). Firstly, we model a RGBspecific Gaussian-Bernoulli DBM with two hidden layers as Figure 4a, where be the two layers of hidden units in the RGB-specific DBM. Then, the energy function of Gaussian-Bernoulli DBM over where ( ) RGB i is thedeviation of the corresponding Gaussian model, and RGB is the parameter vector of RGB-specific Gaussian-Bernoulli DBM. Therefore, the joint distribution of the energy-based probabilistic model is defined through an energy function as: The energy function of the joint configuration {v, h (1) , h (2) } is formulated as: where Ψ = {W (1) , W (2) } are the model parameters, and h = {h (1) , h (2) } denote the set of hidden units. The probability distribution over a visible vector v can be modelled as:
Feature Learning UsingCross-Modality Deep Boltzmann Machines over RGB-D Data
ABoltzmann machine (BM) is an effective tool in representing probability distribution over its inputs. Deep Boltzmann Machines (DBMs) have been successfully used in many application domains, e.g., topic modelling, classification, dimensionality reduction, feature learning, etc. According to the task, DBMs can be trained in either unsupervised or supervised ways. In this paper, we propose the cross-modality DBMs for feature learning in visual tracking over RGB-D data. In this section, we first describe how to establish cross-modality DBMs, review BMs, RBMs and Gaussian-Bernoulli restricted Boltzmann machines, then go over them in detail.
Multimodal deep learning was proposed forvideo and audio [25,26]. In RGB-D data, we can also learn deep features over multiple modalities (RGB modality and depth modality). The proposed cross-modality DBM is constructed with two single-modality Gaussian-Bernoulli DBMs by adding an additional layer of binary hidden units on top of them (see Figure 4). Firstly, we model a RGB-specific Gaussian-Bernoulli DBM with two hidden layers as Figure 4a, be the two layers of hidden units in the RGB-specific DBM. Then, the energy function of Gaussian-Bernoulli DBM over v RGB , h RGB is defined as: is thedeviation of the corresponding Gaussian model, and Ψ RGB is the parameter vector of RGB-specific Gaussian-Bernoulli DBM. Therefore, the joint distribution of the energy-based probabilistic model is defined through an energy function as: where Z(Ψ RGB ) is the partition function.
Sensors 2017, 17, 121 6 of 17 (1 ) ( 2 ) (1 ) where σ Therefore, the joint probability distribution over the cross-modal input {v RGB , v Depth } can be written as: where Ψ cross−modality is the parameter vector of cross-modality Gaussian-Bernoulli DBM. The task of learning the cross-modality Gaussian-Bernoulli DBM is the maximum likelihood learning for Equation (6) with respect to the model parameters.
Bayesian Framework
In this paper, the object tracking is formulated as a hidden state variable Bayesian maximum a posteriori (MAP) estimation problem in the Hidden Markov model. Given a set of observed variables Z t = {Z 1 , Z 2 , . . . , Z t }, we can estimate the hidden state variable X t = X 1 t , X 2 t , . . . . . . X N t by using Bayesian MAP theory [27].
The posteriori probability distribution according to the Bayesian theory can be modelled as the following derivation: where p(Z t |X t ) stands for an observation likelihood model and p( X t |X t−1 ) is called a state transition model for two consecutive frames. We can obtain the optimal stateX t among all the candidates through maximum posterior probability estimation:
State Transition Model
The state variable is defined as X t = {x t , y t , θ t , s t , α t , φ t }, which includes the six parameters of the motion affine transformation, where x t and y t denote the x-direction and y-direction translation of the object in the frame t respectively, θ t represents the rotation angle, s t stands for the scale change, α t denotes the aspect ratio, and φ t represents skew direction at time t.
We assume that the candidate states are generated according to Gaussian distribution: where Σ is a diagonal covariance matrix whose diagonal elements are σ 2
Observation Likelihood Model
In this paper, the observation model that we use is discriminative. A binary linear classifier is adopted to classify tracking observations into object class and background class during tracking. Observations are represented using features learned from the DBM introduced previously. We can obtain a training dataset with approximate labels after extracting features of positive and negative samples. Deep representations are likely to be linearly separable, and linear classifiers are less prone to overfitting. We adopt the logistic regression classifier owing to its capability of providing predictions in probability estimation.
Let h 3 i ∈ R r×1 denote the deep feature for the i-th training sample, and y i ∈ {−1, +1} represent the label for the i-th training sample. Z + = [h 3 1 + , h 3 2 + , . . . , h 3 D + ] ∈ R r×D + stands for the positive training set with their respective labels as Y + = [y 1 + , y 2 + , . . . , represents the negative training set with their respective labels as Training the logistic regression classifier by optimizing: where C + ∈ R is the parameter to weight the logistic cost of the positive-class and C − ∈ R is the parameter to weight the logistic cost of the negative-class logistic. Weight regularization w is added to the cost function in Equation (19) to reduce overfitting. In the prediction stage, the confidence score of the trained logistic regression classifier can be computed as follows:
The Implementation of Our Proposed Method
Our method has two major components, which are shown in Figures 5 and 6. In the first place, as demonstrated in Figure 5, unlabeled patches in RGB and depth modality are used to train the cross-modality Gaussian-Bernoulli DBM offline.
The Implementation of Our Proposed Method
Our method has two major components, which are shown in Figures 5 and 6. In the first place, as demonstrated in Figure 5, unlabeled patches in RGB and depth modality are used to train the cross-modality Gaussian-Bernoulli DBM offline. Then, the trained cross-modality Gaussian-Bernoulli DBM is transferred to an observational model for visual tracking online based on Bayesian MAP, as shown in Figure 6.
Experimental Results and Analysis
The experiments of our proposed tracking algorithm is implemented on MATLAB R2014a, Intel(R) Core(TM) i7-4712MQ<EMAIL_ADDRESS>GHz and TITAN GPU, 8.00 GB RAM, Windows 8.1 operating system, in Beijing, China.
5.1.Qualitative Evaluation
In order to show the robustness of the visual object tracking algorithm discussed in this paper, we compare our tracker with several state-of-the-art methods on arecent benchmark dataset [21] in different environments with heavy or long-time partial occlusion, rotation, scale change, and fast Then, the trained cross-modality Gaussian-Bernoulli DBM is transferred to an observational model for visual tracking online based on Bayesian MAP, as shown in Figure 6.
The Implementation of Our Proposed Method
Our method has two major components, which are shown in Figures 5 and 6. In the first place, as demonstrated in Figure 5, unlabeled patches in RGB and depth modality are used to train the cross-modality Gaussian-Bernoulli DBM offline. Then, the trained cross-modality Gaussian-Bernoulli DBM is transferred to an observational model for visual tracking online based on Bayesian MAP, as shown in Figure 6.
Experimental Results and Analysis
The experiments of our proposed tracking algorithm is implemented on MATLAB R2014a, Intel(R) Core(TM) i7-4712MQ<EMAIL_ADDRESS>GHz and TITAN GPU, 8.00 GB RAM, Windows 8.1 operating system, in Beijing, China.
5.1.Qualitative Evaluation
In order to show the robustness of the visual object tracking algorithm discussed in this paper, we compare our tracker with several state-of-the-art methods on arecent benchmark dataset [21] in different environments with heavy or long-time partial occlusion, rotation, scale change, and fast
Experimental Results and Analysis
The experiments of our proposed tracking algorithm is implemented on MATLAB R2014a, Intel(R) Core(TM) i7-4712MQ<EMAIL_ADDRESS>GHz and TITAN GPU, 8.00 GB RAM, Windows 8.1 operating system, in Beijing, China.
Qualitative Evaluation
In order to show the robustness of the visual object tracking algorithm discussed in this paper, we compare our tracker with several state-of-the-art methods on arecent benchmark dataset [21] in different environments with heavy or long-time partial occlusion, rotation, scale change, and fast motion. Given the limited space, in this section we only list four of them to show the experimental results and the forms of data statistics.
We compare our method with several state-of-the-art trackers, including TLD Tracker [18], MIL Tracker [19],VTD Tracker [20], and RGB-D Tracker [28], CT Tracker [29], Struck Tracker [30], Deep Tracker [15], and Multi-cues Tracker [31],andwe ranthe experiments based on the code provided by the authors. Figure 7 demonstrates that our method performs well in terms of rotation, scale and position when the object undergoes severe occlusion. The MIL tracker and VTD tracker are sensitive to occlusion.
Sensors 2017, 17, 121 10 of 17 motion. Given the limited space, in this section we only list four of them to show the experimental results and the forms of data statistics. We compare our method with several state-of-the-art trackers, including TLD Tracker [18], MIL Tracker [19],VTD Tracker [20], and RGB-D Tracker [28], CT Tracker [29], Struck Tracker [30], Deep Tracker [15], and Multi-cues Tracker [31],andwe ranthe experiments based on the code provided by the authors. Figure 7 demonstrates that our method performs well in terms of rotation, scale and position when the object undergoes severe occlusion. The MIL tracker and VTD tracker are sensitive to occlusion. motion. Given the limited space, in this section we only list four of them to show the experimental results and the forms of data statistics. We compare our method with several state-of-the-art trackers, including TLD Tracker [18], MIL Tracker [19],VTD Tracker [20], and RGB-D Tracker [28], CT Tracker [29], Struck Tracker [30], Deep Tracker [15], and Multi-cues Tracker [31],andwe ranthe experiments based on the code provided by the authors. Figure 7 demonstrates that our method performs well in terms of rotation, scale and position when the object undergoes severe occlusion. The MIL tracker and VTD tracker are sensitive to occlusion. RGB Figure 9 illustrates the tracking results on the test video with severe occlusion, appearance change and fast motion. From the results, we can notice that the TLD, MIL and VTD methods are sensitive to target appearance change or occlusion. Figure 11 illustrates the "bad" tracking results of our method,meaning frames where tracking failures are observed. When the objects are all occluded, the tracking results of our method experience a drift phenomenon. As shown in experimental results, the proposed tracking method performs favorably against the state-of-the-art tracking methods in handling challenging video sequences, but there are some limitations for our method. The robustness of the proposed tracking method is not strong enough to solve allocclusion and abrupt movement.
Quantitative Evaluation
We use two measurements to quantitatively evaluate tracking performances. The first one is called average center location error [32] which measures distances of centers between tracking results and ground truths in pixels. The second one is called success rate (SR) which is calculated according and indicates theextent of region overlapping between tracking results T R and G R . As shown in experimental results, the proposed tracking method performs favorably against the state-of-the-art tracking methods in handling challenging video sequences, but there are some limitations for our method. The robustness of the proposed tracking method is not strong enough to solve allocclusion and abrupt movement.
Quantitative Evaluation
We use two measurements to quantitatively evaluate tracking performances. The first one is called average center location error [32] which measures distances of centers between tracking results and ground truths in pixels. The second one is called success rate (SR) which is calculated according to area(R T ∩R G ) area(R T ∪R G ) and indicates theextent of region overlapping between tracking results R T and R G . Figures 12-15 report the average center location errors of different tracking methods over three test videos. The comparison results show that the proposed method has a smaller average center location error than the state-of-the-art methods indifferent situations. As shown in experimental results, the proposed tracking method performs favorably against the state-of-the-art tracking methods in handling challenging video sequences, but there are some limitations for our method. The robustness of the proposed tracking method is not strong enough to solve allocclusion and abrupt movement.
Quantitative Evaluation
We use two measurements to quantitatively evaluate tracking performances. The first one is called average center location error [32] which measures distances of centers between tracking results and ground truths in pixels. The second one is called success rate (SR) which is calculated according and indicates theextent of region overlapping between tracking results T R and G R . Table 1 reports the success rates, where larger scores mean more accurate results. Table 2 lists the average speed of each method on the recent benchmark dataset [21]. The average speed of our method is 0.14 fps, implemented in Matlab without optimization for speed. The fine-tuning of our method is time-consuming. Table 2. The average speed of each method on the recent benchmark dataset [21].
Method
The Average Speed (fps)
Conclusions
By analyzing the problems of the existing technologies, this paper proposes a visual object tracking algorithm based on cross-modality features learning using Gaussian-Bernoulli deep Boltzmann machines (DBM) over RGB-D data. We extract cross-modality features of the samples in RGB-D video data based on across-modality Gaussian-Bernoulli DBM and obtain the object tracking results over RGB-D data using aBayesian maximum a posteriori probability estimation algorithm. The experimental results show that the proposed method greatly improves the robustness and accuracy of thealgorithm. In the future, we will extend the proposed method to solve other vision problems (e.g., object detection, face recognition, etc.). | 6,015.8 | 2017-01-01T00:00:00.000 | [
"Computer Science"
] |
Additive and heterozygous (dis)advantage GWAS models reveal candidate genes involved in the genotypic variation of maize hybrids to Azospirillum brasilense
Maize genotypes can show different responsiveness to inoculation with Azospirillum brasilense and an intriguing issue is which genes of the plant are involved in the recognition and growth promotion by these Plant Growth-Promoting Bacteria (PGPB). We conducted Genome-Wide Association Studies (GWAS) using additive and heterozygous (dis)advantage models to find candidate genes for root and shoot traits under nitrogen (N) stress and N stress plus A. brasilense. A total of 52,215 Single Nucleotide Polymorphism (SNP) markers were used for GWAS analyses. For the six root traits with significant inoculation effect, the GWAS analyses revealed 25 significant SNPs for the N stress plus A. brasilense treatment, in which only two were overlapped with the 22 found for N stress only. Most were found by the heterozygous (dis)advantage model and were more related to exclusive gene ontology terms. Interestingly, the candidate genes around the significant SNPs found for the maize–A. brasilense association were involved in different functions previously described for PGPB in plants (e.g. signaling pathways of the plant's defense system and phytohormone biosynthesis). Our findings are a benchmark in the understanding of the genetic variation among maize hybrids for the association with A. brasilense and reveal the potential for further enhancement of maize through this association.
Introduction
Currently, major agro-systems are highly dependent on chemical fertilizers and pesticide inputs. One of the main strategies to develop sustainable agriculture in the face of natural resource scarcity and environmental impacts caused by the application of these products is the use of Plant Growth-Promoting Bacteria (PGPB) inoculants. These bacteria in association with plants may generate several benefits to the host, such as phytohormone biosynthesis, PLOS
Bacterial strain and inoculum
The bacterial strain A. brasilense Ab-V5 was selected from maize roots in Brazil and is registered by the Brazilian Ministry of Agriculture, Livestock and Food Supply (MAPA) for the inoculant production for maize, rice and wheat [3,35]. In addition, it is part of the Culture Collection of Diazotrophic and PGPB of Embrapa Soybean (Londrina, Paraná, Brazil). The bacterial inoculum of A. brasilense Ab-V5 was prepared in the Laboratory of Genetics of Microorganisms "Prof. João Lúcio de Azevedo" at ESALQ/USP, Piracicaba-SP, Brazil, and taken immediately to the experimental area. Bacterial inoculum was prepared by growing Ab-V5 in Dextrose Yeast Glucose Sucrose (DYGS) liquid medium [36] at 28˚C with 150 rpm agitation. The inoculum concentration was adjusted to approximately 1 × 10 8 UFC mL -1 and transferred with a pipette into plastic bags containing the maize seeds of each genotype individually. Sowing was done about 30 min after inoculation.
Plant material and greenhouse experiments
The association panel was comprised of 118 single-cross hybrids from a diallel mating design between 19 tropical maize inbred lines with genetic diversity to nitrogen-use efficiency [37][38][39]. The plants were grown under semi-controlled conditions in a greenhouse located at the University of São Paulo, Brazil (22˚42' 39" S; 47˚38' 09" W, altitude 540 m), in two years: November-December 2016 and February-March 2017. A randomized complete block experimental design with three replications spatially arranged under two countertops was adopted in each season. Two main treatments were evaluated: N stress without bacterial inoculation and N stress plus A. brasilense inoculation. The decision to non-input N fertilizer was due to its reported negative effects on N fixation by diazotrophic bacteria [40,41]. In each plot, three seeds were sown at 3 cm depth in plastic pots of 3 L capacity containing unsterilized loam soil from an area not in agricultural use. Information about the soil chemical and physical characteristics is available in Vidotti et al. [42]. After germination, the seedlings were thinned singly.
Only potassium chloride and single phosphate fertilizers were added to the soil according to the general crop demand. The average temperature was semi-controlled (20-33˚C), and supplementation of luminosity was by fluorescent lamps to simulate a photoperiod of 12 h. The water supply was provided manually by pot, with the same amount applied to all of them and always maintaining a well-watered condition. During the conduction of the experiments, no insect or pathogen attack was detected, and pesticides were not used. Approximately 35 days after the emergence, when most of the hybrids had reached the V7 stage (seven expanded leaves), plant height (PH, cm) was measured from the soil level to the insertion of the least expanded leaf. The shoot was harvested and dried in a forced-draft oven at 60˚C for 72 h to determine the shoot dry mass (SDM, g). The soil particles in each root system were carefully removed with water and the individual storage was performed inside plastic pots in 25% ethanol solution for preservation. The root images acquired by an Epson LA2400 scanner (2400 dpi resolution) were analyzed by WinRHIZO TM (Reagent Instruments Inc., Quebec, Canada). This software provided the measurements of root average diameter (RAD, mm), root volume (RV, cm 3 ), and the total length of a series of root diameter classes. The length of fragments with a diameter class less than or equal to 0.5 mm were considered as the lateral root length (roots from the axial roots-LRL, cm), while that with diameter classes greater than 0.5 mm were considered as axial root length (comprising crow, seminal and primary roots-ARL, cm) [43]. We determined the root dry mass (RDM, g) after drying the roots under the same conditions used for SDM measurement. This trait was used to calculate the specific root length (SRL, cm g -1 ) and specific root surface area (SRSA, cm 2 g -1 ) dividing the total root length and the surface area by the RDM, respectively. Furthermore, the root/shoot ratio (RSR, g g -1 ) was obtained by dividing the RDM by the SDM. In total, 10 traits were evaluated and approximately 1416 root systems analyzed.
Phenotypic analyses
The analyses were conducted using Restricted Maximum Likelihood/Best Linear Unbiased Predictor (REML/BLUP) mixed models, by ASReml R package [44], considering the following model: where y is the vector of phenotypic observations of the traits evaluated on maize hybrids; β E is the vector of fixed effects of year; β B is the vector of fixed effects of block within year; β C is the vector of fixed effects of countertop within block and year; β I is the vector of fixed effects of inoculation; β EI is the vector of fixed effects of inoculation × year interaction; u G is the vector of random effects of genotype, where u G e N 0; . X E , X B , X C , X I , X EI , Z G , Z GE , Z GI , and Z GEI are the respective incidence matrices related to each vector. The significance of fixed effects was tested using the Wald test implemented in the ASReml R package, while the significance of random effects was assessed by Likelihood Ratio Test (LTR) from the asremlPlus R package [45]. The variance components by treatment were estimated through reduced models disregarding the inoculation effect and its interaction with genotype. Broad-sense heritabilities were estimated as where the s 2 G is the genetic variance; s 2 GE is the genotype-byyear variance; s 2 ε is the error variance; j and r are the number of years and replications in each experiment, respectively.
Genotypic data
The Affymetrix 1 Axiom 1 Maize Genotyping Array [46] of 616,201 SNPs markers was used to genotype the parental inbred lines. Markers with call rate <95% and heterozygous loci on at least one individual were removed. Remaining missing data were imputed through the algorithms of Beagle 4.0 using the codeGeno function from the Synbreed R package [47]. The hybrid genotypes were obtained in silico from the genotypes of the corresponding parental inbred lines. After that, one more filter was applied to the matrix, eliminating SNPs with Minor Allele Frequency (MAF) � 0.05. A final SNP set of 59,215 was obtained and used for the subsequent analyses.
GWAS analyses
Marker-trait association analyses were performed for the traits with significant inoculation effect. For these traits, the adjusted means for each hybrid were calculated by treatment (inoculated and non-inoculated), separately, considering the following model: where y is the vector of phenotypic observations of the traits evaluated on maize hybrids; β E is the vector of fixed effects of year; β B is the vector of fixed effects of block within year; β C is the vector of fixed effects of countertop within block and year; β G is the vector of fixed effects of the genotype; β GE is the vector of fixed effects of genotype × year interaction; and ε is the vector of errors, where ε e N 0; s 2 ε À � . X E , X B , X C , X G , and X GE are the respective incidence matrices for each vector. Density and box plots were used to compare the means between both treatments. In addition, the changes due to A. brasilense inoculation on the hybrid traits were calculated by Δ = M1−M2, where M1 is the adjusted mean under N stress plus A. brasilense and M2 is the adjusted mean under N stress.
Population structure was estimated by principal component analysis (PCA) using the genomic matrix through the SNPRelate R package [48]. The GWAS analyses were conducted by a Fixed and Random Model Circulating Probability Unification method thought the FarmCPU R package [49]. This statistical procedure considers the confounding effect between the testing marker and both kinship (K) and population structure (Q) as covariates to minimize the problem of false positive and false negative SNPs. The FarmCPU R package uses the FaST-LMM algorithm to calculate the K from selected pseudo-QTNs (Quantitative Trait Nucleotides) and not from the total SNP set, as the standard K. The threshold values were calculated by the p. threshold function of FarmCPU. This permutes the phenotypes to break any spurious relationship with the genotype. After obtaining a vector of the minimum p-values of each experiment, the 95% quantile value of the vector is recommended for p.threshold. Finally, quantile-quantile (Q-Q) plots were used to verify the fitness of the model, considering population structure and kinship as factors.
The additive and heterozygous (dis)advantage models were applied in GWAS analyses by using specific encodings for the SNP matrix. Concerning the additive SNP effect with two alleles (A 1 and A 2 ), the SNP matrix was coded by 0 (A 1 A 1 ), 1(A 1 A 2 ), and 2 (A 2 A 2 ), considering the A 2 as the minor allele. In this context, the additive GWAS model assumes there is a linear change in the phenotype regarding the minor allele number of copies. On the other hand, in the heterozygous (dis)advantage GWAS model, the homozygous genotypes (A 1 A 1 or A 2 A 2 ) were assumed to have the same effect, while the heterozygous genotypes a different one, implying an increase or decrease of the effect on the trait. Therefore, the SNP matrix was coded by 0 (A 1 A 1 ), 1 (A 1 A 2 ), and 0 (A 2 A 2 ) [33,34]. Box plots were then used to show the phenotype values by genotypes of the SNPs significantly associated with the traits.
The average linkage disequilibrium (LD) in the hybrid panel was investigated using the square allele frequency correlation coefficient r 2 between all pairs of SNPs across the chromosomes using PLINK v.1.9 software [50]. The extension of LD decay was verified by plotting the r 2 values against the physical distance of the SNPs. Moreover, the heterozygosity by hybrid and by SNP marker was estimated dividing the number of heterozygous loci by the total of SNP markers and maize genotypes, respectively.
Identification of candidate genes
The candidate genes associated with the significant SNPs were obtained from the B73 genome reference (version 4) in the MaizeGDB genome browser (https://www.maizegdb.org/). Complementary information was collected from the U.S. National Center for Biotechnology Information (http://www.ncbi.nlm.nih.gov/) and the Universal Protein Resource (http://www.uniprot.org/). Venn diagrams were constructed to summarize the number of candidate genes identified using the VennDiagram R package [51]. In addition, the sequences of the candidate genes were categorized functionally by Gene Ontology (GO) terms [52], disregarding those with hypothetical function. The terms were obtained using the Blast2GO software with the default parameters specified by the program [53] and were previously simplified using the GO Slim feature.
The phenotypic effect of A. brasilense inoculation on the maize hybrids
Significant phenotypic differences among the 118 maize hybrids were observed for all traits evaluated, except PH and SDM (S1 Table). Furthermore, the genotypic performance for RDM, RV, RAD, SRL, SRSA, and RSR were affected significantly by the inoculation with A. brasilense; thus, only these traits were considered for subsequent analyses. In general, higher broadsense heritabilities were found under inoculated treatment than non-inoculated (S1B Fig).
Regarding the density distribution of the adjusted means for all traits, larger phenotypic variances were found in the inoculated condition compared to the non-inoculated ( Fig 1A). Overall, the inoculation increased the RDM, RV, RAD, and RSR while the opposite was observed for SRL and SRSA. Concerning the change due to inoculation (Δ) for all traits, a distribution close to normal was observed ( Fig 1B). In this sense, most of the evaluated hybrids showed low responsiveness to A. brasilense. Moreover, a considerable portion of the genotypes showed negative responsiveness to A. brasilense; that is, a worse performance than the non-inoculated. The correlations between the ΔRDM and ΔRV with ΔRSR were 0.41 and 0.35, respectively ( Fig 1C).
Population structure and LD decay
The genetic structure of the hybrid panel was accessed by PCA using 59,215 SNP markers ( Fig 2A). The first two PCs captured a small percentage of the total variance (20.8%). In addition, the individuals had a wide distribution throughout the projection space, which indicates a weak structure among the genotypes. Moreover, a rapid decline in LD was observed ( Fig 2B), with 121.7 kb extent when r 2 reached 0.23 (half the maximum value). The average heterozygosity of hybrids was 0.32, ranging from 0.03 to 0.38 with most of the individuals presenting around 0.35 ( Fig 2C). The low values found for some individuals indicate that some inbred lines used in the diallel crosses were of high genetic similarity. For the heterozygosity of markers, this value was also 0.32, varying from 0.10 to 0.61 ( Fig 2D).
Marker-trait associations
The additive and heterozygous (dis)advantage GWAS models were used to dissect the genetic basis of the traits RDM, RV, RAD, SRL, and SRSA under N stress and N stress plus A.
brasilense conditions, since for these traits the genotypes showed a differential performance due to the inoculation effect. Only the genetic relatedness (K matrix) was used as a covariate in all GWAS analyses.Thus, the population structure information was not included due to the increase in the deviation from the expected p-values showed by Q-Q plots (not presented). Furthermore, based on the LD decay for this hybrid panel, the gene annotation was performed within a 50 kb sliding window around each significant SNP.
Concerning the additive GWAS model, eight significant SNP-trait associations were revealed in the maize hybrids evaluated under N stress plus A. brasilense (Table 1 and S2 and S6A Figs). In general, at least one candidate gene was identified for each trait, which was located on the chromosomes 2, 4, 6, 7, and 9. In addition, using the same model but for Nstress treatment, one significant association was detected for each trait, totaling five candidate genes, which were located on chromosomes 2, 5, and 6 ( Table 1 and S3 and S6B Figs). However, for chromosome 5, position 149998432, no candidate gene was found within the window considered. The results for RSR in both treatments were disregarded due to poor adjustment with the expected values shown by the Q-Q plot.
Two candidate genes identified in the inoculated treatment were similar to those identified under N-stress treatment, but for different traits. In this sense, the candidate genes Zm00001d013098 and Zm00001d005892 were related to RAD and SRL under A. brasilense treatment, and to RDM and RAD under non-inoculated treatment, respectively.
In total, 47 significant SNP-trait associations were found, where 25 were related to traits under N stress plus A. brasilense and 22 to N stress. Regarding the models, 13 significant associations were identified by using the additive GWAS model and 34 the heterozygous (dis) advantage model. There was no candidate gene shared between them (Fig 3A). Finally, the nature of the SNP effect on the traits, positive or negative, was independent of the treatment or GWAS model (S6, S7, and S8 Figs). The categorization of candidate genes sequences according to biological process using the Blast2GO software showed that just one biosynthetic category was present in all treatments ( Fig 3B). Moreover, in general, the candidate genes found by additive GWAS model tended to be mainly enriched for terms such as "DNA metabolic process" and "lipid metabolic process". In turn, those found by the heterozygous (dis)advantage model showed more exclusive biological functions; for example, "catabolic process", "cellular component organization", "response to stress", and "secondary metabolic process". Comparing the inoculated and non-inoculated treatments, a different pattern of categorization was seen between them, especially for the candidate genes found by the heterozygous (dis)advantage model.
Genotypic variation of maize to A. brasilense under nitrogen stress
One of our aims was to evaluate the genetic variability of the responsiveness of maize hybrids to the inoculation with the PGPB A. brasilense and the genetic control of related traits to this effect. The few studies that have reported the differential responsiveness among maize genotypes to A. brasilense inoculation were based on a smaller number of hybrids or inbred lines [5,6,54]. Moreover, as far as we know, our report has evaluated the largest number of maize genotypes for their association with PGPB.
In general, Azospirlillum spp. promotes several benefits and changes in maize, including phytohormone production of auxins, cytokinins, and gibberellins [55,56], plant growth and yield [35,55], contents of secondary metabolites [57], photosynthetic potential [1], anatomical pattern (e. g. metaxylem vessel elements) and architecture of roots [31,58], N 2 fixation [6], fertilizer-N recovery [59], tolerance of abiotic stresses (e. g. N limitation and drought conditions) [55,60]. In this work, the inoculation of A. brasilense under N stress promoted significant change in maize performance for six root-related traits: RDM, RV, RAD, SRL, SRSA, and RSR. Some studies have also shown the positive effect of the inoculation of Azospirillum spp. on RDM, RV, and the promotion of thinner root growth [55,61,62], but our study reports for the first time the response in SRL and SRSA.
Our results did not show pronounced differences among the distributions of adjusted means of the hybrids under N stress and N stress plus A. brasilense. However, we observed a significant variation in the delta (the difference between inoculated and non-inoculated treatments), including some of the maize hybrids showing negative effects on the traits due to the inoculation. This result shows that adding only one PGPB to the microbiome is enough to expand the range of maize plant responses under low-N stress. This may be because microbes alter the plant functioning and confer different characteristics to the host plant, reinforcing the emerging idea of the holobiont being a unit of selection that possess a larger variability to be explored in plant breeding [63][64][65].
Studies reporting a decrease in the phenotypic traits of host plants due to the inoculation with PGPBs, such as A. brasilense, are not common in the literature [66]. One possibility is that the genotypes with a negative response to inoculation can have more unfavorable alleles related to the association with A. brasilense. For example, triggering plant defense responses incurs an energetic cost [67], which may lead to a reduction in resources to root system development, causing a worse growth than only the N-stress condition would already entail. In addition, similarly to the plant-endophyte interactions, the "balanced antagonism theory" applies to the plant-PGPB relationships [68,69]; then, phenotypic plasticity in the host plants may vary from mutualism to antagonism depending on the plant genotype, the environmental conditions, and the bacterial strain.
Another explanation for the negative responsiveness is because the effect of A. brasilense on the plant can vary according to the concentration of the inoculant [66,70]. In general, plant hormones are stimulatory only at certain concentrations, which should not exceed the stimulatory threshold specific to each plant genotype [71]. The higher concentration of A. brasilense in the root environment may increase the release of plant hormones that consequently inhibit root growth [66]. Thus, considering the number of genotypes evaluated, the concentration of the inoculant used in our experiment could be unfavorable for some of them, even at the recommended dose.
On the other hand, the reduction in root traits due to inoculation would not necessarily be a negative factor for the plant. Under abiotic stress conditions, such as low N supply and drought, high root/shoot ratios are common [72,73]. In this sense, we found moderate positive correlations between the ΔRDM and ΔRV root traits and ΔRSR, which indicates that under A. brasilense inoculation some plant genotypes could reduce the investment in root growth in order to allocate it to shoot development. However, further studies are needed to better understand the influence of inoculation with this PGPB on the distribution of dry matter between roots and shoots.
The continuous phenotypic variation and the moderate estimates of heritability for the traits related to the maize responsiveness to A. brasilense suggest the influence of several genes of small effect and a strong environmental influence. In summary, these results reinforce the complex PGPB × plant × environment interactions. Furthermore, they show the possibility of improving plants to be more efficient by the association with PGPB.
Candidate genes related to the maize responsiveness to A. brasilense
To the best of our knowledge, this is the first report employing GWAS to assess the genetic architecture of the association of maize with A. brasilense. Several candidate genes related to the maize responsiveness to A. brasilense were detected. Considering the panel size used in our study, possibly the power of our GWAS analyses has been low and only theose SNPs with more greater effect have been identified [74]. Korte & Farlow [75] suggest that a way to mitigate the small sample effect is to account for large phenotypic variability. Thus, as we used hybrids rather than inbred lines, a series of different allelic combinations can occur, increasing the genetic variants with heterozygous loci and thereby allowing the finding of better results in GWAS analysis [76]. This is reflected in the number of significant SNPs identified by the heterozygous (dis)advantage model, which was about three times higher than the additive model. Consequently, given the high number of candidate genes found, we focused our discussion mainly on those with functions more related to the treatments of this study.
It is known that the colonization of host plants by beneficial microbes depends on their ability to manipulate defense-related pathways [4]. In this study, the candidate gene Zm00001d051881 (additive model) was found, which codes the protein Binding to ToMV RNA 1 (BTR1). This is involved in the defense against Tomato Mosaic Virus (TOMV) RNA, with possible indirect effects on the host's innate immunity [77]. In addition, Zm00001d052221 (additive model) codes the tetratricopeptide repeat (TPR)-like protein superfamily, which is determinant for the transduction of signals mediated by plant hormones and able to activate the plant's defense response. For example, TPR is related to the quantitative resistance of soybean to Fusarium graminearum [78]. Another candidate gene is the ethylene-responsive transcription factor ERF109 (Zm00001d005892, additive model), which besides being involved in ethylene-activated abiotic stress responses [79], induces the expression of defense-related genes promoting a positive modulation of the response against pathogen infections [80]. Gene Zm00001d029115, identified for two traits using the heterozygous (dis)advantage model, codes the protein strictosidine synthase-like, known to play a key role in the alkaloid biosynthesis pathway. These chemical compounds function as protection against pathogenic microorganisms and herbivorous animals. In addition, the improvement in alkaloid content in the roots has been observed with A. brasilense inoculation in medicinal plants [81], but there are no reports about its induction in cereal crops.
The modulation of plant hormones and related signaling pathways by A. brasilense are aspects also frequently reported [2,61]. For example, we found Zm00001d013098 (additive model) corresponding to Aldehyde oxidase 2, which is a key enzyme in the final step of abscisic acid (ABA) biosynthesis. In addition, it performs the final catalytic conversion of indole-3-acetaldehyde (IAAld) in indole-3-acetic acid (AIA) in different tryptophan-dependent auxin biosynthesis pathways [82]. Moreover, we found the candidate gene for 12-oxophytodienoic acid reductases (Zm00001d037182, heterozygous model), which are key enzymes in the control of jasmonate (JA) biosynthesis in plants such as maize [83] and wheat [83]. Among other functions, this phytohormone orchestrates defense and growth responses [84].
Some studies show modulation of the induction and emission of plant volatiles by plantassociated microorganisms, including PGPBs and Rhizobia [85,86]. In turn, these chemicals have an important role, especially on the induction of resistance in plants against insects and pathogens [87,88]. We found the Zm00001d046604 candidate gene (additive model) corresponding to (Z)-3-hexen-1-ol acetyltransferase. This enzyme is involved in the green leaf volatile biosynthetic process that is derived from the lipoxygenase pathway [89]. In agreement with this finding, A. brasilense negatively affects the attraction of the pest insect Diabrotica speciose to maize by inducing higher emissions of the volatile (E)-β-caryophyllene. Therefore, the validation of this candidate gene in further studies could help to understand better the role of plant defense against pests induced by A. brasilense.
Regarding the candidate genes related to abiotic stress mitigation, we found Zm00001d020747 (additive model), encoding Aquaporin TIP4-1. Under N deficiency, this plant transporter is up-regulated in Arabidopsis [90] and it is induced by rhizobial and arbuscular mycorrhizal fungi symbiosis [91]. In both cases, its function is related to the N delivery between plant compartments.
We found a candidate gene directly involved in plant root growth encoding hydroxyproline-rich glycoprotein family protein (Zm00001d006108, additive model), a protein family from plant cell walls classified as arabinogalactan-proteins (AGPs), extensins (EXTs), and proline-rich proteins (PRPs). It plays a key role in several processes of plant development such as root elongation and root biomass, especially under stress conditions [92]. Additionally, AGPs are exuded in the rhizosphere and help in communicating with soil microbes, participate in the signaling cascade modulating the plant's immune system, and are required for root colonization by symbiotic bacteria [93]. Another, LOC103636767 (heterozygous model), corresponds to formin-like protein 20, which is involved in cytoskeleton movement and secondary cell wall formation [94].
The major part of N in the leaf is allocated to the chloroplast proteins, and deficiency in this nutrient leads to a reduction in photosynthetic efficiency [95]. The Zm00001d035859 candidate gene (additive model) found in our study is related to the Plastocyanin homolog 1, a protein involved in the transfer of electrons in the photosystem I. In accordance with this result, the inoculation of the PGPB Burkholderia sp. in Arabidopsis thaliana leads to modification in the expression of this protein [96]. Moreover, it is involved in the response of maize to N deficiency [97].
Although the candidate genes found for the N-stress treatment were not the main focus of this study, many of them have been previously described due to their direct or indirect relation to plant responses to abiotic stress conditions. The LOC109941493 gene (heterozygous model) encodes the plasma membrane ATPase 2-like, this being an ion pump in the plant cell membrane important for root growth and architecture under different nitrogen regimes [98]. In addition, Zm00001d006722 (heterozygous model) is related to arabinosylation of extensin proteins that contribute to root-cell hair growth, these being specialized in the absorption of nutrients [99]. The Zm00001d013098 and Zm00001d038300 genes (additive model) corresponding to Aldehyde oxidase2 and Ethylene-responsive transcription factor ERF109, respectively, were the only candidate genes shared between both treatments. Their functions described above, related to ABA and AIA biosynthesis and the ethylene-activated signaling pathway, are also frequently reported in N availability and hormone interactions [100,101]. Moreover, this suggests that the regulation and signalization of these hormones in the plant can be involved in the cross talk between A. brasilense and N stress in maize. Therefore, besides these results indicating that the stress applied in our experiment was effective, they also could be helpful in further studies to understand better the genetic control of root traits under N stress in the early stages of plant development for improving tolerance in maize.
Some of the candidate genes found by the heterozygous (dis)advantage GWAS model were identified for more than one trait; this was not observed using the additive model. For these, the effect on phenotypes was always in the same direction; for example, the candidate gene Zm00001d029115 (protein strictosidine synthase-like) had a negative effect on both RDM and RV. Possibly, this occurred because these pleiotropic candidate genes were only found between RDM, RV, and RAD traits which showed a positive correlation among them.
Additive and heterozygous (dis)advantage GWAS models
GWAS analyses using non-additive models are common in human and animal studies [102][103][104]. However, few studies have been reported using plant species [21,22]. In our study, most of the significant SNPs were identified by heterozygous (dis)advantage GWAS analyses and none were detected by the additive model, which demonstrates how important it is to study the nonadditive effects on the genetic variability of maize responsiveness to both A. brasilense and N stress. This was also evident through the results of GO terms, where an increase in exclusive biological functions was verified. This indicates that the PGPB provide the plant with a broader spectrum of internal activities, which may be an advantage for growth in stressful environments, such as N deficiency, with possible consequences in plant evolutionary potential.
Furthermore, our results showed that heterozygous genotypes can have advantages or disadvantages on the root traits (both treatments) depending on the allelic combinations that are formed by the parental crossing. Thus, the strategy of use of SNP-trait associations found by heterozygous loci in breeding programs depends on the effect of the heterozygous genotype. This is a challenge to plant breeders because during hybrid development the allele combination should be predicted by parental selection in order to benefit its association with PGPB. In this sense, further studies underlying these candidate genes are required to understand better the biological mechanisms of heterotic performance, in comparison to homozygous, in the presence of these PGPB. For those providing an advantage, the alleles should be improved separately in different heterotic groups for their subsequent combination in the mating process. On the other hand, when the heterozygous genotypes are a disadvantage, one or other allele should be improved simultaneously in both heterotic groups in order to obtain homozygous genotypes in hybrids.
Conclusions
Our study modeling additive and heterozygous (dis)advantage effects in GWAS analyses revealed 25 candidate genes for the responsiveness of maize to A. brasilense, with key roles particularly in plant defense, hormonal biosynthesis, signaling pathways, and root growth providing insights into their complex genetic architecture. In this context, non-additive effects contribute substantially to the maize phenotypic variation in response to the inoculation and are related to a wider spectrum of biological functions. Together, these findings allow to be started the marker-assisted selection and genome editing in breeding programs for the development of maize hybrids that can take advantage of this association more efficiently. Finally, our results also represent a benchmark in the identification of homologous genes in important related species, such as rice and wheat, besides advancing the understanding of the genetic basis of plant-PGPB interactions.
Supporting information S1 Table. Phenotypic data analyses. Wald Test for fixed effects and Likelihood Ratio Test for random effects from the joint diallel analysis of 118 maize hybrids evaluated under N stress and N stress plus Azospirillum brasilense treatments. | 7,450.8 | 2019-09-19T00:00:00.000 | [
"Biology",
"Environmental Science",
"Agricultural and Food Sciences"
] |
A Direct Feedback FVF LDO for High Precision FMCW Radar Sensors in 65-nm CMOS Technology
A direct feedback flipped voltage follower (FVF) LDO for a high-precision frequency-modulated continuous-wave (FMCW) radar is presented. To minimize the effect of the power supply ripple on the FMCW radar sensor’s resolution, a folded cascode error amplifier (EA) was connected to the outer loop of the FVF to increase the open-loop gain. The direct feedback structure enhances the PSRR while minimizing the power supply ripple path and not compromising a transient response. The flipped voltage follower with a super source follower forms a fast feedback loop. The stability and parameter variation sensitivity of the multi-loop FVF LDO were analyzed through the state matrix decomposition. We implemented the FVF LDO in TSMC 65 nm CMOS technology. The fabricated FVF LDO supplied a maximum load current of 20 mA with a 1.2 V power supply. The proposed FVF LDO achieved a full-spectrum PSR with a low-frequency PSRR of 66 dB, unity-gain bandwidth of 469 MHz, and 20 ns transient settling time with a load current step from 1 mA to 20 mA.
Introduction
Starting from military equipment, the FMCW radar sensor has broadened its application to an autonomous vehicle, a 3D imaging system, and a weather forecast. At the same time, the power management has become an integral part of the FMCW transceiver. To ensure the spatial and range resolution of the FMCW radar sensor, the power management circuit must supply stable and isolated supply voltages to each sensitive block, such as the PLL, mixer, and ADC [1][2][3][4][5][6][7][8][9][10]. With sawtooth modulation with T m = 2 ms, the time delay (τ) and the beat frequency (f b ) for the frequency-modulated received signal from a target at a distance of R is given as With K f of 500 GHz/s and a target range of 180 m, the maximum beat frequency is 600 kHz. Thus, the LDO should reject the low-frequency ripple from the supply to prevent it from degrading the phase noise of the PLL, which is the frequency modulation signal source. Moreover, even the power supply ripple of the frequency higher than the ADC sampling frequency may fold into the ADC in-band. Hence, it is essential for the LDO to reject a wide range of the power supply ripple, especially at the low-frequency range. We noticed that the FMCW frequency hopping approach [11] required an LDO to respond rapidly to the transient load variation. This is because the current consumption of the PLL changes relatively rapidly with the frequency hopping.
In order to achieve a high PSR across a wide frequency range, various analog circuit techniques have been introduced. A feedforward ripple cancellation achieves a high PSR by combining a feedback and feedforward signal path [12][13][14][15][16]. A bandgap reference (BGR) recursive configuration [17] and an output-supplied voltage reference [18] have been proposed to reduce the effect of a non-ideal PSR of the bandgap reference. A multi-loop structure [19][20][21][22][23] has been introduced to boost the unity-gain bandwidth and the transient response in various configurations. The flipped voltage follower (FVF) LDO [24] has become one of the most popular analog LDO approaches for the last decade. The FVF LDO has a local feedback loop that reduces output resistance. In addition, an independent control voltage generator can provide an adequate control voltage for the control transistor. However, the transient time of the local feedback loop is relatively slow due to the large pass transistor, and the unity-gain bandwidth of the LDO has been limited. A tri-loop FVF LDO with buffered FVF was proposed to achieve full-spectrum PSR and fast response time in [25]. Although additional loops through a tri-input EA provided more loop gain, the resulting low-frequency PSR was not sufficiently improved. A dual-loop FVF LDO was reported to provide full-spectrum PSR with high low-frequency PSR in [26]. As the control voltage regulating loop was removed, it created another power supply ripple path through the inverting stage, which necessitated an auxiliary LDO.
In this paper, a direct feedback FVF LDO was proposed. By constructing an error amplifier (EA) that directly controls the FVF local loop, the FVF LDO can eliminate the power supply ripple path, resulting in a high PSRR without the need for additional components. A local FVF loop with a super source follower realizes a fast transient response with a unity-gain bandwidth of 469 MHz, and an outer loop incorporating folded cascode EA enhanced a low-frequency PSR to 66 dB. State matrix decomposition [27] was applied to analyze the stability and parameter sensitivity of a multi-loop FVF LDO.
This paper is organized as follows. Section 2 introduces the proposed direct-feedback LDO. The PSRR and stability analysis of the FVF LDO was also presented. State matrix decomposition [27] was employed to analyze the stability and parameter sensitivity of the multi-loop FVF LDO. Section 3 shows the experimental result with a fabricated FVF LDO, and Section 4 follows with a conclusion. Figure 1 shows a schematic diagram of the proposed LDO regulator. The LDO consisted of a unity-gain buffer, an error amplifier (EA), an output capacitor, and transistors, M pass , M 1 , and M 2 . M pass , M 1 , and M 2 formed a flipped voltage follower. Fast and weak shunt-shunt feedback loop 1 in the flipped voltage follower enables the fast response of the LDO. The output of the error amplifier, V SET , sets the input level of the flipped voltage follower. The input of the EA was connected to the reference input (V REF ), and V OUT formed another feedback loop 2. This dramatically enhanced the open loop gain of the overall loop. Since V OUT was directly fed back into EA and the inverting stage was removed, we can eliminate the power supply ripple path without the need for an additional component. To enhance the transient performance, we needed to make the dominant pole of the fast loop 1 located at the output node. The output capacitor, C L , was connected to the output of the LDO to make the output node of the LDO dominant pole, and the capacitor, C 1 , was connected to the output of the error amplifier to stabilize loop 2. An additional compensation capacitor, C 2 , was enabled by a start-up pulse generator to guarantee more phase margin during the start-up situation. The unity-gain buffer was to drive the large power transistor, M pass . The size of the transistors, the capacitor values, and the load current (I L ) values are listed in Table 1.
Fast Loop 1 Analysis
At higher frequencies where loop 2 did not work, only loop 1 worked. Without loop 2, the LDO simply had the flipped voltage follower (FVF) used as the power stage. The proposed LDO without loop 2 is shown in Figure 2a. The input V SET sets the output voltage of the FVF, and any interference or noise in the V IN works as a disturbance for the system. The series-shunt feedback structure reduced the output impedance of the system, enabling a high-frequency operation. The noise or interference from the power source was reduced by the internal feedback loop. To perform the PSRR analysis of the proposed LDO, we established a small-signal block diagram of the LDO. The block diagram is shown in Figure 2b. The V SET works as a reference input of the FVF, and any interference or noise in V IN was a disturbance for the system. The open-loop gain and output of LDO is where g m1 is the transconductance of M 1 , r o1 and r o2 are the output resistance of M 1 and M 2 , respectively, C A is capacitance seen at node A, ω n is the natural frequency of the super source follower, ζ is the damping factor of the super source follower, g mP is the transconductance of the pass transistor, R L is the load resistance, r oP is the output resistance of the pass transistor, and C OUT is the capacitance seen at the output node. Supply noise is reduced approximately by G A at high frequency. The bandwidth of the super source follower was boosted due to the internal feedback structure, and the pole at node A was also at high frequency, as M 1 and M 2 were small. The output capacitor, C L , was set such that the pass transistor, M Pass , was the slowest working component, and the dominant pole of the controller gain, G A and G SSF , were placed at a higher frequency. Therefore, loop 1 suppressed the supply noise through a wide frequency range. The supply noise at a higher frequency was absorbed by the large C L . proposed LDO without loop 2 is shown in Figure 2a. The input VSET sets the output voltage of the FVF, and any interference or noise in the VIN works as a disturbance for the system. The series-shunt feedback structure reduced the output impedance of the system, enabling a high-frequency operation. The noise or interference from the power source was reduced by the internal feedback loop. To perform the PSRR analysis of the proposed LDO, we established a small-signal block diagram of the LDO. The block diagram is shown in Figure 2b. The VSET works as a reference input of the FVF, and any interference or noise in VIN was a disturbance for the system. The open-loop gain and output of LDO is where gm1 is the transconductance of M1, ro1 and ro2 are the output resistance of M1 and M2, respectively, CA is capacitance seen at node A, ωn is the natural frequency of the super source follower, ζ is the damping factor of the super source follower, gmP is the transconductance of the pass transistor, RL is the load resistance, roP is the output resistance of the pass transistor, and COUT is the capacitance seen at the output node. Supply noise is reduced approximately by GA at high frequency. The bandwidth of the super source follower was boosted due to the internal feedback structure, and the pole at node A was also at high frequency, as M1 and M2 were small. The output capacitor, CL, was set such that the pass transistor, MPass, was the slowest working component, and the dominant pole of the controller gain, GA and GSSF, were placed at a higher frequency. Therefore, loop 1 suppressed the supply noise through a wide frequency range. The supply noise at a higher frequency was absorbed by the large CL.
Slow Loop 2 Analysis
The folded cascode amplifier can drastically improve the closed-loop gain. Since V OUT was directly fed back into the EA and the inverting stage was removed, we could eliminate the power supply ripple path without the need for an additional component. Figure 3 shows the loop 2 feedback path. Breaking the loop at V SET gives where G EA is the voltage gain of the folded cascode amplifier. The PSRR is boosted approximately by G EA . Loop 1 is a unity-gain feedback network seen at node V SET , and the unity-gain bandwidth of loop 1 was far beyond that of the EA. Hence, we simply needed to compensate for the folded cascode EA. The folded cascode amplifier can be stabilized simply by adding the compensation capacitor, C 1 , to the output of the amplifier.
Slow Loop 2 Analysis
The folded cascode amplifier can drastically improve the closed-loop gain. Since VOUT was directly fed back into the EA and the inverting stage was removed, we could eliminate the power supply ripple path without the need for an additional component. Figure 3 shows the loop 2 feedback path. Breaking the loop at VSET gives where GEA is the voltage gain of the folded cascode amplifier. The PSRR is boosted approximately by GEA. Loop 1 is a unity-gain feedback network seen at node VSET, and the unity-gain bandwidth of loop 1 was far beyond that of the EA. Hence, we simply needed to compensate for the folded cascode EA. The folded cascode amplifier can be stabilized simply by adding the compensation capacitor, C1, to the output of the amplifier.
Overall Loop Analysis
Loop 1 and Loop 2 formed a combined global loop. The global loop had the largest closed-loop gain, making it critical for the phase margin design. Figure 4 shows the combined diagram of loop 1 and loop 2. By breaking the loop at the node VG, the output voltage is expressed as
Overall Loop Analysis
Loop 1 and Loop 2 formed a combined global loop. The global loop had the largest closed-loop gain, making it critical for the phase margin design. Figure 4 shows the combined diagram of loop 1 and loop 2. By breaking the loop at the node V G , the output voltage is expressed as Here, the open-loop gain had a dominant pole at the output of the EA, and the second pole was at the output of the LDO. The (1 + G EA ) term in (11) made a quadratic zero near the unity-gain bandwidth of the EA. This zero was set to cancel out the second pole, which was below the unity-gain bandwidth of the LDO. It was noted that the LDO would be unstable without this zero. As a result, the (1 + G EA ) term boosted the unity-gain bandwidth of the LDO. Figure 5 shows the phase margin simulation result. The unitygain bandwidth of loop 1 was 507 MHz, and the phase margin was 37.3°. The unity-gain bandwidth of the loop 2 was 31.2 MHz, and the phase margin was 63.6°. The unity-gain bandwidth of the overall loop was 469 MHz, and the phase margin was 44.1°.
Effect of Non-Ideal PSRR of Each Component
There was more than one power supply ripple path in the FVF LDO. Circuit blocks with a non-ideal PSRR can provide an additional path for the power supply ripple. Figure 6 shows the effect of non-ideal components on PSRR. With the simplified model, the output of the LDO is given as Here, the open-loop gain had a dominant pole at the output of the EA, and the second pole was at the output of the LDO. The (1 + G EA ) term in (11) made a quadratic zero near the unity-gain bandwidth of the EA. This zero was set to cancel out the second pole, which was below the unity-gain bandwidth of the LDO. It was noted that the LDO would be unstable without this zero. As a result, the (1 + G EA ) term boosted the unity-gain bandwidth of the LDO. Figure 5 shows the phase margin simulation result. The unity-gain bandwidth of loop 1 was 507 MHz, and the phase margin was 37.3 • . The unity-gain bandwidth of the loop 2 was 31.2 MHz, and the phase margin was 63.6 • . The unity-gain bandwidth of the overall loop was 469 MHz, and the phase margin was 44.1 • . Here, the open-loop gain had a dominant pole at the output of the EA, and the second pole was at the output of the LDO. The (1 + G EA ) term in (11) made a quadratic zero near the unity-gain bandwidth of the EA. This zero was set to cancel out the second pole, which was below the unity-gain bandwidth of the LDO. It was noted that the LDO would be unstable without this zero. As a result, the (1 + G EA ) term boosted the unity-gain bandwidth of the LDO. Figure 5 shows the phase margin simulation result. The unitygain bandwidth of loop 1 was 507 MHz, and the phase margin was 37.3°. The unity-gain bandwidth of the loop 2 was 31.2 MHz, and the phase margin was 63.6°. The unity-gain bandwidth of the overall loop was 469 MHz, and the phase margin was 44.1°.
Effect of Non-Ideal PSRR of Each Component
There was more than one power supply ripple path in the FVF LDO. Circuit blocks with a non-ideal PSRR can provide an additional path for the power supply ripple. Figure 6 shows the effect of non-ideal components on PSRR. With the simplified model, the output of the LDO is given as
Effect of Non-Ideal PSRR of Each Component
There was more than one power supply ripple path in the FVF LDO. Circuit blocks with a non-ideal PSRR can provide an additional path for the power supply ripple. Figure 6 shows the effect of non-ideal components on PSRR. With the simplified model, the output of the LDO is given as where PSR SSF is the power supply rejection of the super source follower, PSR A is the power supply rejection of the FVF stage, and PSR EA is the power supply rejection of the folded cascode amplifier. The PSRR of the FVF stage and EA should be as low as possible. On the other hand, the super source follower with a poor PSRR helps the LDO reject the power supply ripple by working as a feedforward path.
where PSRSSF is the power supply rejection of the super source follower, PSRA is the power supply rejection of the FVF stage, and PSREA is the power supply rejection of the folded cascode amplifier. The PSRR of the FVF stage and EA should be as low as possible. On the other hand, the super source follower with a poor PSRR helps the LDO reject the power supply ripple by working as a feedforward path.
Stability Analysis of Proposed LDO
Since the proposed LDO has two feedback loops, state matrix decomposition [27] must be more suitable for analyzing the stability than a classical open-loop ac analysis. Without looking at each loop separately, the closed-loop analysis gives a state space model as The detailed closed-loop analysis is shown in the Appendix A. The LDO is asymptotically stable when all the real parts of the eigenvalues of matrix A are negative. The eigenvalues are given as λ 1 = -5.543 · 10 9 + j4.612 · 10 9 λ 2 = -5.543 · 10 9 − j4.612 · 10 9 λ 3 = -1.414 · 10 9 + j4.342 · 10 9 (16)
Stability Analysis of Proposed LDO
Since the proposed LDO has two feedback loops, state matrix decomposition [27] must be more suitable for analyzing the stability than a classical open-loop ac analysis. Without looking at each loop separately, the closed-loop analysis gives a state space model as The detailed closed-loop analysis is shown in the Appendix A. The LDO is asymptotically stable when all the real parts of the eigenvalues of matrix A are negative. The eigenvalues are given as λ 1 = −5.543 × 10 9 +j4.612 × 10 9 λ 2 = −5.543 × 10 9 − j4.612 × 10 9 λ 3 = −1.414 × 10 9 +j4.342 × 10 9 λ 4 = −1.414 × 10 9 − j4.342 × 10 9 λ 5 = −3.444 × 10 8 +j2.297 × 10 8 λ 6 = −3.444 × 10 8 − j2.297 × 10 8 (16) Since all the eigenvalues have negative real parts, the LDO was asymptotically stable. The parameters used in the analysis are given in Table 2. The parameters were extracted from the circuit simulation results, including parasitics. Figure 7 compares the PSRR simu- λ 4 = -1.414 · 10 9 − j4.342 · 10 9 λ 5 = -3.444 · 10 8 + j2.297 · 10 8 λ 6 = -3.444 · 10 8 − j2.297 · 10 8 Since all the eigenvalues have negative real parts, the LDO was asymptotically stable. The parameters used in the analysis are given in Table 2. The parameters were extracted from the circuit simulation results, including parasitics. Figure 7 compares the PSRR simulation results from the circuit simulator and state space model. The state space model fits the circuit simulation result and can predict the pole/zero location of the transfer function. The red line represents the simulation result with the state space model, and the blue line represents the simulation result with Cadence Spectre. We also identified the parameter variation sensitivity by computing the real part of the critical eigenvalue with variation in each parameter. Plotting the highest real part of the eigenvalues, the circuit should follow the conditions: The red line represents the simulation result with the state space model, and the blue line represents the simulation result with Cadence Spectre. We also identified the parameter variation sensitivity by computing the real part of the critical eigenvalue with variation in each parameter. Plotting the highest real part of the eigenvalues, the circuit should follow the conditions:
Measurement Results
We implemented the LDO in TSMC 65 nm CMOS technology with an active area of 0.037 mm 2 , including a 350 pF on-chip output capacitor. Figure 9 shows a chip photograph of a fabricated FVF LDO. A 350 pF output capacitor was implemented on-chip using a MOM capacitor. We performed the on-chip probe measurements and the chip-on-board measurements. The power supply rejection ratio measurement setting is shown in Figure 10. The Analog Device ADA4870 OPAMP supplied the DC power and ac ripple at the frequency of fR to the LDO. The OPAMP was used to reduce the output impedance and combine the DC voltage with the ac ripple. A Keysight E36313A DC power supply sets the reference voltage and voltage bias for the OPAMP. A BK Precision BK4063B arbitrary signal
Measurement Results
We implemented the LDO in TSMC 65 nm CMOS technology with an active area of 0.037 mm 2 , including a 350 pF on-chip output capacitor. Figure 9 shows a chip photograph of a fabricated FVF LDO. A 350 pF output capacitor was implemented on-chip using a MOM capacitor. We performed the on-chip probe measurements and the chip-onboard measurements.
Measurement Results
We implemented the LDO in TSMC 65 nm CMOS technology with an active area of 0.037 mm 2 , including a 350 pF on-chip output capacitor. Figure 9 shows a chip photograph of a fabricated FVF LDO. A 350 pF output capacitor was implemented on-chip using a MOM capacitor. We performed the on-chip probe measurements and the chip-on-board measurements. The power supply rejection ratio measurement setting is shown in Figure 10. The Analog Device ADA4870 OPAMP supplied the DC power and ac ripple at the frequency of fR to the LDO. The OPAMP was used to reduce the output impedance and combine the DC voltage with the ac ripple. A Keysight E36313A DC power supply sets the reference voltage and voltage bias for the OPAMP. A BK Precision BK4063B arbitrary signal The power supply rejection ratio measurement setting is shown in Figure 10. The Analog Device ADA4870 OPAMP supplied the DC power and ac ripple at the frequency of f R to the LDO. The OPAMP was used to reduce the output impedance and combine the DC voltage with the ac ripple. A Keysight E36313A DC power supply sets the reference voltage and voltage bias for the OPAMP. A BK Precision BK4063B arbitrary signal generator provided the input ripple signal to the OPAMP. A Keysight B2902A SMU supplied I ref to bias the internal amplifiers and buffer. The biasing point was controlled by the SPI Module. A Keysight DSO-X oscilloscope was used to measure the input and output ripple. The PSRR was calculated using measured input and output. Figure 11 shows the PSRR measurement result. The fabricated FVF LDO achieved a full-spectrum PSR of 64.6 dB at 100 kHz and the worst measured PSRR of 10 dB at 200 MHz. generator provided the input ripple signal to the OPAMP. A Keysight B2902A SMU supplied Iref to bias the internal amplifiers and buffer. The biasing point was controlled by the SPI Module. A Keysight DSO-X oscilloscope was used to measure the input and output ripple. The PSRR was calculated using measured input and output. Figure 11 shows the PSRR measurement result. The fabricated FVF LDO achieved a full-spectrum PSR of 64.6 dB at 100 kHz and the worst measured PSRR of 10 dB at 200 MHz. The load transient measurement setting is shown in Figure 12. A Keysight E36313A was used to supply VIN and VREF to the LDO, and a Keysight B2902A was used to input IREF to bias the internal amplifiers and buffer. The load control signal was given from the BK precision BK4064B arbitrary signal generator. The load current was stepped from minimum to maximum, with an edge time of 8 ns. The load transient measurement result generator provided the input ripple signal to the OPAMP. A Keysight B2902A SMU supplied Iref to bias the internal amplifiers and buffer. The biasing point was controlled by the SPI Module. A Keysight DSO-X oscilloscope was used to measure the input and output ripple. The PSRR was calculated using measured input and output. Figure 11 shows the PSRR measurement result. The fabricated FVF LDO achieved a full-spectrum PSR of 64.6 dB at 100 kHz and the worst measured PSRR of 10 dB at 200 MHz. The load transient measurement setting is shown in Figure 12. A Keysight E36313A was used to supply VIN and VREF to the LDO, and a Keysight B2902A was used to input IREF to bias the internal amplifiers and buffer. The load control signal was given from the BK precision BK4064B arbitrary signal generator. The load current was stepped from minimum to maximum, with an edge time of 8 ns. The load transient measurement result The load transient measurement setting is shown in Figure 12. A Keysight E36313A was used to supply V IN and V REF to the LDO, and a Keysight B2902A was used to input I REF to bias the internal amplifiers and buffer. The load control signal was given from the BK precision BK4064B arbitrary signal generator. The load current was stepped from minimum to maximum, with an edge time of 8 ns. The load transient measurement result is given in Figure 13. The maximum voltage droop was 30.3 mV, and the settling time was about 16 ns. Transient load regulation was 141 µV/mA. is given in Figure 13. The line transient measurement setting was the same as the PSRR measurement setting, and the only difference was that the ripple signal, f R , was replaced with a square wave. The line transient measurement result is given in Figure 14. With the power supply voltage changing from 1.2 V to 1.4 V within 20 ns, the output voltage changed by about 25.7 mV. The settling time to the final value was about 40 ns. Table 3 summarizes the performance of the proposed FVF LDO with other state-of-theart LDOs. The proposed FVF LDO occupied a 0.037 mm 2 active area. The LDO output was 1 VDC with a supply voltage of 1.2 VDC. The maximum output current was 20 mA, and the quiescent current was 290 µA. An output capacitor of 350 pF was used. The worst-case load transient overshoot was 30.3 mV with a load current step of 8 ns edge time, and the output was settled within 16 ns. When the response time of the LDO is comparable to the edge time, the assumption in the simple response time equation [28] is no longer valid.
Assuming that the load current varies at a constant rate [29], the response time is given as Table 3 summarizes the performance of the proposed FVF LDO with other state-ofthe-art LDOs. The proposed FVF LDO occupied a 0.037 mm 2 active area. The LDO output was 1 VDC with a supply voltage of 1.2 VDC. The maximum output current was 20 mA, and the quiescent current was 290 µA. An output capacitor of 350 pF was used. The worstcase load transient overshoot was 30.3 mV with a load current step of 8 ns edge time, and the output was settled within 16 ns. When the response time of the LDO is comparable to the edge time, the assumption in the simple response time equation [28] is no longer valid.
Assuming that the load current varies at a constant rate [29], the response time is given as The shorter the response time, the better the performance is. The response time, calculated according to (18), is shown in Table 2. The response time of the LDO was 2.99 ns. Transient FoM [28] is given by where the smaller FoM represents better performance. The proposed FVF LDO achieved an FoM of 43.4 ps. The low-frequency PSRR of the FVF LDO was 66 dB, and the worstmeasured PSRR of the LDO was 10 dB at 200 MHz.
Discussion
The proposed FVF LDO was successfully implemented in 65 nm CMOS technology. The PSRR measurement results confirmed that the analytic model and simulation results corresponded quite well with the measured PSRR. Our work has demonstrated that a simple direct feedback structure could improve low-frequency PSRR without additional components. The proposed LDO operated stably with various line/load transient situations, and the output settled rapidly to the final value. For future research, current efficiency can be improved by using an efficient buffer structure or an adaptive bias scheme.
Conclusions
A direct feedback flipped voltage follower (FVF) LDO was proposed. Both the classical ac analysis and the state-space model of the LDO were performed, and the results were compared with the circuit simulations. The parameter variation sensitivity of the LDO was also investigated using the state matrix model. The local FVF loop achieved a fast response and a high unity-gain frequency, and the outer loop with the folded cascode error amplifier (EA) enhanced the low-frequency closed-loop gain. The proposed direct feedback structure had a less power supply ripple path without a complex design. Experimental results verified theoretical predictions.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Appendix A
Let X 1 = v set /K EA be a state variable, and the gain of the error amplifier is Substituting v set = K EA X 1 into (A1) and identifying the numerator and the denominator, v re f − v out = X 1 + ( 1 /ω p1 + 1 /ω p2 ) ..
Let X 3 = v a /K A be a state variable, and the gain of the error amplifier is Substituting v a = K A X 3 into (A4) and identifying the numerator and the denominator, Let X 4 = v g /K SSF be a state variable, and the gain of the super source follower is G SSF = v g v a = K SSF 1+ 2ζ /ω n s+ 1 /ω 2 n s 2 . (A6) Substituting v g = K SSF X 4 into (A6) and identifying the numerator and the denominator, v a = K A X 3 = X 4 + 2ζ /ω n . X 4 + 1 /ω 2 n .. | 7,253.4 | 2022-12-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Chitosan / Siloxane Hybrid Polymer : Synthesis , Characterization and Performance as a Support for Immobilizing Enzyme
O polímero híbrido derivado de siloxano e quitosana foi obtido pela técnica sol-gel, tendo como precursor o tetraetilortosilicato (TEOS). O suporte híbrido obtido foi modificado quimicamente com epicloridrina e utilizado para imobilizar lipase de Burkholderia cepacia. O híbrido SiO2-quitosana deu origem a uma nova estrutura macromolecular na qual as partículas inorgânicas encontramse dispersas em escala nanométrica na matriz orgânica e ligadas à matriz por meio de ligações covalentes. Foi realizado um estudo comparativo entre a lipase livre e a imobilizada quanto à influencia do pH e temperatura, parâmetros cinéticos e estabilidade térmica. O pH ótimo para a atividade máxima de hidrólise da lipase imobilizada foi de 6,1, enquanto que para a lipase livre foi de 7,0. A temperatura ótima permaneceu em 50 oC mesmo depois da imobilização. Os perfis de estabilidade térmica indicaram que o processo de imobilização foi favorável à estabilização da enzima e o derivado epóxi SiO2-quitosana foi cerca de 30 vezes mais estável que a lipase livre a 60 oC.
Introduction
Enzyme immobilization usually provides, in addition to the desired reuse of the biocatalyst, unexcelled advantages such as product separation and continuous operation. 1Moreover, immobilization may be used to improve other enzyme features, such as increasing the activity, 2 decreasing inhibitions, 3 modulating selectivity and specificity 4 or improving the enzyme behavior in synthetic processes. 5Particularly for lipase, 6 the immobilization state is required to attain high product yield so it is not surprise that its immobilization has been the object of intensive investigation. 6Several commercially immobilized preparations are available, such as Candida antarctica lipase B immobilized onto polyacrylate type matrix, available under the name of Novozym 435 ® (Novozymes) or Chirazyme L-2 ® (Roche).Some lipases are also available in the cross-linked forms, such as cross-linked enzyme crystals (CLECs) offered by Altus Biologics Inc.However, for some applications, the relatively high cost of those biocatalysts associated with the usage of an expensive carrier material, becomes unfeasible for further exploitation.Therefore, the development on less expensive immobilizing procedure is need to emerge into relative low margin applications such as interesterification of fats when compared to the relative high margin applications, for example enantioselective synthesis of pharmaceutical intermediates. 6he organic-inorganic sol-gels have received significant interest because the incorporation of organic polymers in the inorganic sol-gel can lead to new composite materials possessing the properties of each component that would be useful in particular applications. 7This method has been considered as outstanding to synthesize a significant number of materials with high degree of homogeneity and purity at a molecular level with extraordinary physical and chemical properties. 8The sol-gel reaction involves the hydrolysis of silica precursors and condensation of the resulting hydroxyl groups to form a nano-structure.][11][12] One simple method is mixing organic compounds with a metal alkoxide, such as tetraethoxysilane (TEOS).
It has been reported that the inorganic nanocomposite is formed in situ in a biopolymer solution by the selforganization of sol particles generated in the course of hydrolysis of TEOS and following polycondensation reactions into a porous three-dimensional network in the bulk solution.During the sol-gel process the inorganic mineral is deposited in the organic compound matrix forming hydrogen bonding between organic and inorganic phases.To increase the solubility, methanol or ethanol is usually added. 13Some organic/inorganic hybrids on the basis of different inorganic precursors and organic compounds, such as cellulose, 12 carrageenan 13 and polyvinylalcohol 14 have been reported.Incorporation of organic polymers, especially those with amino or amide groups, allows the formation of molecular hybrids often stabilized by strong hydrogen bonding. 15Moreover, chemical additives can be used to improve the process and the attainment of materials with better mechanical properties, porosity control and hydrophilic/hydrophobic balance. 16everal applications have been already developed for this kind of hybrid materials particularly in the biotechnological field.8][19] Hydrolysis and condensation reactions are basically responsible for polymerization of the inorganic precursors.
Using this methodology, polymers such as chitosan are able to form hybrids with silica. 15This macromolecule, derived chemically by deacetylation of chitin, the second most abundant biopolymer in nature close to cellulose, has widely been assumed to be a cheaper and versatile sorbent for transition metal ions and organic substances through the coordination and/or reaction sites composed of the amino (-NH 2 ) and hydroxy (-OH) groups anchoring on chitosan chains. 20As a matter of fact silica-chitosan possesses a highly porous microstructure and has been shown to be a superior support for immobilizing enzymes. 1 Moreover, it is inexpensive, non-toxic, hydrophilic, biocompatible and biodegradable. 21hus, this work assesses the performance of a silicachitosan hybrid matrix synthesized by sol-gel process, using TEOS as a precursor to immobilize lipase from Burkholderia cepacia.3] The properties of the support (SiO 2 -chitosan) and immobilized derivative were evaluated by X-ray diffraction, Fourier transform infrared spectroscopy (FTIR) and termogravimetry (TG).The influence of the temperature and pH on the activity of the biocatalysts was determined using a surface response methodology.Under the established conditions, the kinetic behavior of the immobilized lipase was also determined.The results were compared with those attained by the free lipase.
Materials
Burkholderia cepacia (Lipase PS) was purchased from Amano Enzyme Inc. (Nagoya, Japan) and used without further purification.Chitosan flaks (C3646-Sigma) with a degree of deacetylation of 85% and tetraethoxysilane (TEOS) were obtained from Sigma-Aldrich Chemicals Co. (Milwaukee, WI, USA).Epichlorohydrin, hydrochloric acid (minimum 36%) and polyethylene glycol (PEG, molecular weight-1500) were supplied by Reagen (Rio de Janeiro, RJ, Brazil).Commercial olive oil (low acidity) was purchased in a local market.All other chemicals were of analytical grade.
Support synthesis and activation
SiO 2 -chitosan was prepared by the hydrolysis and polycondensation of tetraethoxysilane according to the methodology reported by Paula et al. 16 with slight modifications as briefly described.An initial solution of chitosan (0.125 g) and ethanol (5 mL) in water (20 mL) was heated at 60 ºC under agitation, followed by addition of 1.0 mL of concentrated HCl.The solution was stirred for 12 h for total chitosan dissolution.After this period, 5 mL of TEOS were added under agitation for 40 min and the product was transferred to micro wells of tissue culture plates (disc shape) and kept at 25 ºC until complete gel solidification (formation of an interpenetrated network of SiO 2 -chitosan).Then, the material was ground in a ball mill and classified to attain particles with 0.308 mm of diameter (-40/+60 MESH Tyler standard sieves).Activation of the hybrid particles was carried out with epichlorohydrin at 2.5% (m/v) and pH 7.0 for 1 h at room temperature, followed by exhaustive washings with distilled water.
Lipase immobilization onto epoxy SiO 2 -chitosan particles
Activated epoxy SiO 2 -chitosan particles were soaked into hexane under stirring (100 rpm) for 1h at 25 °C.Then, excess hexane was removed and powder lipase preparation was added at a ratio of 1:4 g of enzyme per g of support.PEG-1500 as lipase stabilizer was added together with the enzyme at a fixed amount (5 mg g -1 of support).Lipase-support system was maintained in contact for 16 h at 4 °C under static conditions.The immobilized lipase derivatives were filtered (nylon membrane 62HD from Scheiz Seidengazefabrik AG, Thal Schweiz, Switzerland) and thoroughly rinsed with hexane.Hydrolytic activities of free and immobilized lipase derivatives were assayed by the olive oil emulsion method according to the modification proposed by Soares et al. 24 One unit (U) of enzyme activity was defined as the amount of enzyme that liberates 1 µmol of free fatty acid per min under the assay conditions.The results were expressed in activity units per g of solid (free enzyme preparation or immobilized derivative).
Determination of optimal enzymatic activities and kinetic parameters
Enzymatic activities for free and immobilized lipases as a function of temperature and pH were investigated according to a five-level-two-factor central composite rotatable design (CCRD) with three replications at the center point.The activities were assayed by the hydrolysis of olive oil emulsion at a fixed proportion oil/water 1:1. 24he pH values were achieved using appropriate phosphate buffer solutions (0.1 mol L -1 ).Results from the experimental design were analyzed using Statistica version 5 (StatSoft Inc., USA).The statistical significance of the regression coefficients was determined by Student's test; the second order model equation was determined by Fischer's test and the proportion of variance explained by the model obtained was given by the multiple coefficient of determination, R 2 .
The influence of substrate concentration on the hydrolytic activities was also analyzed in the olive oil hydrolysis assay varying the proportion of oil in the emulsion from 10 to 50%.Michaelis-Menten constant (K m ) as the concentration of substrate at which half of the maximum reaction rate (V max ) is reached was calculated with aided by computational program Enzfitter version 1.05 published by Elsevier-Biosoft, 1987.In all cases, enzyme activity was measured as the initial reaction rate (0-5% hydrolysis) to avoid the possible inhibition that might take place due to the appearance of reaction products.
Thermal stability
Soluble (1.0 mL) and immobilized derivative (0.1 g) were incubated in the presence of 1 mL buffer phosphate pH 6.5 (100 mmol L -1 ) at 60 °C for different time intervals.Samples were removed and assayed for residual activity as previously described, 24 taking an unheated control to be 100% active.The rate of denaturation (k d ) and half-life time (t 1/2 ) were calculated by equations 1 and 2, respectively. (1)
Characterization
Fourier transform infrared spectroscopy (FT-IR, Perkin-Elmer, model spectrum GX FT-IR system) was used for studying chemical bonds between hybrid support and lipase.Micrographs were obtained with a scanning electron microscopy (SEM, LEO1450VP, Schott Zeiss from Brazil).X-ray diffraction patterns were collected on a XRD 6000 X-ray difractometer Shimadzu (Shimadzu from Brazil), using CuK radiation source, with 2q angle varying from 15-80°.Thermal stability and weight loss profiles were determined using a thermo gravimetric analysis-TG apparatus (TGA-50 Shimadzu-thermogravimetric analyzer).All TGA measurements were accomplished in nitrogen atmosphere and examined over the range of 25 to 1000 ºC, with a heating rate of 10 ºC min −1 .The typical sample amount was 10.00 ± 0.5 mg.
Synthesis and characterization of SiO 2 -chitosan matrix
Figure 1 is a hypothetic representation of the synthetic route used for the preparation of hybrid polymer by sol-gel process, involving three steps: 25 (i) hydrolysis of the precursor (SiO 2 ) in acidic solution; (ii) polycondensation of the formed monomers to produce oligomers arranged as sol particles and (iii) formation of SiO 2 -chitosan hybrid matrix by cross-linking of the sol particles leading to a sol-gel transition.
The product of the polycondensation reaction is a typical inorganic glass type material having hydroxyl groups on the surface.The chitosan is adsorbed on silica particle via hydrogen bonds between sylanol groups in the silica network and amide and oxy-groups of chitosan, ionic bounds between chitosan amino groups and sylanol groups as well as covalent bonds in result of esterification of chitosan hydroxyl-groups of sylanol groups of silica net work. 25,26The SiO 2 -chitosan hybrid forms new molecular structures containing not only organic groups, but also inorganic nano-silica particles.These inorganic particles are dispersed in the chitosan matrix, forming network organic-inorganic hybrid.SiO 2 -chitosan is transparent, vitreous, brittle and homogeneous (Figure 2).
The infrared spectrum (Figure 3) shows bands at 1110 to 1000 cm -1 corresponding to Si-O-R asymmetric stretching and hydroxyl groups (3600-3200 cm -1 ). 27Other characteristic bands were observed at 950 cm -1 (Si-O-Si axial deformation), 810 cm -1 (Si-O-Si axial deformation) and 600 cm -1 (Si-O-Si angular deformation). 28e XRD displays peaks of low intensity and expressive width, indicating a predominantly amorphous structure of the hybrid composite (Figure 4), with one halo centered around 23° (2q).According to Ogawa et al. 29 pure chitosan could be arranged in noncrystalline, hydrated crystalline and anhydrous crystalline forms which is represented by peaks around 11.7°, 14.2° and 23°, respectively.The disappearance of these two cristalline peaks in hybrid composite indicates the prominence of chitosan amorphous structure, influencing directly the structure of hybrid composite.
SiO 2 -chitosan thermal behavior was evaluated by TGA as can be seen in Figure 5.The initial weight loss (26%) was observed around 155 °C and can be attributed to loss of adsorbed water on the surface of chitosan and side product of subsequent condensation of Si-OH groups. 27he second weight loss (29%) was observed at 275 ºC due to the decomposition of low molecular weight species.Thermal decomposition was marked in the region of 275 ºC up to 800 ºC (around 40%).In this range of temperature probably occurred the dehydration of the saccharide rings, depolymerization and decomposition of the units of the organic polymer as well as the inorganic part of the hybrid. 30,27The higher temperature values for the first stage of decomposition observed for support compared to the first stage decomposition temperature of the pure chitosan (60 ºC), 31 indicates that the formation of the hybrid of SiO 2 -chitosan was successful.
Morphological characteristics of the immobilized derivative
SiO 2 -chitosan was submitted to activation procedure involved the incorporation of aldehyde groups on the surface of the support by reacting with the hydroxyls of SiO 2 -chitosan, aiming to render adequate functional groups to the covalent binding with the enzyme.
Epichlorohydrin reacts directly with the hydroxyl groups of chitosan generating epoxide groups which are able to link to the enzyme.3][34] PEG1500 (not shown) was used for improving activity of the biocatalyst as previous described. 35he epoxy SiO 2 -chitosan support was used to immobilize Burkholderia cepacia (lipase PS) by covalent binding.Typically 0.25 g of powder lipase per g of dry support was found to be sufficient to attain satisfactory retention of enzyme on the support (30 ± 0.5%) and immobilized derivative having high activity (1620 ± 58 units g -1 support).
The efficiency of the methodology in relation to the lipase incorporation on the support was also assessed by infrared spectroscopy.This technique is used to detect the presence of covalent bonds in molecules, based on the principle that each type of covalent bond has a characteristic absorption wavelength represent as an upward peak on a charted spectrum. 28Figure 7 shows spectra for both free and immobilized lipase.
Free enzyme displayed a typical spectrum of proteins with bands associated to their characteristic amide group (CONH).In this case, in the range from 1700 to 1600 cm -1 , there is a amide I band due to the double bond CO stretching, the CN stretching and NH bending. 28or the immobilized derivative, a slight additional bands in the spectrum at 1600 cm -1 was observed, indicating that the covalent bond between the enzyme and support occurred (NH bending).Bands corresponding to carboxyl deformation (2800 and 1500 cm -1 ) disappeared, which demonstrated that the characteristic functional groups of the lipase were covalently bond on the groups inserted in the support. 28iffraction of the X-ray information data on the activated support (epoxy SiO 2 -chitosan) and immobilized derivative is shown in Figure 8.The diffractogram obtained from the epoxy SiO 2 -chitosan shows peaks of low intensity and expressive width, indicating a predominantly amorphous structure of the hybrid composite. 14The immobilization procedure resulted on support structure modifications which clearly indicated insertion of crystalline regions into the SiO 2 -chitosan that can be associated with the presence of the enzyme.As can be seen in the inserted box in Figure 8, free lipase exhibits XRD with peaks intense and narrow which is typical for high crystalline structure of protein.The insertion of this crystalline structure of the lipase on the immobilized derivative confirmed that the immobilization procedure was successfully achieved.
Biochemical and kinetics properties of the immobilized derivative
The influence of the variables pH and temperature on the hydrolytic activities for free and immobilized lipase was assessed by two level factor central composite rotatable design (CCRD) with three replications at the center point taking the hydrolytic activities as response variable.The experimental matrix and the results are shown in Table 1.
For the free lipase, activity values were 9361 to 35181 U g -1 and immobilized lipase values varied from 1177 to 1997 U g -1 .The highest values were attained at pH 7.5 and temperature 50 °C (assay 10) for the free lipase and pH 6.1 and temperature 50 °C (assay 6) for the immobilized lipase.
The individual effects and interaction of factors (pH and temperature) in hydrolytic activity for free and immobilized lipases were estimated with aid of Statistica 5.0.Table 2 displays the estimative of the effects of the variables, standard error and p-values.
Based on a 95% confidence level, models and each term were tested to be significant using Fisher's statistical test for analysis of variance (ANOVA).Predict values for models indicated that the regression was significant at 95% confidence level without lack of fit (p > 0.10).Results are described in Table 3 (free lipase) and Table 4 (immobilized derivative).
The models were checked by the determination coefficient and values (R 2 > 0.9055) indicated that the sample variation of 90.55% for hydrolytic activities is attributed to the independent variables and less than 10% of the total variations are not explained by the model.Thus, the equations were considered adequate to describe hydrolytic activity as a function of study variables and were used to plot the response surfaces as shown in Figures 9a and 9b.
The response surface for free lipase (Figure 9a) showed a top corresponding to a temperature of 50 °C and pH 7.5.For the immobilized lipase, saddle-shaped response surface was obtained (Figure 9b), within a maximum point corresponding to a temperature of 50 °C and pH 6.1.
The shift of optimal pH for immobilized lipase has been reported for different lipase sources and support types. 33,36enerally, immobilized lipases on polycationic supports shift its optimum pH to the acidic range.Huang et al. 37 reported that pH optimum for immobilized lipase from C. rugosa onto chitosan nanofibrous membrane shifted slightly from 7.7 to 7.5 when compared with the free one.In this study, a similar behavior was observed.
Under these optimal conditions, the kinetic parameters were determined for each lipase preparation.Figure 10 displays the curve profile for hydrolytic activities as a function of substrate concentration expressed in fatty acids molarity varying from 372 to 1860 mmol L -1 .Results showed that both lipase preparations obeyed the Michaelis-Menten equation, indicating that in the studied range, no inhibition by the reaction product was detected.Total SS 787081 11 *significant at 95% confidence level; X 1 and X 2 represent the variables pH and temperature, respectively.The apparent value of K m for the immobilized lipase (819 mmol L -1 ) was almost three times higher than that for the native enzyme (334 mmol L -1 ) and the maximum reaction rate (V max ) were 16268 U g -1 µmol g -1 min -1 and 1921 µmol g -1 min -1 for the free and immobilized lipase samples, respectively.
The kinetic values (K m and V max ) indicate a change in affinity of immobilized enzyme to substrate.Changes in the kinetic parameters are found to be dependent on the enzyme source, kind of support, immobilization method and enzyme-support interactions. 38In this work, K m value for the immobilized lipase is higher than that of free lipase, which means the loss of affinity of lipase from substrate.A similar result involving increase in the K m value of lipase after immobilization has been reported in the literature. 39This increment after immobilization might either be due to structural changes in the enzyme induced by the applied immobilization procedure, or due to the use of epichlorohydrin as the activating agent.Moreover, the interactions of enzyme-support and diffusion resistance of carrier can be promoted by the lower accessibility of the substrate to the active sites of the immobilized enzyme.
Stability tests
Experiments were performed to determine the thermal deactivation constants (k d ) and the half-life time for free and immobilized lipase under thermal effect at 60 ºC and results are displayed in Figure 11.Patterns of heat stability indicated that the immobilization process tends to stabilize the enzyme.In this case, half-life time (t 1/2 ) of the enzyme is shown to be inversely proportional to the rate of denaturation (k d ).The half-life time (t 1/2 ) is the time which takes for the activity to reduce to a half of the original activity.
Under these conditions, the immobilized lipase exhibited higher stability against heat than the soluble form.While the free enzyme was practically inactivated after 1 h incubation at 60 °C, the immobilized lipase preserved about 40% of its original activity.Inactivation constants (k d ) and half-life (t 1/2 ) for both free and immobilized lipases were calculated, respectively as k d = 9.6 h -1 (t ½ = 0.07 h) and k d = 0.33 h -1 (t ½ = 2.1 h) indicating a raise of thermal stability near 30 fold when compared to free enzyme.Thermal stability upon immobilization is the result of molecular rigidity and the creation of a protected microenvironment.
These results were also confirmed by running TG analysis which also allows determining the temperature range at which a heated sample undergoes a major conformational change by means of monitoring the thermal weight change profile as well as parameters for thermal stability of support and free and immobilized lipase.The results of such analysis are shown in Table 5.
The free lipase shows two peaks in mass change, the first at 127 ºC (loss of water) and the second at 382 ºC attributed to the decomposition of stabilizing agents present in the lipase preparation. 40On the other hand, the immobilized lipase shows an increase in the temperature of the first stage of decomposition (174 ºC), attributed to the covalent bond of the biocatalyst on the hybrid matrix.The discrete increase in the temperature of the second stage (397 ºC) took place by the additional cross linking degree after the activation step.These results indicate that upon immobilization, the thermal profile for the lipase derivative shifted towards higher temperatures because of a strong interaction between enzyme and hybrid matrix, which enhanced the conformational stability of the native form.
To better illustrate the catalytic properties observed for the Burkholderia cepacia immobilized onto epoxy-SiO 2 -chitosan, the biochemical and kinetic properties of free and immobilized lipase are summarized in Table 6.
Conclusions
Epoxy SiO 2 -chitosan proved to be an attractive and efficient matrix to immobilize lipase due to its physical and chemical properties.The use of techniques such as X-ray diffraction and FTIR confirmed incorporation of the enzyme onto the support matrix.Experimental design data showed the significant influence of pH and temperature in hydrolytic activity of free and immobilized lipase.Values of K m found for both lipase preparations indicated that immobilization process reduced the affinity of enzymesubstrate; however the k d value indicated an increase of thermal stability of lipase.These results showed the potential of the epoxy SiO 2 -chitosan matrix to immobilize lipases, particularly lipase PS.Moreover easiness of support synthesis and immobilization procedure, justify the importance of the evaluation of morphological and mechanical properties to obtain suitable carriers.
Figure 6 .
Figure 6.Proposed mechanism for SiO 2 -chitosan activation with epichlorohydrin and immobilization step.
Figure 7 .
Figure 7. FTIR spectra for free and immobilized lipase.
Figure 9 .
Figure 9. Surface responses for the hydrolytic activity of the Burkholderia cepacia lipase in the free form (a) and immobilized on epoxy SiO 2 -chitosan (b) as a function of the variables pH and temperature according to the fitted mathematical models.
Figure 10 .
Figure 10.Hydrolytic activities for free and immobilized lipase on epoxy SiO 2 -chitosan as a function of the substrate concentration (expressed in total fatty acids content in olive oil/water emulsions).
Table 1 .
Experimental design and results according to the central composite "2²+star"
Table 2 .
Estimated effects, standard errors and Student's t test for the hydrolytic activity of free and immobilized lipase on epoxy SiO 2 -chitosan using the 2 2 central composite "2²+star" *significant at 95% confidence level; X 1 and X 2 represent the variables pH and temperature, respectively.
Table 3 .
Analysis of variance (ANOVA) for the model that represents the hydrolytic activity of the free lipase PS as a function of pH (X 1 ) and temperature (X 2 )
Table 4 .
Analysis of variance (ANOVA) for the model that represents the hydrolytic activity of the immobilized lipase as a function of pH (X 1 ) and temperature (X 2 )
Table 5 .
Thermogravimety data for support and immobilized derivative
Table 6 .
Biochemical and kinetic properties for the free and immobilized lipase PS on SiO 2 -chitosan | 5,696.6 | 2011-08-01T00:00:00.000 | [
"Engineering"
] |
Dynamic Analysis of the Coupling Coordination Relationship between Urbanization and Water Resource Security and Its Obstacle Factor
Water resource security is an important condition for socio-economic development. Recently, the process of urbanization brings increasing pressures on water resources. Thus, a good understanding of harmonious development of urbanization and water resource security (WRS) systems is necessary. This paper examined the coordination state between urbanization and WRS and its obstacle factors in Beijing city, utilizing the improved coupling coordination degree (ICCD) model, obstacle degree model, and indicator data from 2008 to 2017. Results indicated that: (1) The coupling coordination degree between WRS and urbanization displayed an overall upward tendency during the 2008–2017 period; the coupling coordination state has changed from an imbalanced state into a good coordination state, experiencing from a high-speed development stage (2008–2010), through a steady growth stage (2010–2014), towards a low-speed growth (2014–2017). (2) In urbanization system, both the social and spatial urbanizations have the greatest obstruction to the development of urbanization-WRS system. The subsystems of pressure and state are the domain obstacle subsystems in WRS system. These results can provide important support for urban planning and water resource protection in the future, and hold great significance for urban sustainable development.
Introduction
Water is not only the natural resources to maintain the human existence, but also the material basis to guarantee the socio-economic development. Water resources directly or indirectly provide resources guarantee for social development [1]. As the important strategic socio-economic resource for the development of urbanization, the rational and efficient utilization of water resources will affect the implementation of sustainable urban development strategies [2,3]. Urbanization system includes complex relationships of dependence between socio-economic activities and resources. Water resource security and urban development systems are intricately linked [4]. Due to the rapid development of urbanization, urban population has increased dramatically, and the scale of urbanization has been expanding, resulting in increasing pressures on urban water resources [5]. Recently, water scarcity caused by urbanization has brought a series of socio-economic issues. According to statistics, about 2.6 billion people in the world lack access to safe water resource [6]. Clarifying the relationship between To fill this gap, we attempt to introduce the improved coupling coordination degree (ICCD) model and an obstacle degree model to analyze the coordination relationship between WRS and urbanization and its obstacle factors in Beijing city. We firstly establish an indicator system including the urbanization system and the WRS system. Then, we use the ICCD model to examine the coordination state between urbanization-WRS in Beijing city from 2008 to 2017. Finally, we introduce the obstacle degree model to determine the dominant obstacle indicators. This study will provide new insights into the water resources management and new-urban development in the future.
Study Area
Beijing city, the capital of China, is located in the north of China. It covers an area of 1.64 million square kilometers, and is divided into 16 major districts (see Figure 1). Beijing lies between 39 • 26 -41 • 03 N and 115 • 25 -117 • 30 E, in a warm, temperate, with moderate warmth and four seasons. Beijing, as one of the fastest growing urbanization areas in China, has a high socio-economic level; however, the gap between urbanization and water resource is very large. As the center of politics, economy, and culture in China, in 2016 the urbanization rate of Beijing reached about 86.5%; the total amount of water resources availability was 3.506 billion cubic meters, while the water resource amount per capita was only 161 billion cubic meters, i.e., about 1/15 of the national average [27,28]. Water resource security in Beijing is a quite prominent problem, especially water supply and demand, and seriously influences the ecological health and the socio-economic development.
3
To fill this gap, we attempt to introduce the improved coupling coordination degree (ICCD) model and an obstacle degree model to analyze the coordination relationship between WRS and urbanization and its obstacle factors in Beijing city. We firstly establish an indicator system including the urbanization system and the WRS system. Then, we use the ICCD model to examine the coordination state between urbanization-WRS in Beijing city from 2008 to 2017. Finally, we introduce the obstacle degree model to determine the dominant obstacle indicators. This study will provide new insights into the water resources management and new-urban development in the future.
Study Area
Beijing city, the capital of China, is located in the north of China. It covers an area of 1.64 million square kilometers, and is divided into 16 major districts (see Figure 1). Beijing lies between 39°26′-41°03′ N and 115°25′-117°30′ E, in a warm, temperate, with moderate warmth and four seasons. Beijing, as one of the fastest growing urbanization areas in China, has a high socio-economic level; however, the gap between urbanization and water resource is very large. As the center of politics, economy, and culture in China, in 2016 the urbanization rate of Beijing reached about 86.5%; the total amount of water resources availability was 3.506 billion cubic meters, while the water resource amount per capita was only 161 billion cubic meters, i.e., about 1/15 of the national average [27,28]. Water resource security in Beijing is a quite prominent problem, especially water supply and demand, and seriously influences the ecological health and the socio-economic development.
Research Framework
This paper tries to explore the coordination relationship between WRS and urbanization by using a systematic framework (see Figure 2). For this purpose, (1) we establish a comprehensive indicator system, including the urbanization and WRS systems; (2) the data of forward and reverse indicators are preprocessed into dimensionless values; (3) the ICCD model and obstacle degree model are established; and (4) the coupling coordination state between the urbanization-WRS system is analyzed.
This paper tries to explore the coordination relationship between WRS and urbanization by using a systematic framework (see Figure 2). For this purpose, (1) we establish a comprehensive indicator system, including the urbanization and WRS systems; (2) the data of forward and reverse indicators are preprocessed into dimensionless values; (3) the ICCD model and obstacle degree model are established; and (4) the coupling coordination state between the urbanization-WRS system is analyzed.
Construction of Indicator System
Urbanization is a way to describe the process of continuous concentration of population into urban areas. This is a process of continuous development and change, in which the type of urbanization changes from "traditional" to "new-type", the social economy is constantly developing, population quality is constantly improving, and urban development moves toward modernization [29]. Urbanization has an important impact on several aspects of society, population, and economic life [30]. Recently, numerous scholars have carried out research on China's urbanization development from four aspects of demographic urbanization, economic urbanization, social urbanization, and spatial urbanization [31,32]. In this study, we followed this classification to establish the primary indicators for the urbanization system. In terms of the specific secondary indicators, we selected urban population density, non-agricultural population rate, and population growth rate to represent demographic urbanization. These indicators are commonly used as secondary indicators for demographic urbanization [33]. Economic urbanization is the geographical concentration of urban economic activities in the process of urbanization. Thus, the indicators of per capita financial revenue, per capital GDP, and per capita investment in fixed assets can be employed to evaluate economic urbanization. The extent of social urbanization is quite wide; therefore, we chose three representative indicators (i.e., per capita education funds, number of doctors per 10 4 people, and number of public transports per million people) to represent it, based on existing research [34]. Spatial urbanization mainly shows the changes in land use and fixed assets investment. Dwelling area per capita, urban road area per capita, and fixed asset investment growth rate were selected as
Construction of Indicator System
Urbanization is a way to describe the process of continuous concentration of population into urban areas. This is a process of continuous development and change, in which the type of urbanization changes from "traditional" to "new-type", the social economy is constantly developing, population quality is constantly improving, and urban development moves toward modernization [29]. Urbanization has an important impact on several aspects of society, population, and economic life [30]. Recently, numerous scholars have carried out research on China's urbanization development from four aspects of demographic urbanization, economic urbanization, social urbanization, and spatial urbanization [31,32]. In this study, we followed this classification to establish the primary indicators for the urbanization system. In terms of the specific secondary indicators, we selected urban population density, non-agricultural population rate, and population growth rate to represent demographic urbanization. These indicators are commonly used as secondary indicators for demographic urbanization [33]. Economic urbanization is the geographical concentration of urban economic activities in the process of urbanization. Thus, the indicators of per capita financial revenue, per capital GDP, and per capita investment in fixed assets can be employed to evaluate economic urbanization. The extent of social urbanization is quite wide; therefore, we chose three representative indicators (i.e., per capita education funds, number of doctors per 10 4 people, and number of public transports per million people) to represent it, based on existing research [34]. Spatial urbanization mainly shows the changes in land use and fixed assets investment. Dwelling area per capita, urban road area per capita, and fixed asset investment growth rate were selected as indictors of spatial urbanization. Thus, a total of 12 representative indicators were determined for urbanization.
The construction of the WRS indicator system followed the Pressure-State-Effect-Response (PSER) framework, which is widely used in the field of environment and sustainable development [35,36]. The PSER framework includes the pressure subsystem, the state subsystem, the effect subsystem, and the response subsystem, as well as their interactions and constraints. The PSER framework can clearly show the causal relationship between these four WRS subsystems. In terms of secondary indicators, we determined the indicator system of WRS through 4 primary indicators and 14 secondary representative indicators based on the status quo of the water resources and the water environment in Beijing [20,37].
In general, the urbanization system and WRS system couple with each other and have complex interactions. We selected the 12 representative urbanization indicators and 14 WRS indicators to reflect the relationship between the two systems. The structure model of urbanization-WRS system in Beijing is shown in Figure 3 [38]. The indicator systems for urbanization and WRS are shown in Table 1.
5
indictors of spatial urbanization. Thus, a total of 12 representative indicators were determined for urbanization.
The construction of the WRS indicator system followed the Pressure-State-Effect-Response (PSER) framework, which is widely used in the field of environment and sustainable development [35,36]. The PSER framework includes the pressure subsystem, the state subsystem, the effect subsystem, and the response subsystem, as well as their interactions and constraints. The PSER framework can clearly show the causal relationship between these four WRS subsystems. In terms of secondary indicators, we determined the indicator system of WRS through 4 primary indicators and 14 secondary representative indicators based on the status quo of the water resources and the water environment in Beijing [20,37].
In general, the urbanization system and WRS system couple with each other and have complex interactions. We selected the 12 representative urbanization indicators and 14 WRS indicators to reflect the relationship between the two systems. The structure model of urbanization-WRS system in Beijing is shown in Figure 3 [38]. The indicator systems for urbanization and WRS are shown in Table 1.
The Variation Coefficient Method
(1) Pre-processing of the indicators. The WRS indicator system includes both forward and reverse indicators. For the forward indicators (i.e., the state subsystem indicators D5-D6), a larger indicator value means a better performance level. For the reverse indicators (i.e., the pressure subsystem indicators D1-D4), a bigger indicator value means a worse performance level. Thus, it is necessary to convert the indicators to dimensionless values to avoid the effect of the characteristics and the scope of the selected indicator. The following normalization formulas were used: Reverse indicator : where x ij is the original index value and f ij is the standardized value of x ij .
(2) Determination of indicators' weight. The urbanization and WRS systems cover 12 and 14 indicators, respectively. A way to distinguish the importance of these indicators in each system is to assign them different weights. The variation coefficient method is based on the information contained in the index data to evaluate the importance of difference indexes, which is a method of objectively calculating weights [39,40]. The variation coefficient method believes that when the variation degree of the indicator is greater, the ability of the indicator to distinguish different evaluated objects should then be stronger, and the weight should be larger [41]. The variation coefficient method has been widely introduced to environmental quality assessments [42]. The variation coefficient weight was calculated with the following steps: where x i refers to the average value of x i during the period studied; σ i represents the standard deviation of x i ; d i is the variation coefficient of x i , it represents the variation degree of x i ; and w i is the weight of the indicator x i .
The ICCD Model
The system composed of both WRS and urbanization can be seen as a coupling system, in which WRS and urbanization influence and restrict reciprocally [43]. The degree of interaction among systems is measured by coupling coordination degree [44]. The contribution coefficients describe the influence degree of each single system to the whole coupling system. The ICCD model is shown below [45]: (1) Determination of the performance levels.
The performance levels of the two systems were calculated in the following way: where U j is the urbanization system's performance level; W j refers to the WRS system's performance level; f i u and w i u are the standardized value and the weight of the indicator x i in urbanization system, respectively; and f i w and w i w are the standardized value and the weight of indicator x i in WRS system, respectively.
(2) Determination of the improved contribution coefficients. In the ICCD model, it needs to determine the contribution coefficients of urbanization system and WRS system. The traditional CCD model usually utilizes subjective assignment methods to determine the contribution coefficients. The two contribution coefficients are arbitrarily assigned the value of 0.5 by some scholars. The way determines the contribution coefficients by assigning the fixed value (0.5) in a subjective way, which highlights the subjective judgment of the decisionmakers [46,47]. Thus, we introduced an ICCD model to determine the contribution coefficients [48] in the following way: where a * is improved contribution coefficient of urbanization system; b * is the improved contribution coefficient of WRS system.
(3) Calculation of the coupling coordination degree D.
The D was determined in the following way: where C is the degree of coupling; k is the adjustment coefficient, it is usually assigned the value of 2. T is the overall performance level; and D refers to the coupling coordination degree between these two systems.
Obstacle Degree Model
To effectively improve the coordination status of urbanization and WRS, it is necessary to explore the main obstacle factors that have a negative impact on the harmonious relationship between urbanization-WRS system. The main step of obstacle degree model is proposed as follows [49]: where Q i represents the obstacle degree, it means the influence degree of each subsystem or indicator on the two systems. 1 − f ij refers to the deviation degree of the indicator, it represents the difference between the actual indicator value and the optimal target value.
Classification Standard of Coupling Coordination Degree
Previous studies have shown that the states of coupling coordination between urbanization and WRS are divided into five grades, namely serious imbalance, imbalance, basic coordination, coordination, and good coordination [21,32]. Accordingly, this study adopted this classification. The D was used to describe the coordination status between two interacting systems, and the five grades were defined as follows: (1) Serious imbalance state: when D * assumes a value within the range 0 ≤ D * ≤ 0.25. In this case, the nexus between urbanization and WRS is very poor. (2) Imbalance state: when D * assumes a value within the range 0.25 < D * ≤ 0.45. In this case, the interaction between urbanization and WRS is weak. (3) Basic coordination state: when D * assumes a value within the range 0.45 < D * ≤ 0.65. In this case, the link between urbanization and WRS begins to reinforce. (4) Coordination state: when D * assumes a value within the range 0.65 < D * ≤ 0.75. In this case, the relationship between urbanization and WRS is coordinated. (5) Good coordination state: when D * assumes a value within the range 0.75 < D * ≤ 1. In this case, the coordination between urbanization and WRS is very good.
Data Source
The urbanization system data during the period 2008-2017 were collected from the Beijing Statistical Yearbook [27] and the China Urban Statistical Yearbook [50]. The annual data on the WRS system were collected from the Beijing Water Resource Bulletin [28] and the Beijing Statistic Yearbook on Environment [51].
Performance Level of Two Systems
The overall performance level of urbanization displayed an upward trend during the period 2008-2017, as shown in Figure 4. The overall performance level of WRS followed an S-shaped growth curve, which initially decreased, and subsequently clearly increased from 2008 to 2012. Thereafter, the performance value in the WRS system largely increased from 2014 to 2017. The fluctuation of the performance level in the WRS system was mainly due to the performance level of both the state and the effect subsystems, which declined from 2008 to 2010, but raised continuously after 2010. From 2012 onwards, the two subsystems showed a downward trend of volatility. It is worth noting that the changing trend of the state subsystem was similar to that of the overall performance level. The fluctuation of both the state and the effect subsystems caused the Sshaped growth curve of the overall trend. The performance level of the pressure and response subsystems gradually increased with a significant growth trend from 2014 to 2017. Figure 5 shows the trends of the performance level in the WRS system from 2008 to 2017. The overall performance level of WRS followed an S-shaped growth curve, which initially decreased, and subsequently clearly increased from 2008 to 2012. Thereafter, the performance value in the WRS system largely increased from 2014 to 2017. The fluctuation of the performance level in the WRS system was mainly due to the performance level of both the state and the effect subsystems, which declined from 2008 to 2010, but raised continuously after 2010. From 2012 onwards, the two subsystems showed a downward trend of volatility. It is worth noting that the changing trend of the state subsystem was similar to that of the overall performance level. The fluctuation of both the state and the effect subsystems caused the S-shaped growth curve of the overall trend. The performance level of the pressure and response subsystems gradually increased with a significant growth trend from 2014 to 2017. The overall performance level of WRS followed an S-shaped growth curve, which initially decreased, and subsequently clearly increased from 2008 to 2012. Thereafter, the performance value in the WRS system largely increased from 2014 to 2017. The fluctuation of the performance level in the WRS system was mainly due to the performance level of both the state and the effect subsystems, which declined from 2008 to 2010, but raised continuously after 2010. From 2012 onwards, the two subsystems showed a downward trend of volatility. It is worth noting that the changing trend of the state subsystem was similar to that of the overall performance level. The fluctuation of both the state and the effect subsystems caused the Sshaped growth curve of the overall trend. The performance level of the pressure and response subsystems gradually increased with a significant growth trend from 2014 to 2017. Figure 6 reveals the results of the coupling coordination degree D between urbanization and WRS in Beijing from 2008 to 2017. As shown in Figure 6, the D between urbanization and WRS followed an overall rising tendency during the period studied, from 0.354 in 2008 to 0.796 in 2017. The results of D mean that the coupling coordination state between the urbanization-WRS system has improved. The state of the coupling coordination between the two systems shows a dynamic evolution, passing from an imbalance state to a good coordination state during the 2008-2017 period. Specifically, by comparing the standard D * with the actual D, we find that the two systems in Beijing city was in an imbalance state in 2008. During the 2009-2012 period, the coordination between the two system in Beijing maintained in a basic coordination state. In the following four years, D values between urbanization-WRS in Beijing were basically in a good coordination state.
Coupling Coordination State
10 Figure 6 reveals the results of the coupling coordination degree D between urbanization and WRS in Beijing from 2008 to 2017. As shown in Figure 6, the D between urbanization and WRS followed an overall rising tendency during the period studied, from 0.354 in 2008 to 0.796 in 2017. The results of D mean that the coupling coordination state between the urbanization-WRS system has improved. The state of the coupling coordination between the two systems shows a dynamic evolution, passing from an imbalance state to a good coordination state during the 2008-2017 period. Specifically, by comparing the standard D * with the actual D, we find that the two systems in Beijing city was in an imbalance state in 2008. During the 2009-2012 period, the coordination between the two system in Beijing maintained in a basic coordination state. In the following four years, D values between urbanization-WRS in Beijing were basically in a good coordination state.
Dominant Obstacle Factors
Using the obstacle degree model, the obstacle degrees of the subsystem in the urbanization and WRS systems were obtained; they are shown in Table 2. As shown in Table 2, in the urbanization system, the subsystem with the greatest obstacle degree was social urbanization during 2008 to 2010; the subsystem of spatial urbanization had the largest obstacle degree from 2011 to 2017. In comparison, the demographic urbanization subsystem had the lowest obstacle degree during the 2008-2017 period. This means that in Beijing city, both the social and spatial urbanizations were the domain obstacle subsystems, while the demographic urbanization was the least important subsystem. In the WRS system, the pressure subsystem had the greatest obstacle degree in the first six years during the surveyed period and the state subsystem had the biggest obstacle in terms of degree in the remaining four years, whereas the effect subsystem had the smallest during the period 2008-2017. These findings suggest that, both the pressure and state subsystems were considered to have the greatest hindrance to the development of urbanization-WRS system. On the contrary, the effect subsystem had the smallest impact on the coordination state between urbanization-WRS system in Beijing city. Table 2. Obstacle degree of subsystem in the two systems.
Dominant Obstacle Factors
Using the obstacle degree model, the obstacle degrees of the subsystem in the urbanization and WRS systems were obtained; they are shown in Table 2. As shown in Table 2, in the urbanization system, the subsystem with the greatest obstacle degree was social urbanization during 2008 to 2010; the subsystem of spatial urbanization had the largest obstacle degree from 2011 to 2017. In comparison, the demographic urbanization subsystem had the lowest obstacle degree during the 2008-2017 period. This means that in Beijing city, both the social and spatial urbanizations were the domain obstacle subsystems, while the demographic urbanization was the least important subsystem. In the WRS system, the pressure subsystem had the greatest obstacle degree in the first six years during the surveyed period and the state subsystem had the biggest obstacle in terms of degree in the remaining four years, whereas the effect subsystem had the smallest during the period 2008-2017. These findings suggest that, both the pressure and state subsystems were considered to have the greatest hindrance to the development of urbanization-WRS system. On the contrary, the effect subsystem had the smallest impact on the coordination state between urbanization-WRS system in Beijing city. In urbanization system, the top five indicators that hinder the coordination development of urbanization-WRS system are shown in Table 3. From 2008 to 2010, the indicators of C12 fixed asset investment growth rate, C9 number of public transports per million people, and C10 dwelling area per capita were ranked in the top three in the urbanization system. After 2010, the indicator of C11 urban road area per capita replaced the C10 dwelling area per capita became one of the top three indicators. In terms of the WRS system, the indicators of D1 water consumption for industrial production, D2 water consumption of agricultural irrigation and D14 urban water supply rate ranked as the top three indicators from 2008 to 2017. This means that the three indicators had a profound effect on the coordination state between urbanization and WRS. C9 C10 C8 C4 D1 D14 D2 D8 D11 2009 C9 C12 C10 C8 C4 D1 D14 D2 D8 D5 2010 C9 C10 C12 C8 C4 D1 D14 D2 D8 D5 2011 C11 C9 C12 C8 C7 D1 D14 D2 D12 D9 2012 C11 C12 C9 C10 C4 D1 D14 D9 D2 D12 2013 C11 C12 C9 C4 C8 D1 D14 D2 D6 D5 2014 C12 C11 C4 C8 C10 D1 D2 D14 D7 D5 2015 C12 C11 C9 C3 C3 D14 D2 D1 D10 D5 2016 C12 C11 C9 C3 C7 D6 D14 D1 D10 D5 2017 C12 C11 C3 C2 C1 D14 D1 D2 D10 D4 4. Discussion
Analysis of Performance Level
The results of the urbanization system's performance level suggest that in the past decade, the performance level of urbanization system in Beijing city has improved. During the four subsystems in urbanization, social urbanization is considered to make the greatest contribution for the whole performance level. In the past 10 years, Beijing city was undergoing a period of urbanization and industrialization, relying on its location advantage and policy support, thus achieving a fast increase in the social urbanization level. In relation to WRS, the overall performance levels showed a fluctuating upward trend. The performance levels of the pressure and response subsystems were constantly optimized. This is mainly due to the fact that in 2016, Beijing was listed as the second demonstration "Sponge City". The concept of "Sponge City" is new in urban water management; it aims to effectively divert rain and sewage, store rainwater, and purify sewage. In Beijing city, the response measures adopted in the context of the "Sponge City" released the pressures on water resources, thereby contributing to the improvement of the overall performance levels in the WRS system.
Analysis of Coupling Coordination State
We examined the coordination state between the WRS and urbanization systems by analyzing the performance gap between the urbanization and the WRS systems. The trend comparison of urbanization performance level and WRS performance level is shown in Figure 7. We discussed this dynamic evolution from the following three stages (2008-2010, 2010-2014, and 2014-2017).
Analysis of Obstacle Factors
In the urbanization system, both social and spatial urbanizations had the greatest influence on coordination development of urbanization-WRS in Beijing city, while demographic urbanization had the smallest impact. Thus, social and spatial urbanizations subsystems need to be considered by policy-makers wishing to improve the coordination development of the two systems. In parallel, the pressure and state subsystems played a dominant role in the WRS system, while the effect subsystem had a minimal effect. The research results are consistent with the current situation of WRS in Beijing. They suggest that, the development of socio-economy has brought tremendous pressure on water resources in Beijing over the past ten years. The state of WRS hinder the development of two systems. Given these circumstances, it is necessary to take more reasonable water resource conservation measures to release the stress on water resources.
In relation to the secondary indicators, the statistical results of obstacle factors show that fixed asset investment growth rate, number of public transports per million people, dwelling area per capita and urban road area per capita were the key indicators restricting the healthy development of urban and water resources. These findings show that, during the 2008-2017 period, the urbanization population has increased rapidly, and Beijing's infrastructure is insufficient to meet the needs of the increased urban population. Water consumption for industrial production, water consumption of agricultural irrigation and urban water supply rate were the three key indicators that had the highest influence in the health development of two systems. Beijing is located in the north of China, in an area with a dry climate; this has led in the past to serious shortages of water resources. Since the reform and opening-up of the economy, Beijing is experiencing a process of industrialization and urbanization, and the industrial and agricultural development has led to an increase in industrial and agricultural water use which pose a significant risk to water resources security. (1) 2008-2010: The D between urbanization and WRS was low, and grew rapidly. The nexus between urbanization and WRS was initially weak, and gradually began to strengthen. In 2008, the performance level of urbanization was poor, while the performance level of WRS was considerably higher. As a result, an unbalanced state occurs between the two systems in Beijing. Subsequently, as the gap between urbanization performance level and WRS performance level decreases gradually, while the D among urbanization and WRS grew more rapidly, increasing from 0.354 in 2008 to 0.636 in 2009. Therefore, the coupling coordination state was optimized. During the period from 2009 to 2010, the impact of urbanization on WRS strengthened, and the performance level of WRS was at its lowest point. As a result, the D showed a slight decline. Fortunately, this decline did not cause changes in the coupling coordination state. Both in 2009 and 2010, the urbanization-WRS system was in a basic coordination state.
Conclusions
(2) 2010-2014: The D between urbanization and WRS raised steadily. During this period, urbanization in Beijing entered a stage of rapid development, and the WRS state improved due to the implementation of the South-to-North Water Diversion Project in 2012. The levels of urbanization and WRS increased simultaneously. The gap in the performance levels between the two systems was reduced, signaling their simultaneous development. In this period, the D between the two systems increased from 0.599 in 2010 to 0.765 in 2014. Therefore, the coupling coordination state improved greatly, displaying a cross-level change. The coupling coordination state moved from a basic coordination state in 2010-2011, to a coordination state in 2012-2013, and eventually to a good coordination state in 2014. This finding demonstrates that these processes were gradually coordinated during this period.
(3) 2014-2017: The D between urbanization and WRS slowly increased. After the implementation of the "new-type urbanization" plan in 2014, Beijing has experienced a rapid urbanization process.
The "new-type urbanization" is a new concept of urban development, based on the tenets of resource conservation, eco-friendliness, and sustainable development. During this period, a series of new-type urbanization measures were taken, such as the construction of the "Sponge City", the protection of the eco-environment, and the renovation of the underground pipeline network, which contributed to the improved quality urbanization and water environment. Not only the urbanization level, but also water resource quality was constantly enhanced. The urbanization and WRS systems were in a good coordination state during the period 2016-2017. Urbanization went in a coordinated development phase with water environmental protection. The coupling coordination state among WRS and urbanization improved slowly.
Analysis of Obstacle Factors
In the urbanization system, both social and spatial urbanizations had the greatest influence on coordination development of urbanization-WRS in Beijing city, while demographic urbanization had the smallest impact. Thus, social and spatial urbanizations subsystems need to be considered by policy-makers wishing to improve the coordination development of the two systems. In parallel, the pressure and state subsystems played a dominant role in the WRS system, while the effect subsystem had a minimal effect. The research results are consistent with the current situation of WRS in Beijing. They suggest that, the development of socio-economy has brought tremendous pressure on water resources in Beijing over the past ten years. The state of WRS hinder the development of two systems. Given these circumstances, it is necessary to take more reasonable water resource conservation measures to release the stress on water resources.
In relation to the secondary indicators, the statistical results of obstacle factors show that fixed asset investment growth rate, number of public transports per million people, dwelling area per capita and urban road area per capita were the key indicators restricting the healthy development of urban and water resources. These findings show that, during the 2008-2017 period, the urbanization population has increased rapidly, and Beijing's infrastructure is insufficient to meet the needs of the increased urban population. Water consumption for industrial production, water consumption of agricultural irrigation and urban water supply rate were the three key indicators that had the highest influence in the health development of two systems. Beijing is located in the north of China, in an area with a dry climate; this has led in the past to serious shortages of water resources. Since the reform and opening-up of the economy, Beijing is experiencing a process of industrialization and urbanization, and the industrial and agricultural development has led to an increase in industrial and agricultural water use which pose a significant risk to water resources security.
Conclusions
This study explored the coordination relationship between urbanization and WRS in Beijing by using the ICCD model and obstacle degree model. The main contributions of this research are as follows. Firstly, the indicator systems of urbanization and WRS have been established. These indicators can provide criteria to measure changes in the coupling degree in both systems. Secondly, this study revealed the dynamic trends present in the coupling coordination state between urbanization and WRS, using the ICCD model. Thirdly, the indicators that restrict the coordination development of urbanization and WRS systems are determined by using the obstacle degree method, which can provide a solid knowledge base for policy-makers to adjust water resources use plan and urbanization development policies.
Over the past ten years, Beijing has experienced a rapid urbanization process, which put a lot of pressure on water resources security. Overall, the state of the coupling coordination between the two systems has improved, passing from an imbalance state towards a good coordination state. However, the coupling coordination degree between the two systems is still low. The social urbanization, spatial urbanization, pressure and state subsystems are the main obstacle subsystems.
According to the research results, we propose the following suggestions and prospects: (1) policy-makers should adjust the urban development strategy by changing the development mode of spatial urbanization. The "chain effect" of medical, education, and infrastructure development on social urbanization can be improved through a rational allocation of resources. Moreover, more attention should be paid to the environmental factors when formulating the new-urbanization development strategy; (2) In order to guarantee the sustainable development of urbanization and water resources, the government should pay more attention on relieving the pressures on water resources caused by the rapid urban development. The local government needs to increase the investment to optimize the industrial structure, promote the adoption of innovative water-saving technologies, and develop clean energy to reduce sewage discharge. In addition, it is necessary to propose a water consumption strategic plan to improve water utilization effect and reduce water resource consumption of industrial output.
The coordination relationship between urbanization and WRS is a very complicated system. Due to data source limitations, this study focused only on the investigation of one city, i.e., Beijing. Future research should focus on the methodology to perform a comprehensive analysis of China's urban agglomerations, such as the Beijing-Tianjin-Hebei urban group. A comparative analysis should be performed of different regions in the urban group through the use of spatial gradients and temporal scales, to have a win-win effect. | 8,440.6 | 2019-11-28T00:00:00.000 | [
"Computer Science"
] |
Superconformal geometries and local twistors
Superconformal geometries in spacetime dimensions $D=3,4,{5}$ and $6$ are discussed in terms of local supertwistor bundles over standard superspace. These natually admit superconformal connections as matrix-valued one-forms. In order to make contact with the standard superspace formalism it is shown that one can always choose gauges in which the scale parts of the connection and curvature vanish, in which case the conformal and $S$-supersymmetry transformations become subsumed into super-Weyl transformations. The number of component fields can be reduced to those of the minimal off-shell conformal supergravity multiplets by imposing constraints which in most cases simply consists of taking the even covariant torsion two-form to vanish. This must be supplemented by further dimension-one constraints for the maximal cases in $D=3,4$. The subject is also discussed from a minimal point of view in which only the dimension-zero torsion is introduced. Finally, we introduce a new class of supermanifolds, local super Grassmannians, which provide an alternative setting for superconformal theories.
Introduction
Conformal symmetry has been extensively studied over the years because of its relevance to various aspects of theoretical physics: two-dimensional conformal theory models and statistical mechanics; four-dimensional N = 2 and 4 superconformal field theories; as an underlying symmetry that may be broken to Poincaré symmetry in four-dimensional spacetime, and as a tool to construct off-shell supergravity theories. It played an important role from early on in studies of quantum gravity [1]. Here we shall be interested in the geometry of local (super) conformal theories as represented on bundles of supertwistors over superspace.
The first paper on D = 4, N = 1 conformal supergravity (CSG) [2] used a formalism in which the entire superconformal group was gauged in a spacetime context. Although this was not a fully geometrical set-up because supersymmetry does not act on spacetime itself, but rather on the component fields, this paper nevertheless introduced the idea that gauging conformal boosts and scale transformations could be very useful. After Poincaré supergravity had been constructed in conventional (i.e. Salam-Strathdee [3]) superspace [4], [5], it was subsequently shown how scale transformations could be incorporated as super-Weyl transformations [5,6]. On the other hand, in the completely different approach to superspace supergravity of [7], super scale transformations were built in right from the start. However, it has turned out to be difficult to extend the latter approach to other cases, such as higher dimensions or higher N. The superspace geometry corresponding to all D = 4 off-shell CSG multiplets was given in [8], [9] using conventional superspace with an SL(2, C) × U(N) group in the tangent spaces together with real super-Weyl transformations. The superspace geometries corresponding to most other off-shell CSG multiplets have also been described, from the conventional point of view in D = 3 [10,11,12] and from conformal superspace in D = 3, 4 and 5 [13], [14], [15], [16], [17], and in D = 6 (the (1, 0) theory) [18], [19], [20]. The D = 4 N = 2 theory [21] and the D = 6 (1, 0) theory were also discussed earlier in harmonic superspace in [21] and [22] respectively. In addition, some years ago, the D = 6 (1, 0) theory was formulated in in projective superspace [23]. This theory as well as the D = 6 (2, 0) theory were recently discussed in terms of local supertwistors in [24].
In the non-supersymmetric case a standard approach is to work with conventional Riemannian geometry augmented by Weyl transformations of the metric. The Riemann tensor splits in two parts, the conformal Weyl tensor, and the Schouten tensor which is a particular linear combination of the Ricci tensor and the curvature scalar. This object transforms in a connection-type way under Weyl transformations and can be used to construct a new connection known as the tractor connection [25,26,27]. This takes its values in a parabolic subalgebra of the conformal algebra and acts naturally on a vector bundle whose fibres are R 1,n+1 (in the Euclidean case), thus generalising in some sense the standard conformal embedding of flat n-dimensional Euclidean space. Although this idea does not carry over straightforwardly to the supersymmetric case, a similar construction, which does, can be made by replacing the (n+ 2)-dimensional fibre by the relevant twistor space. This formalism is called the local twistor formalism and was introduced in D = 4 in [29]. It has been discussed in the supersymmetric case 1 for N = 1, 2 in D = 4 by Merkulov, [30,31,32]. Such a formalism depends on the dimension of spacetime because the twistor spaces also change.
In the following we shall take a slightly different approach to that of Merkulov in that we start from a connection taking its values in the full superconformal algebra. The conformal superspace formalism alluded to previously [13], [14] is a supersymmetric version of the Cartan connection formalism [33], which was first mentioned in the superspace context in [34]. The formalism we advocate here can be thought of as an associated Cartan formalism in that the connection acts on a vector bundle rather than a principal one.
In section 2 we briefly discuss bosonic conformal geometry starting from a local scalesymmetric formulation from which we recover the standard formalism; we also review the rôle of the Schouten tensor. We then give a brief outline of the local twistor point of view in D = 3, 4, 6. A connection for the full superconformal group in the twistor representation is given and the standard formalism is recovered by a suitable choice of gauge with respect to local conformal boost transformations. This is then generalised to the supersymmetric case starting with a quasi-universal discussion in section 3, and followed up by details of the D = 3, 4 and 6 cases in sections 4,5 and 6 respectively. In section 7, the D = 5, N = 1 case, which is slightly different, is discussed. In all cases constraints have to be imposed in order to reduce the component field content to that of the respective minimal off-shell conformal supergravity multiplets, the main one being that the even torsion two-form should be the same as in the flat case. In section 8, we describe the minimal formalism, in which one introduces only the dimension-zero torsion. This formalism applies equally well in all cases, including D = 5 (N=1). In section 9 we introduce a new class of supermanifolds which we call local super Grassmannians, which could be useful in constructing alternative approaches to the subject. In section 10, we make some concluding remarks.
Conformal geometry
From a mathematical point of view a conformal structure in n dimensions can be thought of as G-structure with G = CO(n), the orthogonal group augmented by scale transformations, see, for example [26,27]. This group does not preserve a particular tensor, but only a metric up to a scale transformation, so that angles but not lengths are invariant. Similarly, a classical non-gravitational theory of fields which is locally scale invariant in a background gravitational field will be conformally invariant with respect to the standard conformal group of flat spacetime when the metric is taken to be flat. So conformal transformation are in this sense already present in a theory which is CO(n) invariant. Nevertheless, it can sometimes be of use to make conformal boosts manifest. This can be done, for example, in the Cartan formalism which we shall describe shortly. We shall start from a Weyl perspective in which the structure group is CO(n), so that, in addition to the usual curvature, there is also a scale curvature. It can easily be seen that the torsion is invariant under shifts of the scale connection, and from there, one can either use this shift symmetry to set the scale connection to zero, or introduce a new connection in order to make the curvature invariant. The first method leads back to the conventional approach, whereas the second leads to the Cartan formalism with the symmetric part of the conformal boost connection identified with the Schouten tensor, which will be defined below.
Let e a denote a local basis for the tangent space with dual basis forms denoted by e a . So e a = e a m ∂ m ; e a = dx m e m a , where x m denotes local coordinates and where the coordinate and preferred bases are related to each other by the vielbein e m a and its inverse e a m . Infinitesimal local co(n) transformations act on the frames by The parameter L a b denotes a local o(n) transformation preserving the flat metric η ab , and S is a local scale transformation, although we note that η ab is not invariant under the latter.
The torsion 2-form is defined by a + e a ω 0 := De a + e a ω 0 , where ω a b is the o(n) connection and ω 0 := e a ω a the scale connection. In components It is clear that the torsion is invariant under shifts of ω a by a parameter X a , say, provided that the o(n) connection tranforms by It is also clear that we can use ω a,b c to set the torsion to zero while still maintaining this symmetry, and from now on we shall take this to be the case.
The curvature 2-form is given byR a b = R a b + δ a b R 0 where R 0 is the scale curvature. Making use of the first Bianchi identity we find where the trace-free part C ab,cd is the Weyl tensor and where with the symmetric part P ab being the Schouten tensor (also known as the rho-tensor). It is given by Here R ab is the usual symmetric Ricci tensor and R the curvature scalar. Under a finite shift of ω a accompanied by a (2.6) transformation we find where ∆ denotes a finite change.
At this stage one option is to use X to set the scale connection to zero. This gauge will be preserved by a combined X transformation and a scale transformation provided that where S here denotes a finite scale transformation. So at this point we have regained the conventional formalism: the local scale transformations are no longer regarded as part of the tangent space group but instead are simply rescalings of the metric without an additional scale connection. The transformation of P ab is given by (2.10) but with X replaced by Y , accompanied by an overall factor of S −2 . This factor then disappears in a coordinate basis so that the usual formula is recovered.
In the conventional formalism one can define a new connection, the tractor connection, which takes its values in the Lie algebra of the conformal algebra so(n + 1, 1), and which acts naturally to give a covariant derivative acting on vector fields in R n+1,1 . However, the tractor formalism cannot be adapted directly to the supersymmetric case because the superconformal groups are not simply given by super Lorentz groups in two higher dimensions, one of which is timelike. Instead, one should think about supertwistors because they naturally carry the fundamental representations of superconformal algebras. It is therefore more relevant to study local twistor connections, introduced in [29] in the non-supersymmetric case. Then one has to consider different twistor spaces according to the dimensions of spacetime. In general we can write an element of the conformal Lie algebra, h, in the form where α, α ′ etc denote spinor indices which have two components for D = 3, 4 and four components for D = 6. The diagonal elements a, d are Lorentz and scale transformations while the off-diagonal ones are translations, b, and conformal boosts c.
The (Lie algebra valued) connection is
where each entry is a one-form with the index structure given in the previous equation. The diagonal elements are connections for Lorentz and scale transformations, while the off-diagonal entries are the vielbein form e and the conformal connection f . The transformation of A is A → g −1 Ag + dg −1 g , (2.14) and the curvature F is defined by It transforms covariantly under g without a derivative term. Its components are given by where the diagonal terms are the covariant Lorentz and scale curvatures, T is the torsion and S the conformal curvature. In terms of the standard torsion and curvature one has The objective now is to construct an element of the conformal group depending on the scale and conformal parameters, and a connection one-form with values in the conformal algebra which will transform in the required way provided that the transformation of the Schouten tensor is as given above in (2.10). The group element g is given by where the diagonal elements involve unit matrices and C is a covector, C α ′ β . Under a conformal transformation of this form one can straightforwardly compute the changes in the components of A to beω where again the index structure on the various elements follows from the original definitions, and where Y = S −1 dS. In order to compare with the previous discussion we need to eliminate the scale curvature and express the conformal boost parameter in terms of the scale parameter S. In addition, we use the Lorentz connection to set the torsion to zero as usual. The conformal connection f is a covector-valued one-form, f b = e a f ab , and we can use the anti-symmetric part of f ab to set (R 0 ) ab = 0. If we take the trace of the transformation ofω, which is proportional to the transformation of ω 0 , we can see that the conformal boost parameter can be used to set ω 0 = 0, so that R 0 = 0 as well. This gauge will be preserved if C a ∝ Y a . This gives us the desired result: the scale curvatures R 0 and R 0 are both zero as is the antisymmetric part of f so that we can identify the remaining symmetric part f (ab) with the Schouten tensor P ab .
These transformations agree with the usual ones in the conventional formalism, expressed in spinor notation and with respect to an orthonormal basis. For the connection form ω a b this translates to For P ab we recover the standard transformation for the tractor connection which in an orthonormal basis reads Now let us return to the Weyl picture with non-zero scale connection but still with the torsion taken to be zero. If we redefine the Lorentz and scale curvature two-forms by where Q b := e a Q a b , then we observe that the primed quantities are invariant under infinitesimal X gauge transformations, for which δQ a =DX a . (2.26) We can also define a new curvature two-form We can interpret X as a local conformal boost parameter, Q a as the corresponding gauge field and R ′a as its curvature. We have thus arrived at the conformal gauging picture starting from the Weyl perspective. In fact, we can identify the combined primed curvatures together with theR curvatures, and Q with f , in (2.17). Of course, this is just the converse to deriving the conventional point of view starting from the conformal perspective as discussed, for example, in [14] To conclude this outline of non-supersymmetric conformal geometry we briefly review the theory of Cartan connections of which conformal gauging is an example. Let H, G be Lie groups, H ⊂ G, with respective Lie algebras h, g let P be a principal H-bundle over a base manifold M. A Cartan connection on P is a g-valued form ω equivariant with respect to H, and such that ∀X ∈ h ω(X) = X and ω gives an isomorphism from T p P to g, for any point p ∈ P .
A simple example is given by an n-dimensional manifold M with G = SO(n) ⋉ R n , H = SO(n). Then g = g −1 ⊕ h, where g −1 corresponds to translations and h to rotations. The translational part of ω is identified with the soldering form, i.e. the vielbein, while the h-part corresponds to an so(n) connection. In the conformal case h = g 0 ⊕ g 1 where g 0 = so(n) ⊕ R, and g 1 = R n . So g o corresponds to rotations and scale transformations while g 1 corresponds to conformal boosts. The grading of the Lie algebra then corresponds to the dilatational weights of the various components. The curvature of ω, R = dω + ω 2 , also has components corresponding to this grading, and it is straightforward to see that they correspond to the torsion, the curvature and scale curvature, and the conformal boost field strength respectively.
Superconformal geometry
In this section we shall present a quasi-universal formalism for superconformal geometries in D = 3, 4 and 6, working in complexified spacetime for the moment. The basic idea is to introduce a connection one-form A on superspace taking its values in the appropriate superconformal algebra acting on the super vector bundle whose fibres are super-twistors. These have the form The connection is given by For D = 3 the pair (αβ) on (E, F ) are symmetric whereas in D = 6 they are antisymmetric. The connection components are as follows: (E αβ ′ , E αj ) correspond to translations and Q-supersymmetry, (F α ′ β , F α ′ j ) to conformal and S-supersymmetry, while the diagonal Ωs correspond to Lorentz symmetry above the line and internal symmetry below the line with the hats indicating that the scale connection is also included. (Ẽ,F ) on the third line are appropriate transpositions of (E, F ). We will occasionally refer to the internal connection as Ω I .
The curvature two-form is given by F = dA + A 2 ; its components in matrix form are: HereD denotes the exterior covariant derivative for the Lorentz, scale and internal parts of the algebra. The first terms on the right of the first two equations are the standard even and odd superspace torsion tensors constructed in the usual way from thê Ω connections,T αβ =DE αβ ,T αj =DE αj , while the non-calligraphic curvature forms on the right in the last three lines are the standard curvature tensors constructed in a similar fashion.
In D = 3, 6 the one-forms (E αβ ′ , E αj ) can be identified, after converting pairs of spinor indices to vector indices and, if necessary, rescaling, with the basis one-forms of conventional superspace, E A = (E a , E αi ), while in D = 4,Ẽ i α ′ , which becomes the complex conjugate of E αi in real superspace, is also required in order to complete the basis forms.
The forms E A are associated with super-translations, and play the role of soldering forms in this context. This means that the translational part of the algebra can be subsumed into super-diffeomorphisms.
In a similar fashion, we can identify the pair (F α ′ β , F α ′ j ) with a super-covector-valued one-form F B = E A F AB . In D = 4 the tilded odd forms are essentially the complex conjugates of the untilded ones (in real spacetime) and similar identifications can be made in other dimensions.
We remark in passing that, although there is also a conformal supergravity theory in D = 5 for N = 1 [35], [36], which can be described in conventional superspace [16] and in conformal superspace [17], it does not admit a straightforward description in the supertwistor formalism. In this case the spinor indices are four-component, with no distinction between primed and unprimed, vectors can be represented by skew-symmetric symplectic-traceless spinors, e.g. E αβ = −E βα , η αβ E αβ = 0, where η αβ is the symplectic matrix (charge conjugation matrix), and where the spacetime part of the curvature is γ atraceless,Ω α β (γ a ) β α = 0. However, the bilinear fermion terms in the bosonic curvatures in (3.4) do not preserve these contraints. Nevertheless, we shall see in section 7, that the formalism can be amended to take the D = 5 case into account.
The Bianchi identity is Written out in components this is, for the torsions, for the Lorentz, scale and internal symmetry curvatures, and for the superconformal curvatures, We now consider the supersymmetric counterpart of the group element (2.18) given by with inverse given by For the moment we can think of g as an element of the complex supergroup SL(M 0 |M 1 ), where (M 0 |M 1 ) denote the even and odd dimensions of supertwistor space. In the real cases we will find that ∆ is related to Γ and that C will have symmetry properties depending on the case in hand. The transformation of the connection A is given by (2.14). For the superspace basis forms we find while for the connections we find For the S-supersymmetry connections we have The conformal connection, F α ′ β , transforms as The transformations of the field strengths can also be found straightforwardly. For the torsions we have while for the Lorentz, scale and internal curvatures we have: The scale curvatures for D = 3, 6 are given by the trace ofR α β given in (3.4), while for D = 4 we have to take the sum of the traces ofR α β and its complex conjugate; in all cases, we can write where k is a constant depending on the dimension of spacetime, and where F [AB] is the graded anti-symmetric part of F AB , the latter having no symmetry. Since F A is a connection one-form we are free to add a tensorial part to it and thereby set the scale curvature R 0 = 0, after which (dΩ 0 ) AB = −F [AB] . As in the bosonic case Ω 0 transforms by a shift under conformal and S-supersymmetry transformations, as can be seen from the trace of the first line in (3.13), and can therefore also be set to zero. We are then left with the symmetric part of F AB which we can be identified as the super Schouten tensor. There is then a residual local super-Weyl invariance which we will present in more detail below for each case.
To put more flesh on this general outline we shall now go through the various cases in turn.
D = 3
The superconformal group in D = 3 is SpO (2|N). This is the same as the orthosymplectic group, but with the symplectic factor written first to indicate that it refers to the spacetime part. It acts on the supertwistor space C 4|N (in the complex case) and consists of (4|N) × (4|N) matrices g which preserve the symplectic-orthogonal form The invariance condition is where the st superscript denotes the super-transpose of the matrix g. This is the same as the ordinary transpose except for the odd component in the lower left corner which has an additional minus sign. For a Lie superalgebra element h we have, correspondingly, The reality condition needed to restrict to real spacetime (and superspace) is where K has a similar structure to J but with the minus sign on the second row replaced by a plus sign. The form of a real superalgebra element h is therefore where d = −a t , b and c are symmetric, e is anti-symmetric, γ is real and δ is imaginary.
Since the connection is a g-valued one-form it can be written as (4.6) Here E αβ and F αβ are real and symmetric, E αj is real and F α j is imaginary (for later convenience). On the bottom rowẼ i β = −E β i , whileF i β = F β i , where the o(N) indices are raised or lowered by the flat Euclidean metric. The Lorentz and scale connections arê with Ω α β = Ω β α as the trace-free Lorentz connection and Ω 0 the scale connection. As usual two-component spinor indices are raised and lowered with the epsilon tensor, so Ω αβ is symmetric.
It is straightforward to compute the components of the curvature two-form, but for the moment we shall focus on the scale curvature. It is given by where R 0 = dΩ 0 . The last two terms can be rewritten as Note that F AB is not necessarily graded-antisymmetric, although it is when contracted with two sets of basis forms. (We have suppressed the wedge symbol in the two-forms above). Here we have identified E αi as the standard odd basis forms of superspace and set E αβ = −E a (γ a ) αβ , where E a are the standard even basis forms. (Note that this involves a rescaling since there would normally be a factor of 1 2 when going from bi-spinors to vectors.) We therefore have (4.10) As discussed briefly above, we can use the freedom to adjust a connection by a tensorial addition to set (R 0 ) AB = 0, after which (R 0 ) AB = 2F [AB] . As we shall see shortly below, superconformal gauge transformations (i.e. conformal boosts and S-supersymmetry) can be used to set Ω 0 = 0, after which F AB will become graded symmetric. We can then identify F AB as the super-Schouten tensor.
We shall now exhibit a finite superconformal transformation which will allow us to set Ω 0 = 0 and to identify residual superconformal transformations in terms of scale transformations, or what one could call (finite) super Weyl transformations in this context. It is given by The parameters (S, C, Γ) are those for scale transformations, conformal boosts and special supersymmetry respectively. C is symmetric on its spinor indices (i.e. it is a vector) and real, while Γ is taken to be imaginary. It is straightforward to check that this is indeed an element of the superconformal group; one can find the effect of such a transformation on the connection by using the standard formula For the moment we shall focus on the transformation of the scale connection Ω 0 . It is given by where the one-form Y := S −1 dS. We learn two things from this equation: first, the parameters C, Γ can be used to set the even and odd components of the one-form Ω 0 to zero, and second, we can determine the residual symmetry transformations in terms of Y . In other words, having set Ω 0 = 0 we have where the derivatives are now the standard superspace covariant derivatives, and where we have denoted the odd component of Y by Υ for later use. The finite transformations of the basis forms are: The finite changes of the Lorentz and o(N) connections are: The transformations of the superconformal connections are and Here Γ 2 αβ := Γ α k Γ βk , is antisymmetric on α β, while the symmetrisations are on α, β only.
The formalism given above applies quite generally regardless of whether any constraints have been imposed or not. The basic constraint we shall choose for D = 3 (and in fact in all cases) is T 0 = 0, which is clearly superconformally invariant, from (3.16). We can now use conventional constraints, including some for the conformal and superconformal potentials, as well as the Bianchi identities, to show that we can always choose T 1 = R = 0, remembering that we always choose the scale curvature to vanish. Thus in D = 3 the only covariant field strengths that are non-zero are the internal curvature R I and the superconformal curvature S A = (S a , S αj ). From the Bianchi identity (3.6) we can then see that R aβj,kl = (γ a λ) βjkl , for N ≥ 2, where each field is totally antisymmetric on its SO(N) indices. In fact, for N ≥ 2, the leading component in the Weyl multiplet, i.e. the conformal supergravity field strength multiplet, is the leading non-zero component in R I .
For N = 1 R I is identically zero, and only S A survives. From the Bianchi identities it is then easy to show that the only non-zero components are S ab,γ and S ab,c . The former is equivalent to a gamma-traceless vector-spinor, while the latter is equivalent to a symmetric traceless tensor. These are the Cottino and Cotton tensors respectively. It is then straightforward to see that the component fields in the Weyl multiplet can be arranged diagrammatically as follows 2 : Here, each [p, q] entry denotes a field with p antisymmetrised internal indices and q symmetrised spinor indices. For N ≤ 4 clearly only the right part of the diagram survives, with the last two components being the Cottino and Cotton tensors respectively. For N = 6 we note that there is an extra U(1) gauge field not included in the superconformal group. This field plays an important role in the BLG formalism for multiple membranes [39,40], and was discussed in the superspace context in [12]. For N = 6, therefore, we have an additional closed two-form field strength G with components G αiβj = ε αβ W ij (4.21) G aβj = (γ a λ) βj , as well as G ab , where W ij is the SO(6) dual of W ijkl and λ i is the SO(6) dual of the 5-index fermion on the left of the diagram above. For N = 8 an extra constraint is required in order to avoid having two gravitons; this is achieved by imposing a self-duality constraint on W ijkl , and this in turn implies that the field content of the N = 7 and N = 8 Weyl multiplets are the same, so that the left-hand diagonal line can be terminated at N = 6.
To conclude this section we translate the above results into conventional superspace. The main consequence is that the components of the superconformal potentials now appear explicitly in the torsions and curvatures. These potentials are graded symmetric and make up the components of the super-Schouten tensor as we remarked earlier. Making use of equation (3.4) which relates the conformally covariant tensors on the left to the standard superspace ones on the right we find, for the even torsion two-form 22) or, in components, which we recognise as the usual expressions [10]. For the odd torsion we find This implies for the components, where we have written the right-hand sides in terms of the super Schouten tensor. For the dimension-one component, since F αi,βj is antisymmetric under the interchange of pairs of indices, we have F αi,βj = ε αβ K ij + (γ a ) αβ L aij , where K is symmetric on the internal indices and L antisymmetric, so that This differs slightly from the expression for the dimension one torsion given in [10,12], but can be brought into agreement by a further redefinition of the dimension-one o(N) connection. (On the other hand (4.26) does agree with the form given in [11].) The dimension three-halves torsion is the gravitino field strength, which in three dimensions can be dualised to a vector-spinor Ψ aγk . It is given in terms of the super Schouten tensor by where Ψ a αi := 1 2 ε a bc Ψ bc αi = ε a bc (γ b ) αβ F cβ i . (4.28) The components of the standard superspace curvature tensors can also be easily computed from equation (3.4), using the fact that the covariant Lorentz and scale curvatures are zero.
D = 4
The superconformal groups for D = 4 N-extended supersymmetry are SU(2, 2|N), for N = 1, 2, 3 and P SU(2, 2|4) for N = 4. Elements of this group are matrices g with unit superdeterminant which obey and where the numerical subscripts denote the dimenions of the unit matrices. For an element h of the corresponding Lie superalgebra su(2, 2|N) we have Such an element can be written where b and c are hermitian, e is anti-hermitian, and the factors of i are put in for convenience (we follow the conventions of [42] here). The supertrace condition then implies that −a 0 +ā 0 = e k k , The connection in D = 4 can be written and the curvature is where the expressions for the various components are given by (3.4) with appropriate factors of i. We setΩ where Ω 0 and Ω 1 are the scale and U(1) connections respectively and where Ω α β is tracefree.
A group element depending on scale and superconformal transformations is easily constructed in a similar fashion to D = 3. It is given by with C being hermitian. The inverse is given by As in D = 3 we can use the antisymmetric part of the superconformal connection together with a superconformal transformation to set R 0 = R 0 = 0 and leave residual superconformal transformations in terms of the scale parameter S. The transformation of the scale connection Ω 0 under g is given by 14) The right-hand side can be rewritten as −Y + C, where C = E A C A = E a C a + E αi Γ αi − Eα i Γα i (after rescaling C αα → 1 2 C αα and Γ αi → 2iΓ αi ). We can then use C to set Ω 0 = 0, leaving residual superconformal transformations with Y = S −1 dS = C.
The scale curvature R 0 can be written as where R 0 = dΩ 0 and where We therefore have Finally, we can use C A to set (Ω 0 ) A = 0 and the graded antisymmetric part of the superconformal connection f [AB] to set R 0 = 0. After this, we have residual superconformal transformations determined by S, as above, while the remaining part of f AB is graded symmetric and can be identified as the super-Schouten tensor for D = 4.
The super-Weyl transformations of the basis forms and the connections are given by: where it is understood that the trace over α and β is to be projected out in the third line. The components of the super-Schouten tensor at dimension one are f αi,βj , f j αi,β plus complex conjugates. Graded symmetry then implies that while f j αi,β is skew-hermitian. At dimension three-halves there is just a complex vectorspinor f aβj which can be identified with that part of the gravitino field-strength tensor which is not part of the Weyl supermultiplet.
The basic constraint for D = 4 is T a = 0, as for the other cases. Using the Bianchi identities and choices for the connections, including the superconformal ones, one finds that the non-zero components of the covariant torsion T γk are The components of the covariant curvature can easily be found; for example, at dimension one, they are [9]: Further details of the torsions and curvatures, including the Schouten terms, can be found in [8,9].
The complex conjugate of a four-component D = 6 spinor u α is denotedūα but this representation is equivalent to the undotted one as there is a matrix B αα relating the two, uα =ū α B αα . B is unitary, B * B = 1, and satisfies BB = −1. 3 Similar remarks hold for the inequivalent spinor representation denoted by a lower index, v α say. So a twistor z consists of a pair of 4-component spinors and can be written A supertwistor in D = 6 can therefore be written in the form where i = 1, . . . 2N for (N, 0) supersymmetry, N = 1, 2. Here (u, v) are commuting objects while λ is odd. The superconformal group is OSp(8|N) in complex superspace and preserves the orthosymplectic metric K, so for an element g of the group we have where st denotes the supertranspose, which is the same as the ordinary transpose except for an additional minus sign for each element in the bottom left (odd) sector. The 2N ×2N matrix J 0 is the Sp(N) symplectic invariant. In real spacetime we need to impose the reality constraint An element of the Lie superalgebra, h, has the form The orthosymplectic constraint implies that b and c are skew-symmetric and d = −a t , as before, while eJ 0 = −J 0 e t . For the odd components we have or, in indices, Next we need to impose reality in order to move to real superspace. This is done with equation (3.3) but this time with R extended by the unit matrix in the odd-odd sector, as in (6.4). The result of imposing gRg * = g, at the Lie algebra level is that a, b, c and d obey the same conditions as in the bosonic case while e satisfies e = −e * . (6.10) For the independent odd components of h we have: These constraints simply mean that ε αi and ϕ αi are symplectic Majorana-Weyl spinors as one would expect. They are respectively the parameters for Q and S supersymmetry transformations.
The connection A is
where E A = (E a , E αi ), with E a = 1 2 (γ a ) αβ E αβ , will be identified with the even and odd super-vielbein one-forms of the underlying superspace, F A = (F a , F αi ) is the connection for superconformal transformations, i.e. S-supersymmetry and standard conformal transformations,Ω α β (Ω α β ) is the Lorentz plus scale connection and Ω i j the internal sp(N) connection. On the bottom line, F iβ and E i β are transposes of F α j and E αj with the internal index lowered by (J 0 ) ij = η ij . The curvature two-form, F = dA + A 2 , has components given in matrix form by: where, from (3.4), with α ′ → α, and with appropriate factors of i, Here,D is the superspace covariant exterior derivative with respect to scale, Lorentz and internal symmetries, while the leading terms on the right, for the top four lines, are the standard superspace and torsion and curvature tensors for the corresponding connections (extended by the scale connection).
The detailed form of the Bianchi identity DF = 0 is given, mutatis mutandis, by (3.6) to (3.8). We shall now repeat the steps carried out in the non-supersymmetric case to reduce the conformal and superconformal boost parameters to derivatives of the scale parameter. We introduce a group element g(S, C, Γ) where S is a scale parameter and Γ α i is an S-supersymmetry parameter. It is given by where the index structure is as above, in (6.12) for example, where J 0 is the Sp(N) invariant discussed previously, and wherẽ then, from (6.16), C is antisymmetric since ΓJ 0 Γ t is symmetric. Reality implies that Note also that the latter equation implies that Under such a transformation the components of A transform as follows: The curvature transformations are obtained from those for the potentials by replacing the latter by the former in the equations above. In addition, for the superconformal curvatures, the derivative terms in (6.19) must be replaced by curvature terms as follows: If we take the trace of the third equation in (6.14) we find that where we have defined the super-vector-valued one-form F A = (F a , F αi ). By adjusting this potential we can choose R 0 = 0 so that the (graded) antisymmetric part of F AB is now proportional to R 0AB . Taking the trace of the transformation ofΩ α β we find so that we can use the parameters C a and Γ αi to set Ω 0 = 0. This leaves residual transformations determined by the scale parameter S, where C A = (C a , Γ αi ). We shall take the components of Y to be given by Y A = (Y a , Υ αi ) in order to clearly distinguish the even and odd components where necessary.
The basic constraint that we shall choose is to set the even torsion two-form to zero, which is clearly covariant. Using this, conventional constraints corresponding to connection choices (including superconformal ones) and the Bianchi identities, one finds that the covariant torsions (i.e. the torsion components of F ) are given by where G abcjk is anti-self-dual on abc (by its definition), anti-symmetric on jk and symplectictraceless on jk for N = 2, and where Ψ ab γk is the gamma-traceless gravitino field strength. For the curvature tensor components we find R aβj,kl = −8(γ a χ) β(k,l)j , R ab,cd = C ab,cd , R ab,kl = F ab,kl . (6.26) The dimension three-halves field χ α i,jk is antisymmetric on jk : it is a doublet for N = 1 while for N = 2 it is in the 16 of sp(2), i.e. it is symplectic-traceless on any pair of indices. The graviton field-strength C ab,cd has the symmetries of the Weyl tensor, while F ab,kl is the sp(N) field-strength tensor. We have thus located all of the components of the conformal supergravity field strength supermultiplets except for the dimension-two scalars which are given by for the (1, 0) case this reduces to a singlet, while for (2, 0) C ij,kl is in the 14-dimensional representation of sp (2). Fuller details of this multiplet can be found in [24].
The standard superspace torsion tensors differs from the covariant ones by components of the Schouten tensor F AB .
D = 5
The D = 5 superconformal theory, which exists only for N = 1, where the algebra is f(4) [37], has been described in (conformal) superspace in [16], [17]. However,it does not fit into the supertwistor formalism as well as the other cases due to the constraints that one has to impose on the top left quadrant of A (starting from the D = 6, (1, 0) case). These are where the F s on the right-hand side are the top left quadrant entries in F , and where η αβ is the 4 × 4 symplectic matrix, (the charge-conjugation matrix in D = 5); the odd torsion, S-supersymmetry curvature and internal curvature are defined as before. The relations between the covariant curvatures and the standard superspace ones is the same as in (6.14) with the difference that the unwanted terms have to be projected out as in the preceding equation. For example, we have where the angle brackets indicate that the symplectic trace terms on the first line and the (γ a )-trace terms on the second line have been projected out. Note that the middle term on the right in the second equation is automatically (γ a )-traceless because E αγ and F γβ are now five-dimensional vectors, so that multiplying by γ a and taking the trace is identically zero.
We now briefly describe the constraints and the geometry that follows from them. As in the other cases we set T αβ = 0 from which the even torsion two-form T αβ takes its usual γ-matrix form, as can be seen from the first line in (7.3) on setting the left-hand-side to zero. The only non-zero torsion components are at dimension one and three-halves. The former is where G abc is the dual of G ab , The field G ab can be identified as the leading component of the D = 5, N = 1 Weyl multiplet. The covariant dimension three-halves torsion is the gamma-traceless gravitino field strength.
Note that (7.4) can be obtained by dimensional truncation from the D = 6, N = (1, 0) expression. The remaining components are a dimension three-halves spinor, χ αi , and, at dimension two, the Weyl tensor C ab,cd , the sp(1) field strength F ab,kl and a scalar field C. The curvature components are formally the same as the D = 6, (1, 0) case given in (6.26), except that while the dimension three-halves spinor χ αi,jk → ε i(j χ αk) . Finally, the dimension two scalar can be defined to be C = D αi χ αi . (7.7) The standard superspace torsions and curvatures can be obtained from the covariant ones in the same way as in D = 6, (1, 0), using (6.14). They are the leading terms on the right-hand sides of the first four equations, but with the symplectic and γ a traces projected out. The Schouten terms come from the equation for the odd torsion in (6.14).
Minimal approach
We shall now describe the superspace geometry corresponding to these conformal supergravity multiplets from a minimal perspective. Since the details coincide with those derived previously we will be brief. We define a superconformal structure on a supermanifold with (even|odd) dimension (D|D ′ ) to be a choice of odd tangent bundle T 1 (of dimension (0|D ′ ) which is maximally non-integrable, so that the even tangent bundle T 0 is generated by commutators of sections of T 1 , and such that the Frobenius tensor, F, defined below, is invariant under R ⊕ spin(1, D − 1) ⊕ g, where g is the internal symmetry algebra for the case in hand: g = so(N) for D = 3, u(N) for D = 4, sp(N) for D = 6, with N = 1, 2, and for D = 5, with N = 1. The components of F with respect to local bases E αi for T 1 and E a for T * 0 are given by where k ij = δ ij for D = 3 and η ij for D = 5, 6 respectively and where , denotes the pairing between vectors and forms. The R factor denotes an infinitesimal scale transformation, δE a = SE a , δE αi = − 1 2 SE αi , while the spin and symplectic algebras act in the natural way on the spacetime and internal indices. For D = 4, T 1 is the sum of two complex conjugate bundles of dimension 2N, T 1 = T 1 ⊕T 1 , and we have We now introduce connections for sp(N) and spin(1, D − 1) and define the torsion and curvatures in the usual way. Note that this procedure involves the complementary basis E αi for T * 1 which is only determined modulo T * 0 , i.e. shifts of the form We could include this in the structure group, along with a corresponding connection, but we shall instead follow the standard procedure of using this freedom to impose some additional constraints at dimension one-half. In addition we shall not include a scale connection so that we have the standard superspace geometrical set-up.
It is clear that the Frobenius tensor is invariant under scale transformations of the form E a → SE a , E αi → S − 1 2 E αi (with the same transformation forĒ iα in D = 4). If we impose constraints to determine E a and E αi at dimension one-half we can then determine their transformations as well. A convenient one to consider, as it does not involve any connection terms, is where the symmetrisation is understood to include lowering the c index on the left-hand side. Making finite super-Weyl (scale) transformations, we find that this constraint will be preserved if Identifying F αiβj c with the dimension-zero torsion T αiβj c , imposing suitable constraints on various components of the torsion corresponding to fixing the odd basis E αi using (8.3) and making appropriate choices for the spin(1, D − 1) and g connections, one can show, with the aid of the usual superspace Bianchi identities and some algebra, that the components of the torsion and curvature tensors can be chosen to agree with those derived previously. The finite super-Weyl transformations in the general case were given in (5), from which it is straightforward to obtain particular cases.
In addition to the fields of the conformal supergravity multiplet, this geometry will also contain the components of the super Schouten tensor F AB , whose transformations can be found in (6.19). We can recover the covariant forms for the torsions and curvatures by reversing the steps made earlier.
Local Super Grassmannians
In the previous sections we have considered superconformal geometries starting from local supertwistors in various dimensions. In all cases we have used the standard split of supertwistor spaces into even twistors together with additional odd components. However, we can choose different (even|odd) splittings which naturally give rise to formulations of superconformal geometry on different supermanifolds. These include chiral, projective [46], [47], [49] and harmonic superspaces [48],and can be thought of from the viewpoint of super flag manifolds as discussed from a pure mathematical perspective in [50], [51] and presented in a more accessible form in [42]. More recently the present authors have used them to discuss super-Laplacians and their symmetries in the context of rigid supersymmetry [52,53,54]. In this section we shall give a brief discussion of some cases relevant to four-dimensional complex spacetime, deferring a fuller discussion to a later publication.
Grassmannians are spaces of p-dimensional planes in C n , with p < n, and super Grassmannians are spaces of (p|q) planes in C m|n , with p ≤ m, q ≤ n (but with p + q < m + n). For example, complex Minkowski space in D = 4 can be thought of as an open set in the Grassmannian of 2-planes in C 4 ,. In the super case, consider D = 4-dimensional spacetime with N = 2 supersymmetries. The corresponding supertwistor space is then C 4|2 , and the Grassmannian of (2|1) planes in C 4|2 is known as analytic superspace. A supertwistor, i.e. an element of C 4|2 can be split into two halves u A and v A ′ , where A = (α, 1) and A ′ = (α ′ , 1 ′ ), where we have used a prime to denote a dotted two-component spinor index. This allows us to rewrite a supertwistor as We can then define a superconformal connection A acting on supertwistors in this basis in the form with corresponding curvature 3) The action of the superconformal group on A and F is formally the same as in the bosonic case, equation (2.14), but note that the diagonal transformations are not simply scale and Lorentz transformations, because the diagonal subgroups are also supergroups. This is therefore a somewhat different way of looking at superconformal transformations which we propose to investigate further in a forthcoming paper.
Finally, we note that such geometries can also be thought of super versions of paraconformal geometries [55].
Summary
In this article we have discussed the supergeometries describing off-shell conformal supergravity multiplets in D = 3 for N = 1 to 8, in D = 4 for N = 1 to 4, and in D = 6 for (N, 0) supersymmetries with N = 1, 2, from the perspective of local supertwistors 5 .
In this formalism one introduces connections taking their values in the superconformal algebras in the twistor representation, which can be thought of as an associated version of the Cartan connection formalism. Similarly, the conformal superspace construction [13] is an adaption of the Cartan formalism to superspace, and is expected to be equivalent to our local supertwistor approach.. From this starting point one can then derive the standard superspace formalism in a systematic fashion. In order to specialise to the minimal off-shell conformal supergravity multiplets one then has to impose constraints. In addition, we discussed the D = 5, N = 1 case for which a slight amendment of the formalism is necessary. We also showed that the same results can be obtained from the minimal formalism in which only the dimension-zero torsion, or Frobenius tensor, is specified. This minimal formalism was previously applied to D = 3 [10,12], where an additional self-duality constraint at dimension one is required, while it was also previously shown in the D = 4 case that the supergeometries also follow from the dimension-zero torsion constraint [8]. In this case an additional dimension one constraint is required to ensure the correct number of component fields. In section 10 we briefly introduced the idea of local super Grassmannians. Such superspaces are best viewed in terms of different (even|odd) splittings of supertwistor space. | 12,318.6 | 2020-12-06T00:00:00.000 | [
"Physics"
] |
Study on Fractal Characteristics of Mineral Particles in Undisturbed Loess and Lime-Treated Loess
In order to explore the fractal characteristics of particle size distribution (PSD) of various minerals in loess and lime-treated loess, the Q4 undisturbed loess and lime-treated loess were studied. From the perspective of multi-scaled microstructure, the internal characteristics of loess were observed and the regularity statistics were carried out from a macroscopic view. Fractal theory was used to quantitatively study the distribution of mineral particles in undisturbed loess and lime-treated loess. It was found that the skeleton particles of undisturbed loess were obvious and the structure of soil was loose. While that of lime-treated loess decreased, the fine particles were connected with each other, and the structure of soil changed from loose to dense. The three mineral particles in the undisturbed loess and lime-treated loess did not accord with the single fractal distribution characteristics, but the total particles had fractal characteristics. The percentage content of the mineral particles in the soil varied greatly with the particle size. In addition, the non-uniform degrees of mineral particles in the two soils from large to small were carbonate minerals of lime-treated loess, carbonate minerals of undisturbed loess, quartz minerals of lime-treated loess, feldspar mineral of lime-treated loess, feldspar mineral of the undisturbed loess, and the quartz mineral of the undisturbed loess. This paper provided a basis for the future study of the different soil mechanical properties of undisturbed loess and lime-treated loess.
Introduction
Loess is a widely-distributed typical structural soil [1][2][3]. Therefore, it is used as a common building material in highway subgrade or roadbed. However, it is a complex porous medium composed of different mineral particles with irregular shapes. At the same time, its macro complexity features such as discontinuity, non-uniformity, and anisotropy are closely related to its microstructure [4]. Therefore, these characteristics cause certain challenges for highway engineering in the loess area.
In recent years, important achievements have been made in the study of soil structure by using fractal and multifractal theory, from the micro scale (µm) to the macro scale (km) [5,6]. The establishment and application of fractal theory provide a new approach to quantitatively describe the particle size distribution (PSD) of soil mineral particles. Fractal dimension can characterize the difference and self-similarity of soil particle distribution, and reflect its uniformity [7]. Xie et al. systematically studied the pore fractal and particle fractal of rock and soil materials, and proposed the measurement method of pore fractal [8]. Liu et al. pointed out that the particle size fractal dimension could reflect the pore and microstructure characteristics of soil, and it was a method of characterizing the non-uniformity of cohesive soil [9]. Hu Ruilin and Zhang Jiru measured the equivalent diameter and distribution of soil particles by Housdoff fractal dimension calculation method and computer image analysis technology [10,11]. The quantitative results of particle distribution fractal dimension, surface characteristic fractal dimension, and pore and contact zone distribution fractal dimension were obtained. The fractal dimension of soil was characterized by the quantitative distribution of particle size, which provided a method for the study of fractal structure. The fractal dimension of soil particles, aggregates and porosity is calculated using the single fractal theory to characterize the composition and uniformity of soil structure, which provides an accurate and simple method for quantitative description of soil structure characteristics. In addition, at present, many scholars use multifractal theory to study the characteristics of soil particle size distribution, and multifractal method can reflect the local heterogeneity and non-uniformity of soil structure in detail [12]. Grout et al. and Posadas et al. believe that the particle size distribution of soil particles does not conform to the principle of single fractal, and the multifractal method can more accurately analyze the PSD characteristics [13,14]. Dong et al. found that the capacity dimension D(0) and information dimension D(1) could be used as potential indicators to reflect the physical properties and mass of soil [15]. Guan et al. concluded that the multifractal spectrum parameters of soil particle size distribution can reflect the non-uniformity of soil particle size distribution [16].
However, due to the limitation of test technology, most of these studies only analyzed the fractal characteristics of the overall particle size distribution of soil, while fewer studies focused on the fractal characteristics of different mineral particles under different soils. The SEM-EDX study was widely used in cement and concrete research [17][18][19]. In this paper, the multi-scale microstructure images obtained by image scanning, processing, and mineral composition identification were more representative and could better reflect the characteristics of macroscopic soil samples. The fractal characteristics of each mineral particle in Q 4 undisturbed loess and lime-treated loess were quantitatively analyzed, and the distribution characteristics of single fractal and multi-fractal of three mineral particles in the two soils were discussed, thus providing a basis for studying different soil mechanical properties of natural loess and lime-treated loess.
Sampling
The Q 4 undisturbed loess sample was taken from the exploratory well of Xining Qinghai-Tibet Science and Technology Museum, which is located in Xining City, Qinghai Province. The untreated loess sample, 6~8 mm in height, was cut from the original ring knife sample and was pressed into the soil extractor. The main physical properties are as follows: natural water content w is 16.7%, soil specific gravity G s is 2.70, natural bulk density r is 16.3 kN/m 3 , void ratio e is 0.93, dry density ρ d is 1.4 g/cm 3 , liquid limit is 24.8%, and the plastic limit is 15.4%. A part of the soil samples was crushed, air-dried, and sieved through an aperture of 2 mm. Lime was weighed, and according to a mass ratio of 100:7, mixed evenly with loess to prepare the lime-treated soil samples with water content of 16.7% and dry density of 1.39 g/cm 3 . It was pressed into the soil extractor for sample preparation, drying, curing, coarse grinding, fine grinding, polishing until the surface of the soil sample showed flatness and smoothness, which can be observed by electron microscope scanning, and then it was coated with gold sprayer for further use.
Sample Image Processing
The sample was scanned with a JSM-6390A scanning electron microscope at a magnification of 500 times. After scanning, an energy spectrum base map and a surface scanning image of seven elements Si, Al, Ca, K, Fe, Mg, and Na were obtained; 320 images (including 40 base map and 280 EDS photo for 7 different elements) were obtained for each sample ( Figure 1). Corresponding colors fell respectively into seven different elements, and then these seven different colors were added to the base map according to the RGB values [20], as shown in Figure 1. In order to ensure complete and seamless splicing, the images were spliced in the form of an S-shaped route with a horizontal overlapping width of 1/4 and a vertical overlapping length of 1/3. The multi-scale microscopic image of mineral composition distribution in the two soils was obtained and transformed from micron to millimeter scale, as is shown in Figure 2. After the microscopic image was processed, the chromatic value (a1, b1) of unknown minerals was obtained by averaging. According to the formula ∆E = ((∆a)2 + (∆b)2)1/2, the difference between the chromaticity value of unknown minerals and the standard chromaticity value of minerals was calculated. ∆a = a − a1, ∆b = b − b1, where (a,b) is the standard color of mineral A. If ∆E < 1, the mineral was identified as A. The identification results of mineral image recognition were compared with those of energy spectrum to verify the accuracy of the method. The validation schematic is shown in Figure 3. All the minerals in the schematic have energy spectrum data, and the mineral name can be identified. Based on this method, quartz, feldspar, and carbonate mineral particles in untreated loess and lime-treated loess were identified respectively, and the particle size parameters of each mineral particle in the two soils were extracted.
Distribution Characteristics of Mineral Particle Size
It can be observed from Figure 2 that the skeleton particles of undisturbed loess are obvious and the particles support each other to form a macroporous support. The skeleton particles were clearly distinguished from the pores, and the particle contour was clear. While the skeleton particles of lime-treated loess decreased, the fine particles were connected with each other, and many small particles were closely combined with clay minerals to form new aggregates with large particle size. These new aggregates were composed of massive, rounded small quartz, carbonate, and clay minerals, etc., with particle sizes ranging between 5 and 20 µm.
The particle size parameters of each mineral particle in natural loess and lime-treated loess were extracted, and the particle size interval I = [0.02,2000] was selected according to the logarithmic equidistant method. The distribution of different mineral particle sizes was calculated, and the spatial distribution characteristics were analyzed and compared. The particle size distribution curves of various minerals in the two soils are shown in Figure 4. (a) Particle size distribution curve of quartz minerals; (b) particle size distribution curve of carbonate minerals; (c) particle size distribution curve of feldspar minerals; (d) particle size distribution curve of total minerals. It can be seen from Figure 4a that in the range of particle size 1~10 µm, the quartz content in lime-treated loess was about 10%, and that in undisturbed loess was about 2%. In the range of 10~100 µm, the content of quartz particles in undisturbed loess was 27% at the particle size of 100 µm, whereas the corresponding particle size of quartz particles in lime-treated loess was about 40 µm and the content was about 23%. In Figure 4b, there are two peaks in the particle size distribution curve of carbonate minerals in undisturbed loess, the corresponding particle sizes are 2 µm and 70 µm, and the contents are about 7% and 22%, respectively. There were three peaks in the distribution curve of carbonate minerals in lime-treated loess, and the corresponding particle sizes were 2 µm, 27 µm, and 50 µm, respectively. The percentages of particle content were 13%, 12%, and 15%, respectively. From the particle size distribution curve, the carbonate mineral particle size distribution of lime-treated loess was not uniform. Figure 4c shows that the main particle size of feldspar minerals in undisturbed loess was 60 µm and the content was about 26%. There are two peaks of feldspar minerals in lime-treated loess, the corresponding particle sizes were 2 µm and 70 µm, and the contents were 2% and 16%, respectively. Compared with undisturbed loess, the particle size of feldspar minerals in lime-treated loess decreased and the particle size distribution was non-uniform. From the whole mineral particle size distribution curves, compared with undisturbed loess, the particle size of all mineral particles in lime-treated loess decreased. This is consistent with the decrease of mineral particles in lime-treated loess observed directly above.
Single Multifractal Calculation
Soil is a complex porous medium with fractal characteristics, and the fractal can be defined by the relationship between particle size and number of soil particles [21].
Assuming that the total number of soil mineral particles is N T and d min is the minimum particle size of mineral particles, it can be obtained from Equation (1).
The slope k of the straight line is obtained through linear fitter with log(d i /d min ) and log(N(δ > d i )) as abscissa and ordinate respectively. If these points satisfy the linear relationship, the fractal dimension of mineral particle size distribution D = k, thus showing the mineral particles in undisturbed loess and lime-treated loess of Xining Q 4 have single fractal features. [14][15][16] is obtained by logarithmic transformation based on the particle size interval I = [0.02, 2000]. In the interval J, there are N(ε) = 2 k subintervals with size ε = 5 × 2 −k , and k is 1 to 6. Construct a family of partition functions using p i (ε) is shown as Equation (3) [22].
Multifractal Calculation
where, u i (q, ε) is the q-order probability of the i subinterval, q is a real number, is the sum of q-order probabilities of all subintervals. Then the multifractal generalized dimension spectrum D(q) is calculated as Equation (4) [20].
The multifractal spectrum function f (α) can be calculated as Equation (6): In the range of −10 ≤ q ≤ 10, fitting with 1 as step length, the generalized dimension spectrum (D(q)), singularity index (α(q)), and multifractal spectrum function (( f (q)) of mineral particle size distribution in undisturbed loess and lime-treated loess are calculated using Equation (3) to (6).
Mineral Particle Size Comparison
It can be speculated that in the process of crushing and compaction, large particles are crushed to many small particles and a series of chemical and physicochemical reactions occur between lime and soil to give rise to cements, such as calcium carbonate and crystal calcium hydroxide, etc. The aggregation of clay colloid particles makes the soil structure dense. From the whole mineral particle size distribution curves, compared with undisturbed loess, the particle size of all mineral particles in lime-treated loess decreases. This is consistent with the decrease of mineral particles in lime-treated loess observed directly above. In terms of the non-uniformity of mineral particle distribution, the carbonate mineral particle distribution in lime-treated loess is the most uneven. This is the result of a series of chemical and physicochemical reactions in the soil after lime is added, which gives rise to cements, such as calcium carbonate and crystal calcium hydroxide.
Single Fractal of Mineral Particle Distribution
According to Equation (2) and the PSD curves of each mineral particle in natural loess and lime-treated loess, log(d i /d min ) and log(N(δ > d i )) curves can be plotted, and the fractal dimension of particle size distribution of each mineral particle can be obtained, as shown in Figure 5.
For different properties of soil, the fractal dimension of particle size distribution reflects the particle size and distribution uniformity. The larger the fractal dimension is, the smaller the particle size of soil mineral particles is, the higher the fine particle content is, and the more uneven the texture is. The fractal dimension in equation 2 reflects the characteristics of particle size distribution of soil mineral particles, and has a clear physical meaning. It shows that when D = 0, the soil is completely composed of mineral particles with equal particle size. Most fractal dimensions of soil mineral particles determined by particle size distribution are between 1.0 and 3.0, and some are greater than 3.0 [11]. Since the mineral particle data in this paper were obtained according to the parameters such as particle area and equivalent diameter, etc., the fractal dimension ranged between 1 and 2.
It can be seen from Figure 5 that the fractal dimensions of quartz, carbonate, and feldspar mineral particles in undisturbed loess were 0.952, 0.659, and 0.797, respectively; the fractal dimensions of quartz, carbonate and feldspar mineral particles in lime-treated loess were 0.955, 0.896, and 1.095, respectively. Most of the fractal dimensions ranged from 0 to 1.0, which does not conform to the range of particle size fractal dimension, and the fitting accuracy is in the range of 0.359~0.681, which is not satisfying. It shows that the distribution of mineral particles in the two soils does not accord with the characteristics of single fractal. It can be seen from the particle size distribution curve of mineral particles in Figure 4 that the smaller the curve's amplitude of variation was, the greater the non-uniformity of mineral particle distribution was, which indicates the mineral particle content in each particle size tended to be consistent. The mineral particle content of different particle size in the two soils was unevenly distributed, whereas the particle size distribution of carbonate minerals showed great non-uniformity, which also indicates that single fractal can only describe the overall characteristics of particle distribution rather than the local characteristics of soil structure. Therefore, it is possible to analyze the distribution of mineral particle size by multifractal theory, which can reflect the local heterogeneity and non-uniformity of the distribution of mineral particles in more detail.
Generalized Dimension Spectrum Curve q − D(q)
According to the multifractal theory, when D(0) is larger, the range of mineral particle size distribution is wider; when D(1) is larger, the distribution range of soil mineral particles is wider, and the percentage of mineral particle content in each region is evenly distributed at various scales. The value of D(1)/D(0) can reflect the dispersion degree of particle size distribution. If D(0) = D(1) = D(2), the distribution of soil mineral particles has a single fractal structure. The values of D(0), D(1), D(1)/D(0) of mineral particles in undisturbed loess and lime-treated loess are shown in Table 1.
As is shown in Table 1, D(0) > D(1) > D(2) applies in all mineral particles-quartz, feldspar and carbonate in untreated loess as well as in lime-treated loess, indicating that the particle size distribution of the three minerals in the two soil samples is non-uniform fractal, which also shows that it is necessary and reasonable to analyze the PSD of each mineral in undisturbed loess as well as lime-treated loess by the multifractal method. On the basis of the multifractal analysis of three kinds of mineral particles in undisturbed loess and lime-treated loess, the generalized dimension spectrum curve q − D(q) of PSD of mineral particles is obtained in the range of −10 ≤ q ≤ 10, as shown in Figure 6. For non-uniform fractal, q − D(q) had a certain width, and the greater the curvature was, the worse the soil uniformity was [24]. PSDs of the three mineral particles had a certain degree of curvature and showed a certain degree of non-uniformity, and the carbonate mineral particles in lime-treated loess were the most obvious. Figure 6 shows that with the increase of q, the D(q) of the three mineral particles in two soil samples decreased, and when q > 1, the decreasing trend slowed down, and the generalized fractal dimensions of different mineral particles approached 0.8; D(q) of carbonate minerals in undisturbed and lime-treated loess changed most obviously with the increase of q, showing stronger non-uniformity than PSD of other minerals; D(q) of quartz minerals in undisturbed loess changed the least with the increase of q, indicating that its PSD was relatively uniform; D(q) of quartz and feldspar mineral particles in lime-treated loess increased with q, which is larger than that of quartz and feldspar mineral particles in undisturbed loess, but far less than that of carbonate mineral particles, showing moderate particle size distribution characteristics of mineral particles.
Singular Spectrum Analysis
The multifractal singular spectrum of PSD of three different minerals in undisturbed loess and lime-treated loess is shown in Figure 7. The α − α(q) functions of PSD of the three mineral particles in the two soils are convex functions, indicating that different mineral particles showed non-uniformity. Symmetry ∆ f = f (α min ) − f (α max ) reflects the shape characteristics of multifractal spectrum function. When ∆ f < 0, f (α) was in the form of a right hook; when ∆ f > 0, f (α) was in the form of a left hook [19]. From Figure 7, it can be seen that in undisturbed loess, the ∆f of quartz mineral particles equaled 0 with a uniformly symmetrical shape, whereas the carbonate mineral particles ∆ f > 0, f (α) showed a left hook, and feldspar mineral particles ∆ f < 0, f (α) showed a right hook. In the lime-treated loess, quarts and feldspar mineral particles ∆ f < 0, f (α) was in the form of a right hook, whereas carbonate mineral particles ∆ f > 0, f (α) showed a left hook. Except the uniform symmetry of the quartz particles α − α(q) in the undisturbed loess, the other two mineral particles α − α(q) and the quartz, carbonate, and feldspar mineral particles α − α(q) in the lime-treated loess were obviously asymmetric. The more obvious the asymmetry is, the greater the percentage content of the mineral particles changes with the particle size.
According to the multifractal theory, f (α) and D(q) are correlated, and the spectral height of the multifractal spectrum is the maximum value of f (α), namely, the fractal dimension D0 when q = 0. The spectral width (∆α = α max − α min ) can reflect the nonuniformity of probability measure distribution on the whole fractal structure [23].
Takele et al. [25] believed that when ∆α is 0, D(q) equals D 0 , remaining unchanged with the increase of q. The larger ∆α is, the more uniform PSD is. Its distribution characteristics can be described by multifractal rather than single fractal. Table 1 shows that the spectral widths ∆α of quartz, carbonate, and feldspar mineral particles in natural loess were 0.5311, 1.1175, and 0.6883, respectively. The spectral widths ∆α of quartz, carbonate, and feldspar mineral particles in lime-reinforced loess were 0.9289, 1.1183, and 0.7026, respectively. It indicates that the non-uniform distribution of carbonate minerals was more obvious than that of quartz and feldspar minerals in these two soils. The greater ∆α is, the greater the non-uniform degree of particles is. Therefore, the non-uniformity of different mineral particles in two soils can be obtained as follows: carbonate minerals in lime-treated loess > carbonate minerals in undisturbed loess > quartz minerals in lime-treated loess > feldspar minerals in lime-treated loess > feldspar minerals in undisturbed loess > quartz minerals in undisturbed loess.
Conclusions
Through image scanning, processing, and mineral composition identification, the obtained multi-scale microstructure images were more representative and could better present the characteristics of macroscopic soil samples. In this work, fractal theory was used to quantitatively study the distribution of mineral particles in undisturbed loess and lime-treated loess. The following main conclusions may be drawn: 1. The skeleton particles of undisturbed loess were obvious and the soil structure was loose, whereas the skeleton particles of lime-treated loess decreased, fine particles were connected with each other, and the soil structure changed from loose to dense. The particle size of each mineral particle in lime-treated loess decreased, and the distribution of carbonate mineral particles was the most non-uniform. 2. Mineral particles in undisturbed loess and lime-treated loess did not conform to the single fractal distribution characteristics, whereas the overall particle distribution had fractal characteristics. 3. The α − α(q) distribution of mineral particles in lime-treated loess was obviously asymmetric, indicating that the percentage of mineral particles in the soil varies greatly with particle size. 4. The non-uniform degree of mineral particles in the two soils was as follows: carbonate minerals in lime-treated loess > carbonate minerals in undisturbed loess > quartz minerals in lime-treated loess > feldspar minerals in lime-treated loess > feldspar minerals in undisturbed loess > quartz minerals in undisturbed loess.
It can be seen that the non-uniformity of mineral particles in lime-treated loess was relatively more obvious than that in undisturbed loess, which is consistent with the observation of multi-scale microscopic images and the results of mineral particle size distribution curves. The analysis in this work can better provide a basis for the future study of the different soil mechanical properties of undisturbed loess and lime-treated loess, and also provide a method for the study of the mechanical properties of different types of soil. | 5,447.4 | 2021-11-01T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
Derivation of the Adjoint Drift Flux Equations for Multiphase Flow
: The continuous adjoint approach is a technique for calculating the sensitivity of a flow to changes in input parameters, most commonly changes of geometry. Here we present for the first time the mathematical derivation of the adjoint system for multiphase flow modeled by the commonly used drift flux equations, together with the adjoint boundary conditions necessary to solve a generic multiphase flow problem. The objective function is defined for such a system, and specific examples derived for commonly used settling velocity formulations such as the Takacs and Dahl models. We also discuss the use of these equations for a complete optimisation process. scenarios and geometries, but this work will remain private and unpublished.
Introduction
The adjoint method is currently attracting significant interest as an optimization process in CFD. The objective of the adjoint approach is to calculate the sensitivity of the flow solution with respect to changes in the input parameters, most commonly changes in the geometry. This can then in principle be used as the basis for an iterative optimization algorithm based on gradient information (the sensitivities) which can optimize the design with many fewer function evaluations than would be the case for non-gradient-based approaches (such as genetic algorithms). Calculating the sensitivities requires differentiating the governing equations with respect to the changes of the input parameters, and since the governing equations for fluid flow are the Navier-Stokes equations (or equations derived from these such as the Reynolds Averaged Navier Stokes equations), this is understandably very challenging. There are two main approaches; the discrete adjoint approach, and the continuous adjoint approach. In the discrete adjoint approach, the sensitivity matrix is calculated numerically by evaluating the system for small changes in the inputs and applying standard finite difference methods. In the continuous adjoint approach, the sensitivities are calculated mathematically using lagrange multipliers. This is more elegant and provides an implementation which is easier to code, requires fewer evaluations and can be made numerically consistent with the evaluation of the original equation set. However it does require significant mathematical analysis in advance, and if the problem formulation changes (different equations, boundary conditions etc) this has to be repeated. Examples of the application of the continuous adjoint method for single phase flow can be found in a range of areas [1,2] such as automotive [3][4][5], aerospace [6,7] and turbomachinery [8][9][10], and implementations of the equations can be found in general purpose CFD codes such as STAR-CCM, ANSYS Fluent [11] and Engys Helyx [4]. However the equations are complex to develop and application to multiphase systems is only just starting [12]. In many cases, even just the evaluation of the sensitivities is valuable, as they can be used to indicate possible changes to the design engineers. Beyond this the sensitivities can also be used as the basis for an optimization loop [2]. This of course necessitates the morphing of the geometry through techniques such as volumetric B-splines [13] or Radial Basis Functions [14] and consequent updating of the mesh [15].
Multiphase flow is the simultaneous flow of two or more immiscible phases in a system. In dispersed multiphase systems, one or more of the phases exists as fluid particles small enough not to be resolved in the simulation; examples include gas bubbles in water, emulsions (liquid droplets in another immiscible liquid) and actual solid particles in gas or liquid. A wide variety of different mathematical models have been derived over the years to describe dispersed multiphase flow, including mixture models, lagrangian particle tracking, and eulerian n-fluid models [16,17]. Which is used depends on the exact physics of the problem, as well as factors such as available computing resources and desired accuracy. In many physical systems, the density ratio between the two phases is low, generally less than 2:1, and the drag force between them is high. Therefore, to a good approximation, the two phases can be considered to respond to pressure gradients as a single phase. Additionally, the slip (drift) between the phases is primarily due to the gravitational settling of the dispersed phase. This might adequately describe solid particles in water or an emulsion of immiscible liquids, and in these cases a commonly used mathematical model is the drift flux model. Hence it is this set of equations we have decided to focus on.
In the drift flux model, the two phases are treated as one: the momentum and continuity equations for both phases are summed to create a mixture-momentum and mixture-continuity equation, and the transport of the dispersed phase is modelled using a drift equation. The three equations, collectively called the drift flux equations, are listed below: where: • α is the dispersed-phase volume fraction, • ρ c is the continuum density, µ m is the mixture viscosity, defined as the sum of the continuum, dispersed-phase and mixture turbulent viscosities, g is the acceleration due to gravity, • F is the capillary force and • K is the turbulent diffusion coefficient, defined as the mixture eddy diffusivity, ν t m = µ t m ρ m . In summing the momentum equations, not only have the number of equations been reduced from four to three, but the inter-phase momentum transfer terms have also been eliminated which were numerically unstable [18]. Hence, a far more robust equation set has been produced and the computational resources required to solve the system have been reduced. This also makes it a very appropriate basis from which to develop an adjoint formulation suitable for applying to dispersed multiphase flows in this regime. This is the challenge of the current paper. We focus in particular on wall-bounded or ducted flows, in which there is no contribution to the objective function from the interior of the domain, in other words, the performance of the system is entirely governed by the boundary properties.
The paper is organized as follows. The optimization problem is stated in Section 2 and the adjoint equations for the drift flux model are derived for the general case in Section 2.1. These equations are then applied to the specific case of ducted or wall-bounded flows in Section 3, with the objective function for this case being specified in Section 4, and different settling velocities in Section 5. Finally, the conclusions follow in Section 6.
The Optimization Problem
If the performance of a device is measured by an objective function, J, and the residuals of the primal (flow) equations are given by R, the optimisation problem can be stated as, where x are the design parameters and y are the primal variables [19]. It can then be formulated as, where L is the Lagrange function, λ are the Lagrange multipliers (also referred to as the adjoint variables) and Ω is the flow domain. In this case, the primal equations are the steady state drift flux equations, with the capillary force taken to be zero [18] and a Darcy term included in the mixture-momentum equation. They are rearranged in terms of their residuals, R = (R 1 , R 2 , R 3 , R 4 , R 5 ) T , as follows: where ℵ is the porosity, associated with the Darcy term. The variation of the Lagrange function with respect to the primal variables, (v m , p m , α), and the design parameter, ℵ, is, where, for example, δ α L = L (α + δα) − L (α). We choose the adjoint variables, (u, q, β) = (u 1 , u 2 , u 3 , q, β), so that the variation with respect to the primal variables vanishes, i.e.
and the Lagrange function now varies only with respect to the design parameter,
Derivation of the Adjoint Drift Flux Equations
The adjoint drift flux equations are derived by substituting Equation (3) into Equation (6), giving, which can be expanded to, The variation of R with respect to the primal variables can be determined as: Derivation of Equations (10a), (10g) and (10i) can be found in Appendies A-C, respectively, where the variation of µ t m has been neglected. This is correct only for laminar flow regimes. For turbulent flows, neglecting this variation constitutes a common approximation, known as frozen turbulence [19]. This may introduce errors into the optimisation [20], although there are cases in the literature where the frozen turbulence assumption can be demonstrated to be acceptable [21].
With these variations, Equation (9) now reads, Decomposing the objective function into contributions from the boundary, Γ, and interior, Ω, of the domain, Equation (11) can be reformulated as, Derivation of Equation (13) can be found in Appendix D. In order to satisfy Equation (13) in general, the integrals must vanish individually. The adjoint drift flux equations are deduced from the integrals over the interior of the domain: where: and the boundary conditions for the adjoint variables are deduced from the surface integrals: where: and u n = u · n is the normal component of the adjoint velocity. This is the general form of the adjoint equation system for the steady state drift flux equations with Darcy porosity term and frozen turbulence.
Application to Wall Bounded Flows
Thus far in the paper we have presented the optimisation problem in as generic a way as possible. To proceed further with the derivation we now need to derive expressions for the boundary conditions, objective function and slip velocity. We will examine these for the case of wall-bounded or ducted flows, for which there is no contribution to the objective function from the interior of the domain. So, in the cases where the objective function only involves integrals over the surface of the flow domain rather than over its interior, the adjoint equations reduce to: These equations no longer depend on the objective function, so when switching from one optimisation objective to another, they remain unchanged and only the boundary conditions have to be adapted to the specific objective function. Note that as a result of Equation (18b), ∇ · u = 0 [22] and, therefore, S 2 = 0.
For the adjoint boundary conditions, the terms in Equation (16a) involving µ m can be rewritten as, (Ref. [19]) and therefore the adjoint boundary conditions, Equation (16), reduce to: In order to determine the boundary conditions of the adjoint variables, the boundary conditions imposed on the primal variables are listed in Table 1. We will derive expressions for the three main boundary conditions.
Adjoint Boundary Conditions at the Inlet
At an inlet, the primal velocity and dispersed-phase volume fraction are usually fixed, so, δv m = 0 and δα = 0.
The first integrals in Equations (20a) and (20c) therefore go to zero and Equation (20) reduces to: When both fluids are incompressible, ∇ · v m = 0 [22], and as δv mt = 0 along the inlet, (n · ∇)δv m = (n · ∇)δv mt [19], where v mt is the tangential component of the mixture velocity. Hence, Equation (22) reduces to: where u t is the tangential component of the adjoint velocity, from which we deduce the boundary conditions for the adjoint variables at the inlet to be: Note that these derivations do not impose a condition for q. Since q enters the adjoint drift flux equations in a manner similar to the way p m enters the primal drift flux equations, the zero gradient boundary condition of p m at the inlet is applied to q as well,
Adjoint Boundary Conditions at the Wall
At a wall, typical primal conditions are zero velocity and zero gradient of the dispersed-phase volume fraction. Therefore, we have, v m = 0, δv m = 0 and (n · ∇)δα = 0. (26) The first integral in Equation (20a) and the second integral in Equation (20c) therefore go to zero and the terms in the first integral in Equation (20c), containing v m , go to zero. Equation (20) therefore reduces to: As at the inlet, the primal velocity does not diverge and δv mt = 0 along the wall, so Equation (27) reduces to: from which we deduce the boundary conditions for the adjoint variables at the wall to be: Equation (29c) is used to determine β and, as at the inlet, Equation (25) applies.
Adjoint Boundary Conditions at the Outlet
At an outlet, typical primal conditions are zero pressure and zero gradient of velocity and dispersed-phase volume fraction. Therefore, we have, δp m = 0, (n · ∇)δv m = 0 and (n · ∇)δα = 0.
The second integral in Equations (20a) and (20c) therefore goes to zero and, with δp m = 0, Equation (20b) is identically fulfilled. The remaining terms in Equation (20) are the first integrals in Equations (20a) and (20c), which can be made to go to zero by enforcing the integrands to vanish: Note that the term containing D(v m ) = 0, because (n · ∇)v m = 0. Decomposing Equation (31a) into its normal and tangential components yields: Equations (31b), (32a) and (32b) are used to determine β, q and u t , respectively. Since u n is prescribed at the inlet, the adjoint continuity equation, Equation (18b), is used to calculate u n at the outlet, Φ Γ . The boundary conditions for the adjoint variables at the outlet are summarised as: where ν m = µ m ρ m is the mixture kinematic viscosity. A summary of the boundary conditions for the adjoint variables is presented in Table 2.
Objective Function
The objective function is related to the dispersed-phase mass-flow rate at the boundaries of the domain, where αρ d is the dispersed-phase mass fraction and v d · n is the dispersed-phase velocity normal to the boundary. Since the phase fraction at the inlet is specified, the objective function is defined as the mass-flow rate of solid at the outlet, and Equation (12) becomes, where o refers to the outlet. The derivatives of the objective function, Equation (34), with respect to the primal variables are: Derivation of Equation (36c) can be found in Appendix E. Using these derivatives, the adjoint boundary conditions at an inlet reduces to: At a wall, there is no contribution from the objective function, so: Note that C 2 = 0 because u = 0. At an outlet, to satisfy the adjoint continuity equation, Equation (18b), u n = 0, so: Note that C 2 = 0 because u n = 0 and D(v m ) = 0 when (n · ∇)v m = 0. A summary of the adjoint boundary conditions, using the objective function defined in Equation (34), is presented in Table 3.
Settling Velocity
The equations thus far have been derived for the most general case in which the settling (drift) velocity has not been specified. Of course the settling velocity is key to the behaviour of the drift flux model, and incorporates much of the physics of the multiphase system. Here we will derive the appropriate additional equations for two common settling velocity models, vis. the Dahl [23] and Takacs [24] models.
Dahl Model
In this formulation v dj is modelled using, where v 0 is the maximum theoretical settling velocity and k is a settling parameter, and its partial derivative with respect to α is given by,
Takacs Model
In this formulation v dj is modelled using, where: • a is the hindered settling parameter, • a 1 is the flocculent settling parameter, • α r is the volume fraction of non-settleable solids at the inlet and • v 00 is the maximum practical settling velocity, and its partial derivative with respect to α is given by, For both models the partial derivative of αv dj with respect to α is given by,
Conclusions
In this paper we have derived, for the first time, the adjoint equations based on the drift flux model for dispersed multiphase flow. In addition to the adjoint drift flux equations themselves we have presented the adjoint boundary conditions for the common boundary conditions (Inlet, Outlet, Wall), as well as a treatment of the generic objective function, and specific formulations corresponding to the common settling velocity models proposed by Dahl [23] and Takacs [24]. From these elements a full adjoint set of equations can be derived for any specific ducted flow problem, and of course implemented in an appropriate numerical code. This of course also presents many, largely numerical and coding, challenges, and this will be the subject of subsequent papers.
Conflicts of Interest:
There is no conflict of interest for Hydro International Ltd. in the publication of the paper. Hydro International holds interests in the implementation and application of the methods discussed in the paper for specific scenarios and geometries, but this work will remain private and unpublished.
Appendix A. Derivation of Equation (10a)
The variation of (R 1 , R 2 , R 3 ) T with respect to v m is calculated as, As stated above, µ m is defined as the sum of the continuum, dispersed-phase and mixture turbulent viscosities, where µ c is constant, µ d is a function of v m and α, and µ t m is obtained from turbulence modelling. Equation (A1) can now be rewritten as, where δ v m µ t m has been neglected.
Substituting Equation (A6) into Equation (A11) and rewriting the parentheses as binomial expansions, As α 1 and ρ d ≈ 2ρ c =⇒ 1 − 1 1−α ρ d ρ c < 2 and ignoring terms containing squared and higher powers of α, As stated above, K is defined as the mixture eddy diffusivity, From Equation (A6), 1 ρ m can be written as, ignoring terms containing squared and higher powers of α. Substituting Equations (A14) and (A15) into the term containing K in Equation (A10), ignoring the term containing δα∇δα, because when substitued into Equation (9) becomes terms containing squared powers of δα. Equation (A10) now becomes, where δ α µ t m has been neglected.
Appendix D. Derivation of Equation (13)
Decomposing the objective function into contributions from the boundary and interior of the domain, according to Equation (12), the terms in Equation (11) can be written as follows. The variations of the objective function can be written as, and Applying the product rule, divergence theorem and continuity equation, and using the Einstein notation for clarity, the terms containing u, v m and ∇ can be written as, Applying the tensor-vector identity [25], the divergence theorem and a property of the colon product, demonstrated below, and Applying the product rule and divergence theorem, the remaining terms in Equation (11) can be written as, Applying the product rule, | 4,347 | 2020-03-11T00:00:00.000 | [
"Mathematics"
] |
Multicomponent One-Pot Synthesis of Substituted Hantzsch Thiazole Derivatives Under Solvent Free Conditions
Thiazole derivatives were prepared by one-pot procedure by the reaction of α-haloketones, thiourea and substituted o-hydroxybenzaldehyde under environmentally solvent free conditions.
Introduction
Thiazoles are important class of natural and synthetic compounds.Thiazole derivatives display a wide range of biological activities such as cardiotonic 1 , fungicidal 2 , sedative 3 , anasthetic 4 , bactericidal 5 and anti-inflammatory 6 .The synthesis of thiazole derivatives is important of their wide range of pharmaceutical and biological properties.The most straight forward procedure reported by Hantzsch in 1887 involves one-pot condensation of αhaloketones, and thiourea or thioamides in refluxing alcohol 7 .This is method, however long reaction time (24-25 h), harsh reaction conditions, the use of large quantity of volatile organic solvents generally gives low yields .Modified procedures have been reported by King et al 8a-b and others 9a-d involving the replacement of α-haloketones with ketones and halogen.Despite this modification the methods of King et al. and others are cumbersome and required long reaction time (24-25 h).Therefore, the development of more economical and environmentally friendly conversion processes is highly desirable.
With the increasing environmental concern and the regulatory constraints faced in the chemical and pharmaceutical industries, development of environmentally benign organic reactions has become a crucial and demanding research area in modern organic chemical research 10 .Therefore, more and more chemists synthetic endeavors are devoted towards 'green synthesis' which means the reagent, solvent and catalyst are environmentally friendly.In modern organic chemical research, Wender defined the 'ideal synthesis' as one in which the target components are produced in one-step, in quantitative yield from readily available and inexpensive starting materials in resource effective and environmentally acceptable process 11 .Multicomponent condensations represent a possible instrument to perform near ideal synthesis because they possess one of the aforementioned qualities, namely the possibility of building up complexing molecules with maximum simplicity and brevity.
In recent years, to minimize the amount of harmful organic solvents used in chemical processes, much attention has been devoted to the use of alternative reaction media 12a .Besides the use of supercritical fluids, water and ionic liquids, the possibility of performing chemical processes in the absence of solvent (solvent free) conditions has been receiving more attention 2b-13 .The reported examples demonstrate that no solvent reactions are generally faster, gives higher selectively and yields, and usually require easier work up procedures and simpler equipments [13][14] .
Experimental
Melting points were determined by in open capillary method and are uncorrected.The chemicals and solvents used for laboratory grade and were purified.IR spectra were recorded (in KBr pallets) on SCHIMADZU spectrophotometer. 1 H NMR spectra were recorded (in DMSO-d 6 ) on AVANCE-300 MHz spectrometer using TMS as an internal standard.The mass were recorded on EI-SHIMDZU-GC-MS spectrometer.Elemental analyses were performed on a Perkins-Elmer CHN elemental analyzer.
Method B
A mixture of 2'-hydroxy-5'chloro-α-haloketones (1) (1 m mole) and thiourea (2) (1 mmol) wetted with 2-4 drops of ethanol, followed by o-hydroxybenzaldehyde (3) (1 mmol) was grinded by pestle in mortar at room temperature for the period as shown in Table 1.The progress of reaction was monitored by TLC.After completion of reaction solid product thus obtained was poured on crushed ice (20 g).The separated product was filtered washed with ice cold water and recrystalized from 5% aqueous acetic acid (6 mL) to afford to the pure product (4).
Results and Discussion
In view of the recent emphasis aimed at developing new selective and environmentally friendly methodologies for the preparation of fine chemicals, herein we report expeditious synthesis of substituted 2-{[4-sub.phenyl)-thiazol-2-yl-imino]-methyl}-phenolby the onepot condensation of α-haloketones, thiourea and substituted o-hydroxybenzaldehyde under environmentally solvent-free conditions (Scheme 1).
Scheme 1 Initially, we attempted the one-pot condensation of 2'-hydroxy-5'-chloro-α-haloketone, thiourea and o-hydroxybenzaldehyde under solvent free condition.The reaction went to completion within 3 minutes and the corresponding product was obtained in 95% yield.Encouraged by the results obtained with o-hydroxybenzaldehyde, thiourea and 2'-hydroxy-5'-chloro-α-haloketone, we turned our attention to variety of ohydroxybenzaldehydes and 2'-hydroxy substituted-α-haloketones respectively.Interestingly, various o-hydroxybenzaldehydes and 2'-hydroxy-substituted-α-haloketone reacted smoothly with thiourea under solvent free conditions to give the corresponding thiazole derivatives in 85-95% yield.In all cases, the reaction proceeded efficiently in high yield at room temperature under solvent-less conditions.All the products were characterized by IR, 1 H NMR and MASS spectral analysis.
When attempts were made to carry out the synthesis of thiazole derivatives by classical method in ethanol under reflux temperature, the yield of the products are poor (60-70%).In general, reactions under solvent-free condition are clean, rapid and afforded higher yield than those obtained by conventional method.The results of the reactions under solvent-free conditions are compared with reflux condition and short reaction times were observed, which is more economy in terms of times, enhanced reaction rates, improved yields and high selectivity are the features obtained in solvent free conditions.The scope and generality of this process is illustrated with respect to various o-hydroxybezaldehyde and 2'-hydroxy substituted-α-haloketones and the result are presented in Table 1.
Conclusion
In summary, we have developed a simple, convenient and effective method for easy synthesis of thiazole derivatives by the condensation of o-hydroxybenzaldehyde, 2'-hydroxy substituted -α-haloketones and thiourea under solvent free conditions.Present methodology offers very attractive features such as reduced reaction times, higher yields and environmentally benign condition.The simple procedure combined with ease of work-up and entirely solvent-free conditions make this method economic, benign and a waste-free chemical process for the synthesis of thiazole derivatives of biological importance.Thus, we believe that this green procedure will be worthwhile addition to the present methodologies.
To our knowledge, this is the first time report of an efficient general method for the synthesis of thiazole derivatives by using various 2'-hydroxy substituted-α-haloketones as one of the substrate.
Table 1 .
Synthesis of Hantzsch thiazole derivatives under solvent free conditions.
a Time in hours based on Method A, b Time in minutes Method B c Pure isolated yield of products. | 1,363.6 | 2009-01-01T00:00:00.000 | [
"Chemistry"
] |
N-Acetylaspartylglutamate Synthetase II Synthesizes N-Acetylaspartylglutamylglutamate
N-Acetylaspartylglutamate (NAAG) is found at high concentrations in the vertebrate nervous system. NAAG is an agonist at group II metabotropic glutamate receptors. In addition to its role as a neuropeptide, a number of functions have been proposed for NAAG, including a role as a non-excitotoxic transport form of glutamate and a molecular water pump. We recently identified a NAAG synthetase (now renamed NAAG synthetase I, NAAGS-I), encoded by the ribosomal modification protein rimK-like family member B (Rimklb) gene, as a member of the ATP-grasp protein family. We show here that a structurally related protein, encoded by the ribosomal modification protein rimK-like family member A (Rimkla) gene, is another NAAG synthetase (NAAGS-II), which in addition, synthesizes the N-acetylated tripeptide N-acetylaspartylglutamylglutamate (NAAG2). In contrast, NAAG2 synthetase activity was undetectable in cells expressing NAAGS-I. Furthermore, we demonstrate by mass spectrometry the presence of NAAG2 in murine brain tissue and sciatic nerves. The highest concentrations of both, NAAG2 and NAAG, were found in sciatic nerves, spinal cord, and the brain stem, in accordance with the expression level of NAAGS-II. To our knowledge the presence of NAAG2 in the vertebrate nervous system has not been described before. The physiological role of NAAG2, e.g. whether it acts as a neurotransmitter, remains to be determined.
N-Acetylaspartylglutamate (NAAG) 3 is an abundant peptide in the vertebrate nervous system, found at high micromolar to low millimolar concentrations (1)(2)(3). A number of studies demonstrated that NAAG acts as a specific agonist at the group II metabotropic mGluR3 glutamate receptors (4 -6). Agonistic and antagonistic effects of NAAG at N-methyl-D-asparatate receptors have been described (5,7,8), but could not be confirmed in later studies (9). Several reports indicate a neuro-protective role of NAAG (10 -12), and in line with this, inhibitors of the NAAG hydrolyzing glutamate carboxypeptidase (GCP)-II have a significant neuroprotective effect in different model systems (13). Increasing NAAG concentrations by GCP-II inhibition appear to reduce glutamate release through activation of presynaptic mGluR3 receptors (for review, see Ref. 13).
NAAG may also be involved in neuron-glia signaling (14), although its specific role is not fully understood. Theoretically, synthesis of NAAG could also be an efficient way to transfer large amounts of glutamate from neurons to the extracellular fluid, avoiding the excitotoxic effect of free glutamate (15). A possible role of NAAG as a molecular water pump has also been suggested (16).
NAAG is synthesized independently of ribosome from N-acetylaspartate (NAA) and glutamate by NAAG synthetases. Although neurons are the major source of NAAG, it is also present in cultured oligodendrocytes and activated microglia (17). In the mammalian nervous system, the highest NAAG levels have been found in the brain stem, spinal cord, and peripheral nerves (1, 18 -21). NAAG is released from synaptic terminals in a depolarization-and calcium-dependent manner, indicating its presence in synaptic vesicles (2). Extracellularly, NAAG is hydrolyzed by GCP-II or GCP-III to release NAA and glutamate (13). Glutamate is taken up by neurons or astrocytes, whereas the deliberated NAA is mainly taken up by oligodendrocytes, where NAA is hydrolyzed by aspartoacylase (23). The released acetyl groups may be used for lipid synthesis during myelin formation (23)(24)(25). NAAG may also be taken up by glial cells (26), e.g. via the proton-dependent high affinity oligopeptide transporter 2 (PEPT2) (27). The metabolic fate of NAAG taken up by glial cells, however, is not clear.
We and others recently reported that a member of the ATPgrasp protein family, encoded by the ribosomal modification protein rimK-like family member B (Rimklb) gene is a NAAG synthetase (28,29). We also identified a homologous gene, ribosomal modification protein rimK-like family member A (Rimkla), potentially encoding a protein with significant sequence similarity to the NAAG synthetase. Collard et al. (29) showed that Rimkla indeed encodes a NAAG synthetase. We show here, however, that this enzyme (NAAG synthetase II, NAAGS-II) not only synthesizes NAAG, but is capable of condensing a second glutamate residue to its first reaction product, thereby generating the tripeptide N-acetylaspartylglutamylglutamate (NAAG 2 ). Mass spectrometry confirmed the presence of NAAG 2 in the murine nervous system, where NAAG 2 concentration correlates with the Rimkla/NAAGS-II expression level.
To our knowledge, NAAG 2 has not been described previously. Its physiological role remains to be determined.
EXPERIMENTAL PROCEDURES
Synthesis of 14 C-Labeled NAA-14 C-Labeled NAA was synthesized using [ 14 C]aspartate (GE Healthcare) and acetic acid anhydride (Merck, Darmstadt, Germany) as described by Gehl et al. (30), with minor modifications. Briefly, 11.7 nmol of acetic acid anhydride was mixed with 2.7 nmol of [ 14 C]aspartate in a total volume of 500 l and the reaction mixture was shaken vigorously for 30 min. The synthesized NAA was purified from residual cations by cation exchange chromatography using AG-50W-X8 cation exchange columns (Bio-Rad).
Metabolic Labeling-Transiently transfected cells were metabolically labeled after transfection by adding [ 14 After washing 3 times with PBS, cells were scraped in 1 ml of ice-cold 90% methanol and centrifuged for 5 min at 10,000 ϫ g. The supernatant was dried in a SpeedVac concentrator, dissolved in water, and adjusted to pH 5-6 using sodium hydroxide. To remove cations from the peptide extract, the solution was passed through a cation exchange AG-50W-X8 resin column (Bio-Rad), and the eluate was dried in a SpeedVac concentrator. Dried extracts were dissolved in 20 l of 20% ethanol and applied loaded onto Silica Gel 60 HPTLC plates (Merck, Darmstadt, Germany). In some experiments, 5 mM NAA, NAAG, or NAAG 2 were added as internal standards. HPTLC plates were developed in one of the following solvent systems: (a) butanol/ acetic acid/water (12:3:5) or (b) chloroform/methanol/acetic acid (9:1:5). Radioactive signals were visualized using Bioimager screens (Fujifilm, Düsseldorf, Germany). The unlabeled peptide standards (NAAG and NAAG 2 ) were detected by UV scanning at 215 or 200 nm. The NAAG 2 peptide was obtained from PANATecs GmbH (Tübingen, Germany).
Peptide Transport Assay-Peptide transport activity of PEPT2 and NaDC3 was determined using HEK-293T cells transiently transfected with pEGFP-PEPT2 or pcDNA-NaDC3 (28) in 6-well plates. Control cells were transfected with pEGFP-C1 plasmid. Twenty hours after transfection, cells were incubated with different concentrations of 14 In some experiments, uptake of 14 C-labeled substrates was measured in the presence of 5 mM unlabeled NAAG 2 or NAAG. After washing 3 times with ice-cold PBS, cells were lysed in 1% SDS, and radioactivity was determined by liquid scintillation counting. Bound radioactivity in cells transfected with the pEGFP-C1 control plasmid was subtracted. Three independent experiments, each performed in duplicates, were done.
Acid Hydrolysis of Peptides-Aquaeous solutions of TLC-purified NAAG and NAAG 2 from metabolically labeled cells were incubated with 4 volumes of 6 M HCl for 2 h at 110°C. The samples were dried under vacuum and dissolved in 10 l of 20% ethanol. The hydrolyzed samples and 14 C-labeled glutamate and aspartate standards were applied onto Silica Gel 60 HPTLC plates and the chromatograms were developed in n-butanol/ acetic acid/water (8:2:2; v/v/v). Radioactive signals were visualized using Bioimager screens and quantified using AIDA software.
HPLC and ESI-MS Analysis-For HPLC/ESI-MS analysis and quantification of NAAG and NAAG 2 , HEK-293T cells were harvested 48 h after transfection. Cells were washed three times with PBS. Cell pellets from two 35-mm dishes were combined and resuspended in 300 l of ice-cold 90% methanol and sonicated. Protein precipitates were sedimented by centrifugation at 20,800 ϫ g for 15 min at 4°C. The protein pellet was used to measure the protein concentration using the bicinchoninic acid assay (Bio-Rad). The peptide extract was dried under vacuum and dissolved in 100 l of distilled water. The solution was centrifuged at 20,800 ϫ g (20 min) and the supernatant was subjected directly to HPLC analysis.
For quantification of NAAG and NAAG 2 in mouse tissues, 2-3-month-old female C57BL/6 wild-type mice were used. Frozen tissues were homogenized in 500 l of ice-cold 90% methanol with an ultraturrax tissue homogenizer (IKA-Werke, Staufen, Germany). Protein precipitates were sedimented by centrifugation for 20 min at 20,800 ϫ g and 4°C. The peptide extract was dried under vacuum and dissolved in 200 l of distilled water. To sediment the unsoluble material, the solution was centrifuged for 20 min at 20,800 ϫ g (at room temperature). The supernatant was subjected directly to HPLC analysis.
Samples were analyzed by a tandem LC/MS spectroscopy method. For HPLC (HPLC 1200 series, Agilent Technologies, Santa Clara, CA) a column (organic acid resin 250 ϫ 4-mm sphere image; CS-chromatographie service GmbH, Langerwehe, Germany) with organic acid resin, PS-DVB with sulfonic acid changer, was used. The mobile phase consisted of 0.05% formic acid, and absorbance was detected at 214 nm. For equilibration, the column was washed with the mobile phase for 1 h at a flow rate of 1 ml/min. The flow rate for all analysis was 0.5 ml/min. The acquired data were analyzed by integration of the NAAG peak areas in HPLC. The NAAG detection limit was 6 nmol/g of protein for HEK-293T cells and 9.4 nmol/g of tissue (wet weight) for mouse tissues.
ESI/MS spectra (ESI-MS HCTultra, Bruker Daltonics, Bremen, Germany) were recorded in negative ion mode with Auto/MS fragmentation. Eluate fractions from the HPLC were directly injected. Nitrogen was used as drying gas at 350°C. The collision gas was a mixture of helium with 3% argon. The capillary voltage was set to 4,000 V. Data of NAAG 2 were analyzed by the intensities of the NAAG 2 signals in the ESI main spectra compared with external standards measurement. The NAAG 2 detection limit was 0.5 nmol/g of protein for HEK-293T cells and 0.8 nmol/g of tissue (wet weight) for mouse tissues.
Northern Blotting-Total RNA was isolated from different tissues of 10-week-old male C57BL/6 mice and E12.5 embryos using TRIzol (Invitrogen), as described (33). RNAs (20 g/lane) were separated by agarose gel electrophoresis in the presence of 1 M formaldehyde and transferred onto Hybond Nϩ nylon membranes (Amersham Biosciences) using standard methods (34). Membranes were hybridized to the digoxigenin-labeled antisense NAAGS-II cRNA probe, followed by chemiluminescence detection, as described (33).
In Situ Hybridization-The plasmid pFLAG-NAAGS-II was linearized with BamHI or XbaI, and antisense and sense digoxigenin-labeled cRNA probes were synthesized with SP6-and FIGURE 2. Western blot analysis. HEK-293T cells were transiently transfected with pFLAG-NAAGS-II or the empty vector pcDNA3. Cells were homogenized and fractionated by differential centrifugation. Equal fractions of the 1,000 ϫ g pellet, 100,000 ϫ g pellet (Mem.), and the 100,000 ϫ g supernatant (Cyt.) were analyzed by Western blotting using anti-FLAG antibody. , not determined. C, HEK-293T cells coexpressing NAAGS-II and Nat8l were metabolically labeled with [ 14 C]glutamate and NAAG and the unidentified products (X) were purified from a methanolic peptide extract by preparative TLC. NAAG and X were subjected to acid hydrolysis in 6 M HCl at 110°C (ϩ) and left untreated (Ϫ). Reaction products were separated by TLC together with 14 C-labeled glutamate (Glu) and aspartate (Asp) standards. One representative experiment out of 4 independent experiments is shown. D, the ratio of [ 14 C]Glu to [ 14 C]Asp released by acid hydrolysis shown in C, demonstrated a significant higher Glu content in substance X compared with NAAG (mean Ϯ S.D.; n ϭ 4). E, NAAG and X were isolated as described in C, except that cells were labeled with [ 14 C]NAA. Both substances were treated with increasing amounts of CPY and reaction products were separated by TLC. Although NAAG was a poor substrate for CPY, yielding only small amounts of NAA, peptide X was more efficiently digested by CPY, releasing NAA, and NAAG as an intermediate product (note that substance X could not be completely purified from contaminating NAAG by TLC; the NAAG signal, however, increased during incubation with CPY). F, peptide extracts from NaDC3 and NAAGS-II coexpressing CHO-K1 cells metabolically labeled with [ 14 C]NAA (as shown in panel A) were separated by TLC, in the presence (ϩNAAG 2 ) or absence (ϪNAAG 2 ) of synthetic NAAG 2 , and the distribution of radioactivity was determined using a Bioimager. The position of the internal NAAG 2 standard was detected by UV scan at 200 nm. 14 C-Labeled substance X and synthetic NAAG2 comigrated. G, metabolic labeling of CHO-K1 cells expressing NAAGS-II or NAAGS-I (cotransfected with or without Nat8l expression plasmids or a control plasmid encoding the EGFP). These experiments showed that additional products (a and b) are synthesized by NAAGS-II and NAAGS-I in the presence and absence of NAA. These products were not detectable in cells transfected with a control and Nat8l expression plasmid.
Rimkla/NAAGS-II Expressing Cells Synthesize Two Different
NAA-containing Peptides-We have recently reported the identification of NAAGS-I, encoded by the Rimklb gene (28). Similar results were reported by Collard et al. (29), who also showed that the homologous gene Rimkla encodes a NAAG synthetase. The Rimkla gene product that we nameed NAAG synthetase II (NAAGS-II) is highly conserved between all mammalian species and is also found in Xenopus tropicalis (Fig. 1). Interestingly, in all other non-mammalian vertebrate genomes available in the databases only the orthologs of the NAAGS-I gene are present.
NAAGS-II was expressed as FLAG epitope-tagged protein in HEK-293T (Fig. 2) and CHO-K1 cells (data not shown). Western blot analysis showed expression of a 42-kDa protein (in good agreement with the predicted molecular mass) that was present in the cytosolic but not in the total membrane fraction of transfected cells (Fig. 2). Because hexahistidine-tagged Escherichia coli-expressed NAAGS-II was entirely insoluble in inclusion bodies, we could not detect in vitro NAAGS activity using bacterial expressed protein (data not shown). Therefore, we used transiently transfected cell lines to examine the activity of NAAGS-II. CHO-K1 cells were transiently cotransfected with NAAGS-II or NAAGS-I expression plasmids and the NaDC3 transporter, and metabolically labeled with [ 14 C]NAA. Chromatographic analysis of methanol extracts revealed the presence of a peptide comigrating with the authentic NAAG standard in NAAGS-II and NAAGS-I expressing cells (Fig. 3, A and B). However, whereas NAAG was the only detectable product of NAAGS-I, NAAGS-II expressing cells synthesized an additional product that was not detectable in NAAGS-I expressing or control cells (Fig. 3, A and B). Similar results were obtained when HEK-293T cells were transfected (data not shown).
The Second NAAGS-II Reaction Product Is N-Acetylaspartylglutamylglutamate-To identify the second NAAGS-II product ("X" in Fig. 3), cells were metabolically labeled with [ 14 C]NAA or [ 14 C]glutamate (or [ 14 C]protein hydrolysate; data not shown), and the two NAA-derived products (NAAG and X) were purified by preparative thin layer chromatography (TLC). The isolated products were subjected to hydrolysis in 6 M HCl at 110°C and the reaction products were analyzed by TLC. Two products were generated in both cases, which comigrated with aspartate and glutamate standards (Fig. 3C). Notably, the glu-tamate band was more intense relative to aspartate in the hydrolysis products of the second peptide (X) when compared with the hydrolysis products of NAAG (Fig. 3D), suggesting that this molecule contains two glutamate residues attached to NAA. To confirm this, both peptides were treated with CPY, releasing the products NAA and glutamate from NAAG and NAA, and NAAG and glutamate from the second peptide (Fig. 3E). These results suggested that the second NAAGS-II product may be the tripeptide NAAG 2 . This peptide was therefore synthesized and TLC analysis showed that it comigrates with the second NAAGS-II product (Fig. 3F).
Additional products were detected in both, NAAGS-II and NAAGS-I expressing CHO-K1 cells, in the absence of Nat8l or NaDC3, after metabolic labeling with [ 14 C]glutamate (Fig. 3G) or [ 14 C]protein hydrolysate (data not shown). The major product found in NAAGS-I expressing cells, which was only weakly labeled in NAAGS-II expressing cells (a in Fig. 3G) might be -citryl-glutamate (BCG), which was recently identified as the second major product of NAAGS-I (and minor product of NAAGS-II) (28). The structure of the second product (b in Fig. 3G) is currently unknown. A weak band migrating close to this product was also detectable in cells coexpressing Nat8l and NAAGS-I (Fig. 3G, lane 2). Because this band migrates very close to NAAG 2 (Fig. 3G, lane 1), the presence of very low amounts of NAAG 2 in NAAGS-I expressing cells could not be excluded from these experiments. Mass spectrometry, however, failed to detect NAAG 2 in NAAGS-I and Nat8l coexpressing cells (data not shown).
ESI-MS Detection of NAAG 2 in NAAGS-II Expressing Cells-
To confirm the identity of the second NAAGS-II reaction prod- uct, NAAG 2 , we subjected peptide extracts from NAAGS-II-or NAAGS-I-transfected HEK-293T cells, coexpressing Nat8l, to HPLC/MS analysis, as described previously (28). These experiments were also done with CHO-K1 cells, which, however, because of a lower transfection efficiency, resulted in significantly lower signal intensities (data not shown). These experiments confirmed NAAG synthesis by NAAGS-II, as mass peaks at m/z ϭ 303 (Fig. 4A) that by tandem MS generated the Mass peaks of m/z ϭ 302.2 and 432.1, corresponding to NAAG and NAAG 2 , respectively, were detectable in sciatic nerves but not in the liver. B, the fragmentation pattern of the m/z ϭ 432.1 mass peak obtained from sciatic nerve peptide extracts was in line with the NAAG 2 structure (see fragmentation scheme in Fig. 4C), and comparable with the fragmentation pattern of synthetic NAAG 2 (see supplemental Fig. S2). expected fragment ions (see supplemental Fig. S1), could be detected. In addition, a mass peak at m/z ϭ 432 ([M-H] Ϫ , mass of NAAG 2 ) was present in cells expressing NAAGS-II together with Nat8l (Fig. 4A) or NaDC3 (in the latter, 10 mM NAA was added to the culture medium) (supplemental Fig. S4), but not in cells expressing NAAGS-II in the absence of NAA synthesis or uptake (data not shown). Moreover, the m/z ϭ 432 mass peak was undetectable in cells coexpressing NAAGS-I together with Nat8l (Fig. 4A). Fragmentation of the m/z ϭ 432 mass peak generated fragment ions (Fig. 4B) that were in line with the NAAG 2 structure (Fig. 4C) and comparable with the fragmentation pattern of synthetic NAAG 2 (see supplemental Fig. S2). Thus, NAAGS-II is a peptide synthetase capable of adding one or two glutamate residues to NAA. We did not find evidence for longer oligoglutamylated peptides in NAAGS-II expressing cells.
The concentration of NAAG and NAAG 2 in HEK-293T cells expressing either NAAGS-II alone or in combination with Nat8l or NaDC3 was determined by HPLC (NAAG) and ESI-MS (NAAG 2 ; using synthetic NAAG 2 as external standard), respectively (Fig. 5, A and B). When cells coexpressing the NaDC3 transporter were treated with 10 mM NAA, intracellular NAA concentrations were much higher compared with cells coexpressing NAAGS-II and Nat8l (data not shown; see Ref. 28). The higher intracellular NAA concentration strongly inhibited NAAG 2 synthesis, suggesting substrate competition (Fig. 5B).
The Tripeptide NAAG 2 Is Present in the Mammalian CNS and PNS-To our knowledge, NAAG 2 has not been described in any biological material. Methanol extracts of mouse tissues (different brain regions, sciatic nerves, and liver) were therefore examined by ESI-MS. NAAG levels were determined by HPLC and ESI-MS, and the identity of NAAG was confirmed by tandem MS (MS2), as described (28). Because of the low NAAG 2 concentration in tissue samples and the inefficient chromato-graphic separation of NAAG and NAAG 2 , quantification of NAAG 2 by HPLC was not possible (data not shown). Mass peaks at m/z ϭ 432 were observed in brain tissues and sciatic nerves, but not in liver (Fig. 6A). Tandem MS fragmentation (Fig. 6B) gave very similar fragmentation patterns as found for the NAAGS-II reaction product from transfected cells (Fig. 4) or synthetic NAAG 2 standard (supplemental Fig. S2). We used synthetic NAAG 2 as an external standard to estimate the concentration of NAAG 2 in nervous system tissues (see supplemental Fig. S3 for an example of a NAAG 2 standard curve). The highest NAAG 2 concentrations were found in sciatic nerves (40 nmol/g of tissue, wet weight) and the spinal cord (about 8 nmol/ g), whereas in the cortex, NAAG 2 concentrations (Ͻ2 nmol/g) were close to the detection limit (Fig. 7A). A similar rostralcaudal increase was evident for NAAG (Fig. 7B), in line with previous studies (6), although NAAG concentrations were usually 30 -50-fold higher, compared with NAAG 2 (Fig. 7B). Comparable NAAG concentrations were measured using both methods, HPLC and ESI-MS, thus the latter is a reliable method for quantification in tissue extracts. In agreement with data from Koller et al. (18), neither NAAG nor NAAG 2 were detectable in the liver, which is in line with the absence of NAAGS-II and Nat8l expression in this tissue (see below).
Expression of NAAGS-II in the Murine Nervous System-Northern blot analysis revealed strong expression of NAAGS-II in the murine nervous system with the highest expression levels in the brain stem and spinal cord (Fig. 8A). As very weak signals were apparent in Northern blot analysis (they are, however, not visible in the scanned film shown here), we used quantitative real time RT-PCR to examine expression levels of NAAGS-II in other tissues (Fig. 8B). In addition, we examined expression of the NAA synthase Nat8l (Fig. 8C). These experiments confirmed the Northern blot data and also showed high expression of Nat8l in the murine nervous system, but only low expression in other tissues. Nat8l was absent in liver and testis. Thus, the FIGURE 7. Quantification of NAAG and NAAG 2 in mouse tissues. Methanol extracts were prepared form the indicated brain regions/tissues. Note that the forebrain samples lacked part of the cortex of one hemisphere, as cortex samples were analyzed separately. NAAG 2 (A) and NAAG (B) concentrations were determined by ESI-MS (black bars) (see Fig. 6). In addition, NAAG concentrations were determined by HPLC in the same extracts (gray bars). Both methods gave similar results. Neither NAAG nor NAAG 2 were detectable in liver. The detection limit (indicated by a dotted line) for NAAG 2 was 0.8 nmol/g of tissue (wet weight). The detection limit for NAAG was 9.4 (HPLC method) and 0.9 (ESI-MS method) nmol/g of tissue (wet weight). Shown are the mean Ϯ S.D. (n ϭ 4) of four independent experiments. N.D., not determined.
absence of NAA with the simultaneous presence of the strong NAAGS-I expression in testis (28) is explained by the absence of NAA synthesis. However, low amounts of both, Nat8l and NAAGS-II and NAAGS-I (28), suggests NAA and NAAG synthesis also in different non-neuronal tissues. For example, relatively high expression levels of Nat8l, NAAGS-II, and NAAGS-I (28) were found in thymus, suggesting that this organ may also synthesize significant amounts of NAA, NAAG, and possibly NAAG 2 . To our knowledge, however, the presence of NAAG has not been examined in this tissue.
In contrast to NAAGS-I, which was found to be expressed in most brain regions (28), distribution of NAAGS-II expression was more restricted in the CNS (Fig. 9). In situ hybridization of brain paraffin-embedded sections with digoxigenin-labeled NAAGS-II antisense cRNA probes showed high expression levels of NAAGS-II in the spinal cord, brain stem, deep nuclei of the cerebellum, and different nuclei in the medulla, pons, and midbrain. NAAGS-II expression, however, was almost undetectable in the neocortex (Fig. 9A), which is in line with its low NAAG 2 content (see Fig. 7A). In the cerebellum, strong NAAGS-II expression was restricted to deep cerebellar nuclei (Fig. 9C), in contrast to NAAGS-I, which was found highly expressed in the Purkinje cell layer (28). Strong NAAGS-II expression was also found in the brain stem, and midbrain, i.e. in the nucleus ruber and substantia nigra (Fig. 9, E and F). This expression pattern is in accordance with the immunohistochemical distribution of NAAG in these areas using NAAGspecific antibodies (21).
In the spinal cord, NAAGS-II expression was found in the ventral and dorsal horns of gray matter in laminae III to X, but NAAGS-II was low or absent in laminae I and II (Fig. 9G). This distribution is in accordance with the distribution of NAAG in the spinal cord (1,8). NAAGS-II expression appeared to be higher in the ventral part of the spinal cord, which is in accordance with the higher NAAG concentration in ventral compared with dorsal spinal cord (19). NAAGS-II expression was mainly found in large diameter cells, suggesting its presence in motor neurons (Fig. 9H). Taken together, the NAAGS-II expression level in different brain regions and peripheral nerves are also in accordance with the different concentrations of NAAG 2 detected by ESI-MS (see Fig. 7). NAAG 2 Is a Substrate for GCP-II and Is Transported by PEPT2-To examine whether NAAG 2 is active in the same metabolic system as NAAG, we tested the hypothesis that (2), cerebellum (3), brain stem (4), forebrain (5), total brain (6), liver (7), spleen (8), kidney (9), adrenal gland (10), testis (11), lung (12), thymus (13), heart (14), skeletal muscle (15), eyes (16), and spinal cord (17). NAAG 2 is a substrate for GCP-II. Using 14 C-labeled NAAG and NAAG 2 , isolated from metabolically labeled HEK-293T cells, hydrolysis of the two substrates by GCP-II was measured. As shown in Fig. 10, A-C, the rate of NAAG 2 degradation by GCP-II was comparable with NAAG hydrolysis.
The transport of NAAG 2 by the PEPT2 peptide transporter was examined using HEK-293T cells transiently expressing PEPT2. As a control, HEK-293T cells expressing the NaDC3 transporter were used. These cells showed uptake of NAA, but did not transport significant amounts of NAAG 2 or NAAG (Fig. FIGURE 9. In situ hybridization of mouse brain sections. Paraffin-embedded sections of 10-week-old brains and spinal cord were hybridized to digoxigeninlabeled cRNA NAAGS-II antisense (NAAGS-II) or sense probes (sense control), as indicated. A and B, NAAGS-II expression was low in neocortex (Cx) and hippocampus (Hc), but prominent in different nuclei of the midbrain, e.g. red nucleus (NR) and substantia nigra (SNR). C and D, NAAGS-II expression was present in the deep nuclei of the cerebellum (DCN) but was hardly detectable in the cerebellar cortex. E and F, various nuclei in the pons and medulla showed high level expression of NAAGS-II. G-I, in the spinal cord, NAAGS-II expression was found in laminae III to X, with an apparently higher expression level in the ventral laminae. Only low to undetectable expression was observed in laminae I and II. 10D), as expected. Cells expressing PEPT2, however, showed an uptake of both, NAAG and NAAG 2 ( Fig. 10E and supplemental Fig. S5), although NAAG transport was more efficient. Furthermore, uptake of [ 14 C]NAAG 2 was efficiently inhibited by NAAG and vice versa (Fig. 10F). Taken together these results provide evidence that NAAG 2 and NAAG are metabolized in the same system.
DISCUSSION
Recently, we and others have identified the Rimklb (encoding NAAGS-I) and Rimkla (encoding NAAGS-II) genes as NAAG synthetases (28,29). Collard et al. (29) showed that both enzymes in addition synthesize BCG, although BCG synthetase activity of NAAGS-II was much lower compared with NAAGS-I. We show here that NAAG and BCG are not the only reaction products of NAAG synthetases. Beside BCG, at least one other product lacking NAA is synthesized by both enzymes in transiently transfected cells, as demonstrated by metabolic labeling of NAAGS-I and NAAGS-II expressing cells. This product could not be identified yet.
Furthermore, NAAGS-II synthesized an additional peptide, using NAA as substrate, which we could identify as NAAG 2 . This tripeptide was undetectable in HEK-293T or CHO-K1 cells coexpressing NAAGS-I and Nat8l. Using tandem mass spectrometry, we confirmed the presence of this tripeptide in the murine nervous system. The concentration of NAAG 2 was 1-2 orders of magnitude below that of NAAG, and can be estimated to be in the lower micromolar range in the CNS (and up to about 40 M in peripheral nerves). To our knowledge, the presence of NAAG 2 has not been described in the vertebrate nervous system before, which might be due to the lower concentration, compared with NAAG and other small peptides. Whether NAAG 2 is present in other tissues has not yet been examined.
The condensation of more than one glutamate residue is not a unique feature of NAAGS-II, but has been described for related prokaryotic members of the ATP-grasp protein family. For example, the E. coli RimK protein catalyzes the condensation of up to four glutamate residues to the carboxyl terminus of the ribosomal protein S6 (36).
The physiological role of NAAG 2 is unknown at present. Our data suggest that NAAG 2 is a metabolite in the same systems as NAAG, as it is transported by the peptide transporter PEPT2, and is also a substrate for GCP-II. Because of the known substrate specificity of GCP-II and the broad substrate specificity of PEPT2, this was not unexpected. However, it remains to be determined whether NAAG 2 , like NAAG, is present in and released from synaptic vesicles, or acts as an agonist at metabotropic glutamate receptors. FIGURE 10. Hydrolysis of NAAG 2 by GCP-II and PEPT2-dependent NAAG 2 uptake. A, 14 C-labeled NAAG and NAAG 2 were isolated by preparative TLC from metabolically labeled HEK-293T cells and 0.2 M 14 C-labeled NAAG and NAAG 2 were incubated with GCP-II for 0, 20, 120, and 240 min. Reaction products were separated by TLC. Note that [ 14 C]NAAG 2 used in these experiments was not completely separated from contaminating NAAG. B and C, quantification of the time dependence of NAAG and NAAG 2 hydrolysis at two different concentrations (data from two independent experiments were combined). D, HEK-293T cells were transiently transfected with plasmids encoding NaDC3 and incubated with 0.5 mM 14 C-labeled NAA, NAAG, or NAAG 2 for 30 min at 37°C in Locke's buffer (pH 7.4). After washing, radioactivity in cells was determined by liquid scintillation counting. E, HEK-293T cells were transiently transfected with plasmids encoding PEPT2 and incubated with 0.1 or 0.5 mM 14 C-labeled NAAG or NAAG 2 for 30 min at 37°C in MES/Tris buffer (pH 6.0). F, NAAG inhibits NAAG 2 uptake by PEPT2. PEPT2 expressing cells were incubated with 14 C-labeled NAAG 2 for 30 min at 37°C in MES/Tris buffer (pH 6.0) in the absence (contr.) or presence of 3 or 5 mM unlabeled NAAG. Furthermore, NAAG 2 inhibits NAAG uptake by PEPT2. PEPT2 expressing cells were incubated with 14 C-labeled NAAG for 30 min at 37°C in MES/Tris buffer (pH 6.0) in the absence (contr.) or presence of 4 mM unlabeled NAAG 2 . Note that the degree of purity of [ 14 C]NAAG 2 used for the NaDC3 and PEPT2 transport assays were much higher than that of [ 14 C]NAAG 2 used for the GCP-II assay shown in A. Examples of TLC analyses of peptide extracts from PEPT2 expressing HEK-293T cells after incubation with 14 Our experiments using cells coexpressing the NAA transporter NaDC3 and NAAGS-II indicate that in the presence of high concentrations of NAA (as expected for many neurons), the NAAG 2 synthesis rate may be several orders of magnitude below the NAAG synthesis rate. Although we did not determine the intracellular NAA concentrations in the transfected cells coexpressing NaDC3, NAA levels in neurons may be comparable or even higher (NAA concentration in the culture medium was 10 mM; 4 mM NAA in total brain suggests even higher intracellular NAA concentrations in neurons synthesizing NAA). Thus, in neurons expressing high levels of Nat8l, very low NAAG 2 synthesis rates are expected. In those cells, NAAG 2 may not be a physiological important product but may an "unintended" by-product of the NAAGS-II enzyme.
Although the concentration of NAAG 2 in different brain regions was in general 30 -50-fold lower compared with NAAG, the synthesis rate and local concentrations clearly will depend on various parameters, like expression of NAAGS-II, Nat8l, NAAGS-I, and others. In cells with high NAA synthase activity, NAAG 2 synthesis is expected to be low, because of the substrate competition. However, cells expressing high levels of NAAGS-II but only low amounts of NAA synthase Nat8l, or cells that coexpress NAAGS-II and the NAAG transporter PEPT2, may synthesize much larger amounts of NAAG 2 . Also the re-uptake and turnover of NAAG 2 compared with NAAG are not known. Thus, a locally restricted or cell type-specific higher intra-and/or extracellular NAAG 2 concentration is possibly.
The expression pattern observed for NAAGS-I and NAAGS-II differ considerably. NAAGS-I expression shows a more widely distribution pattern throughout the brain (28). In contrast, NAAGS-II expression appears to be more restricted. It was almost undetectable in the neocortex and the cerebellar cortex, but reached high expression levels in different nuclei of the midbrain and brain stem, as well as deep cerebellar nuclei and the spinal cord. Thus, NAAGS-II is expressed in those areas with high NAAG concentrations (1,18,20,35). NAAGS-I may be responsible for the overall basal NAAG synthesis, whereas NAAGS-II evolved to ensure high NAAG levels in the caudal brain regions and spinal cord. In addition, the more widespread distribution of NAAGS-I (28) compared with NAAGS-II may reflect its role as a BCG synthetase (29). BCG is an iron chelator and may thus inhibit iron-dependent generation of reactive oxygen species (37). This suggest a more general role of NAAGS-I in protection against oxidative stress, e.g. in hypoxic ischemia. BCG concentrations in the rat brain are highest during embryonic and early postnatal development, thereafter it decreases significantly (38). In contrast, NAA and NAAG exhibit a continuous increase during postnatal development (38). This suggests the highest expression of NAAGS-I during embryonic and early postnatal development, which has, however, not yet been examined. Whether the NAAG increase is due to Nat8l or NAAGS-II up-regulation also remains to be examined. NAAGS-II may have been evolved to ensure sufficient NAAG synthesis in certain brain areas and the spinal cord, without concomitant increase in BCG synthesis, which could potentially affect various metabolic pathways (Krebs cycle, fatty acid synthesis, and others).
Elevated NAAG concentrations have been found in the cerebrospinal fluid of Pelizaeus-Merzbacher disease patients (39,40), and the Pelizaeus-Merzbacher disease-like disease caused by mutations in the connexin 47 gene (41), sialic acid storage diseases (35), and a leukodystrophy with an unknown genetic cause (22). Slightly elevated NAAG levels in cerebrospinal fluid also occur in Canavan disease (39,40), which is caused by deficiency in the NAA degrading enzyme aspartoacylase (25). However, a number of unrelated leukodystrophies do not show changes in NAAG concentrations (35,39). The metabolic changes responsible for these disease-specific elevated NAAG concentrations are currently not known, but mutations in the GCP-II gene appear not to be responsible (22). The possibility of elevated NAAG 2 levels in the cerebrospinal fluid of the aforementioned diseases should be examined, as this may also give further insight into the metabolic changes responsible for elevated NAAG levels. | 8,048.6 | 2011-03-25T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Expression of The Transient Receptor Potential Channel 4 ( TRPC 4 ) Gene in Goats Naturally Exposed to Haemonchus contortus Infection
Expulsion of gastrointestinal nematodes (GIN) requires gut contractions and glycoprotein hyper-secretion for detachment from the gut wall. The Transient receptor potential cation channels (TRPC) facilitates contraction of smooth muscle. A mutation in the TRPC4 of mice significantly reduces contraction and motility of the intestine. Thus far, the correlation between TRPC4 and GIN infection has not been evaluated in goats or any other species. This study evaluated gene expression of TRPC4 in Haemonchus contortus exposed resistant goats. Goats that were naturally susceptible and resistant to Haemonchus contortus were sacrificed and intestinal tissues collected. From conserved regions of human, mouse, rat, and bovine TRPC4 gene alignments, oligonucleotide primers were generated using CLC Main Workbench bioinformatics software. The RT-PCR and quantitative real time pcr were performed using total RNA extracted from intestinal tissues. The expected 388bp cDNA product was amplified and sequenced. The goat TRPC4 showed 88 and 87% homology to rat and mouse and 98%, 92%, 91% and 90% homology to the bovine, horse, pig and human TRPC4 genes, respectively. The TRPC4 expression increased (P<0.05) in naturally susceptible goats. There were breed and gender effects (P<0.05) on TRPC4 expression. A strong (P<0.05) correlation was evident when the variables TRPC4 gene expression, clinical anemia, and parasite load were compared in goats. These data indicate that TRPC4 may aid in elucidation of the mechanism of action of the TRPC genes involved in gastrointestinal contraction and motility and their link to GIN infection.
Justification
The ability of the intestine to expel worms is dependent on many factors, one of which is intestinal contractility (Artis, 2006;Hasnain et al., 2011).The TRPC4 in particular has been linked to gastrointestinal contraction and motility (Kim, So, & Kim, 2006;Unno et al., 2006).Although the role of TRPC4 has been identified as crucial in intestinal contractility in mice, the TRPC4 gene has not been isolated in goats, nor evaluated in Haemonchus contortus infection.To date, the relationship between gene expression of TRPC4 and the response to GIN infection has not been assessed in goats or any other species.Moreover, the relationship between TRPC4 control over gastrointestinal motility and contractility and Haemonchus contortus infection in goats has not been investigated.Therefore the objectives of this study were to identify and characterize the TRPC4 gene of the goat and examine gene expression of TRPC4 in selected pasture exposed goats.
Experimental Animals and Haemonchus Contortus Detection
Animals used in this study were Spanish and Myotonic goats.These animals were housed at VSU Randolph farm in accordance with institutional animal care and use guidelines.More than 100 Goats were screened for parasite load and clinical anemia status via fecal egg counts (FEC), packed cell volume (PCV) and FAMACHA eye color charts (FAM), and divided into susceptible and resistant groups accordingly.Our previously published work describe these procedures in detail (Corley & Jarmon, 2012).The FEC, FAM and PCV data collected were analyzed using SAS version 9.1.3,(Cary,North Carolina).We determined that goats with > 2000 eggs per gram of feces (EPG) with PCV <18 were naturally susceptible and those goats with > 2000 FEC with PCV >18 were resistant.Animals were sacrificed and intestinal tissue samples collected and stored at -80 o C for nucleic analysis.Haemonchus contortus spp.was verified via nucleotide sequencing as previously published (M.Corley & A. Jarmon, 2012).Nucleotide sequences were analyzed using sequence analysis software (NCBI-BLAST (Altschul, Gish, Miller, Myers, & Lipman, 1990), CLC Main Workbench).
Goat Intestinal Tissue Collection and Preparation
Animals were sacrificed as previously published (Corley & Jarmon, 2012) in accordance with national humane euthanasia guidelines.Intestinal including jejunal tissues were collected in sterile PBS and RNAlater (Invitrogen, NY) and placed at -80 o C for further molecular analysis.Tissues were homogenized in sterile PBS, centrifuged at 10,000 g and supernatants collected for gene expression analysis.
Total RNA Extraction from Goat Intestinal Tissue
Total RNA was isolated from goat tissue samples previously stored at -80 o C using a modified RNA isolation procedure (Gauthier, Madison, & Michel, 1997).Total RNA extraction procedures were performed according to previously published methods (Corley & Jarmon, 2012).Concentration and purity of total RNA were measured using a Nanodrop ND-1000 spectrophotometer (Thermoscientific).The RNA was stored at -80 o C for later use in RT-PCR and qRT-PCR.
Reverse Transcriptase PCR (RT-PCR) of Goat TRPC4
Oligonucleotide primers were designed from mRNA of the bovine human, horse, pig, mouse and rat TRPC4 nucleotide sequences using the bioinformatics software, CLC Main Workbench (http://www.clcbio.com).Primers and target regions used for isolation of the goat TRPC4 gene are given in Table 1.The RT-PCR was conducted using the recommended protocol of the Verso 1-step RT-PCR kit (Thermo Scientific).Modified thermocyling conditions for 40 cycles were as follows: 50 o C 15 minutes, 95 o C, 2 minutes (initial denaturation), 95 o C, 30 secs, 55 o C, 1 minute, 72 o C, 1 minute repeated 39 times and a final extension at 72 o C for 5 minutes.The target TRPC4 cDNA was visualized by 1.5% agarose gel electrophoresis and a UGenius UV gel documentation system (SynGene, Fredericksburg, MD) equipped with a high resolution CCD camera.
Nucleotide Sequencing of Goat TRPC4 cDNA
For TRPC4 nucleotide sequencing, the cDNA (388bp) products were cut out and purified from agarose gels (Qiagen and Bio-Rad).The purity and concentration of gel purified cDNA was measured and prepared for nucleotide sequencing per commercial instructions.Samples were sent for sequencing at GeneWiz (South Plainfield, New Jersey).Raw nucleotide sequences were analyzed using sequence analysis software
Discuss
The TRPC4 is upregulated in more Haemonchus contortus susceptible goats.The TRPC4 was also upregulated in male goats compared to female goats.This response is logical, as it has been previously demonstrated that male goats are more susceptible to Haemonchus contortus infection than female goats (M.Corley & A. Jarmon, 2012).There was no significant difference in TRPC4 gene expression relative to age, but older goats (5-7 years old) tended to express more TRPC4 than younger goats (1-5 years old).This response could be explained by the fact that older goats have a weaker immune system than younger but a closer look would have to be made by widening the age gap in different groups of goats to see any significant effect in age groups for TRPC4 gene expression.Overall, there was a strong positive correlation between TRPC4 gene expression and FAMACHA eye color chart scores and FEC, and a strong negative correlation with PCV in goats.An increase in TRPC4 gene expression correlated with a high parasite load and anemia.On the other hand TRPC4 was down regulated in goats that were not experiencing anemia.This indicated that TRPC4 is upregulated in goats more susceptible to Haemonchus contortus infection.These data indicate thatTRPC4 gene expression correlates with susceptibility in goats pasture exposed to Haemonchus contortus.This would indicate that TRPC4 may be a potential biomarker for susceptibility rather than resistance to Haemonchus contortus infection in goats.
Conclusion
The results of this study indicated that the cross species oligonucleotide primers designed from the conserved regions of the TRPC4 genes can successfully be used to isolate a partial sequence of the goat TRPC4, from intestinal tissue samples, and used as a biomarker to evaluate gene expression in response to Haemonchus contortus infection in goats.Based on our findings, the TRPC4 gene could potentially be studied as a genetic marker for susceptibility to GIN infection, specifically Haemonchus contortus and could potentially be targeted for drug development in the treatment of GIN infection. | 1,716.4 | 2013-08-15T00:00:00.000 | [
"Medicine",
"Biology"
] |
IS-LM Stability Revisited : Samuelson was Right , Modigliani was Wrong
In Hicks’s IS-LM model, where it is assumed that production is determined in the goods market and the interest rate is determined in the money market, when the marginal propensity to spend is greater than one, the IS has a positive slope. Modigliani (1944), Varian (1977) and Sargent (1987) determined that in this special case the IS-LM model is stable when the LM slope is greater than the IS. Economía Vol. XXXVIII, N° 75, semestre enero-junio 2015, pp. 123-150 / ISSN 0254-4415 * Professor and researcher, Department of Economics, Pontificia Universidad Católica del Perú (PUCP), 1801 Universitaria Ave., Lima 32, Lima. Telephone: +511-6262000 (21300). E-mail adress: waldo<EMAIL_ADDRESS>I wish to thank the comments by Juan Antonio Morales and Oscar Dancourt and the two anonymous referees of the journal for their valuable comments. I am solely responsible for any remaining errors. 124 Economía Vol. XXXVIII, N° 75, 2015 / ISSN 0254-4415 In line with Samuelson (1941), this article shows that in this case the model is stable when the IS slope is greater than the LM slope. However, in this stable case the model does not have a useful economic meaning. One solution to this theoretical problem is to abandon the Keynesian adjustment mechanism and replace it with the Classical mechanism where the interest rate is determined in the goods market and production is determined in the money market. In this case, the IS-LM model is stable when the LM is steeper than the IS.
INTRODUCTION
In the traditional IS-LM model devised by Hicks (1937), when the marginal propensity to spend is greater than one, the Keynesian multiplier is negative and the IS has a positive slope.Hicks (1937) warned that in this special case, only under certain conditions can the IS-LM model be stable, and Keynes, in a letter published in Gilboy (1939), deemed this occurrence "completely unstable".
Later developments -starting with Modigliani (1944) and including articles in specialized journals, as well as sections on macroeconomics and mathematics in economics textbooks -agree that, in this atypical case, the IS-LM model is stable when the LM slopes more steeply than IS.
This article shows that: i) The case the literature pronounces as stable is in fact unstable, because the same stability conditions that apply to the standard case are extended to the special case.
The model is stable when the IS steeper than the LM.This result is consistent with the views expressed by Samuelson (1941).
ii) However, in this special stable case, the model's linear version provides results without useful economic significance.Firstly, the equilibrium values of production and the interest rate are negative.Secondly, since the Keynesian multiplier is negative, expanding demand results in shrinking output.
iii) One solution to this theoretical problem is to abandon the Keynesian adjustment mechanism and replace it with the Classical mechanism where the interest rate is determined in the goods market and production is determined in the money market.In this case the model is stable when the LM is steeper than the IS.
The following section reviews the literature on the special case.Section 3 discusses the stability conditions of the standard IS-LM model.Section 4 repeats the exercise for the special case and argues that the traditional treatment is incorrect.Section 5 proposes a complementary argument to show the instability of the case the literature deems stable, by simulating the dynamic effects of an expansive monetary policy.Section 6 shows that in the linear version of the stable model, the equilibrium values of the interest rate and output are negative, and that the model forecasts a reduction in output as a consequence of expanding demand.Section 7 shows that only in the case in which the Keynesian adjustment mechanism is replaced by the Classical mechanism is the stability condition postulated by Modigliani true.Finally, the conclusions are set out in Section 8.
BACKGROUND
In the IS-LM model, when the marginal propensity to spend (propensity to consume plus propensity to invest) is greater than one, the Keynesian multiplier is negative and the IS curve has a positive slope.
=
The slope of the IS curve, graphed on the (Y, i) space, is given by: Where Y x = ∂Y / ∂x is the generic form representing the partial derivative of Y with respect to X.
The denominator is undoubtedly positive, while the numerator can be positive, in the typical case where the propensity to spend is less than one (C Y + I Y < 1), or negative, when the marginal propensity to spend is greater than one (C Y + I Y > 1).
One of the innovations in Keynes's General Theory was the introduction of the concepts of marginal propensity to consume and the multiplier.In a letter published in Gilboy (1939), Keynes warned that if the marginal propensity to consume is greater than one, the model would be unstable: My theory itself does not require my so-called psychological law as a premise.What the theory shows is that if the psychological law is not fulfilled, then we have a condition of complete instability.If, when incomes increase, expenditure increases by more than the whole of the increase in income, there is no point of equilibrium.(Gilboy, 1939, p. 634).
All the subsequent literature dealing explicitly with this special case (negative Keynesian multiplier and positively sloping IS) -that first appeared in specialized journals and was then adopted by macroeconomics and mathematics for inclusion in economists textbooks -has established that for the model to be stable, the LM curve must be steeper than the IS curve.Modigliani (1944) was the first to address this special case in terms of the IS and LM slopes.For the case in which IS slopes positively, he states that: Stability is also possible when the IS curve rises in the neighborhood of the equilibrium points as long as it cuts the LL curve from its concave toward its convex side (Modigliani 1944, p. 64).
That is, for the model to be stable in the neighborhood of equilibrium, the LM slope must be greater than that of the IS.
This argument appears in Figure 1.The LM and IS intercept each other at points A and B. For Modigliani, the equilibrium point would be stable at B, where the IS cuts the LM from its concave to its convex side.At this point, the LM slope is greater than that of the IS.At point A, equilibrium would be unstable.Later, Hudson (1957) presented the IS-LM model's stability condition for the special case and also erroneously considered the unstable case to be stable.Since Hudson's IS is non-linear and since it shows a negatively sloping stretch and another with a positive slope, Hudson's IS-LM model is one of multiple equilibrium, like the one shown in Figure 2. In this model: Stability of equilibrium then requires that the IS schedule slopes upwards less steeply than the LM schedule.Consequently, B is a position of unstable equilibrium, while C and C' are stable.(Hudson, 1957, p. 382).
Figure 2
According to Hudson then, the atypical model is stable when the LM slope is greater than that of the IS.
Varian (1977) builds, as Hudson, a model with multiple equilibrium and arrives at the same result: when the IS slopes positively and there are multiple equilibria, the stable region is that where the LM is steeper than the IS.
Figure 3 -which reproduces Varian's Figure 3 (1977, p. 268) -shows, as Hudson, that the region is unstable (i.e.configures a saddle point) at the intersection points between the IS and LM, when the IS is steeper than the LM, as in point B of Figure 3.The intersection points are stable if the LM slope is greater than that of the IS (when the IS slopes negatively, as in point A, or when the IS slope is positive, but less than the LM, at point C), as shown by the direction arrows of the phase diagram.Subsequent publications in specialized journals dealing with this special case leave these authors' proclaimed stability conditions unchanged, as shown in works by Silver (1971), Chang and Smyth (1972), Puckett (1973), Burrows (1974), Cebula (1980), Wang (1980), Patinkin (1987) and Ros (2004).
When referring to the special case, Sargent states that: This condition is automatically satisfied when the LM curve is upward sloping and the IS curve is downward sloping.It can still be satisfied if the IS curve is upward sloping, provided that the LM curve is more steeply sloped.(Sargent, 1987, p. 59).
In short, all the literature -starting with Modigliani (1944) -states that in the case in which the IS has a positive slope, for the IS-LM to be stable, the LM curve must be steeper than the IS curve.
The following sections argue that the IS-LM model is stable in the special case where the IS slope is greater than the LM slope.
IS-LM STABILITY: THE STANDARD CASE
This section presents Hicks's model (1937), in its dynamic version2 .In the goods market, it is assumed that adjustment is by quantities and that output increases ( ) Y 0 0 > when there is an excess of demand in that market (EDB).In the money market, the adjustment variable is the interest rate, which rises (i > 0) when there is excess demand (EDM ). (1) Where ε and η are, respectively, the production adjustment velocities and the interest rate vis-à-vis excess demand in the goods market and the money market.
If Y represents output and Y d stands for the demand for goods, in a closed economy without government, the demand for goods comes from consumption and private investment.Consumption and investment are positive functions of income and negative functions of the interest rate.Consequently, demand for goods is given by: Excess demand in the goods market therefore equals: In the money market, if M s is the money supply and P is the price level, the real money supply equals: If m d is the real demand for money, which is directly related to output and inversely related to the interest rate: Consequently, the excess demand in a money market equals: Replacing equations ( 7) and (4) in equations ( 1) and ( 2), respectively, we obtain a differential equation system to discuss the stability conditions of the traditional IS-LM model.
Where ε[.] and η[.] are rising monotone functions which are differentiable and satisfy the ε To evaluate whether a model is stable or not, it is worth taking into account Gandolfo's warning (1996) about the appropriate formulation of adjustment dynamics in markets: The dynamic formalization of the Walrasian assumption is the following The notation sgn f [...] = sgn[...] means that f is a sign-preserving function, i.e., the dependent variable has the same sign as the independent variable (which in this case is excess demand): therefore, if excess demand is positive (negative) the time derivative of p is positive (negative), i.e. p is increasing (decreasing).(Gandolfo, 1996, p.172).
Taking the case shown by Gandolfo as an example, since ϕ 1 > 0, the influence of the investment-saving (I -S ) gap on production, which equals the effect of the excess demand, also must be positive.In our presentation, since ε > 0, the influence of the high excess demand on the product must be positive.
The system made up by equations ( 8) and ( 9) is non-linear and its study implies analytical difficulties, which we will avoid.To this end, we will discuss some valid properties in a local equilibrium context, through Taylor's expansion.
If Y e and i e represent stationary (local) equilibrium values of production and the interest rate in the IS-LM model, equations ( 8) and ( 9) can be presented in a matrix as follows: Or in its compact form: Where: The necessary and sufficient conditions for this system to be asymptotically stable, that is, a system where all the movements flow cyclically or non-cyclically towards a stationary equilibrium point, are that the Ω Jacobian matrix trace be negative and its determinant positive.
In this version of the IS-LM, the assumption that the propensity to spend is less than one (C Y + I Y < 1) ensures that the two stability conditions are met without restrictions.
Stability can also be discussed from the qualitative point of view, by relating the described conditions to the graphic representation of the IS-LM model.For this purpose, we use a phase diagram to represent the goods market and the money market in stationary equilibrium, that is, when output and the interest rate reach equilibrium and stay constant ( ) .Y i Stationary equilibrium in the goods market is achieved when output equals demand (excess demand is null), and output remains stable ( ) .8), the IS equilibrium equation is obtained in the goods market when In the (Y, i) space, the IS curve has the usual negative slope, because the propensity to spend is assumed to be less than one, resulting in a positive Keynesian multiplier.
Let us assume that the interest rate slides from any point on the IS in Figure 4, where output equals demand.At this point, let us say point M under the IS curve, a lower interest rate increases consumption and investment, which generates excess demand in the goods market.The excess demand in this model, whose dynamics are expressed in equation ( 8), increases output.This is the meaning of the arrows pointing to the right from point M. The same reasoning applies to a point such as N, where excess supply reduces output, as shown by the left-pointing arrow that starts at that point.The mechanism allowing for an excess demand to increase production is seen more clearly in the classic 45º diagram such as the one shown in Figure 5.The 45º curve shows the equilibrium points in the goods market, when production (Y ) equals demand (Y d ).Curve D is a linear representation of equation ( 10) with a positive independent component and a slope that is positive, but smaller than one.This reflects a propensity to spend that is lower than one.Curve D has the interest rate as a parameter.
Figure 5 shows that if the interest rate falls from i o to i 1 , curve D shifts upwards to D 1 .Excess demand appears at the initial output level Y 0 .Such excess demand increases output.The equilibrium point moves from A to B, and production raises to Y 1 .
Figure 5
Stationary equilibrium is reached in the money market when supply equals the demand for money, i.e. excess demand is null leading to a stable interest rate The LM curve slopes positively.Let us now assume that output increases starting from any point of the LM in Figure 6, where money demand equals supply.Larger output takes us to a point such as M, to the right of the LM where the demand for money is also greater, creating excess demand in that market.If we know the dynamics of the money market represented by equation ( 9), excess demand will shift the interest rate upwards.This explains the upward-pointing arrows starting from point M. Symmetrically, to the left of the LM at a point such as N, there is an excess money supply, hence the interest rate has to fall, as shown by the downwards-pointing arrow starting at that point.In a general equilibrium condition we can see the standard IS-LM model is stable, and directional arrows show that we are in the presence of an asymptotically stable equilibrium.At point (Y 0 , i 0 ) the model reaches stationary equilibrium.Finally, since the second condition for stability, that corresponding to the positive determinant, is equivalent to the standard IS-LM model is stable when the LM curve slopes more steeply than the IS.
Starting with Modigliani (1944), the literature has extended to the special case those stability conditions that apply to the standard case.Below, an alternative approach is proposed which delivers the opposite result.
IS-LM STABILITY: THE SPECIAL CASE
When the propensity to spend is greater than one (C Y + I Y > 1), the Keynesian multiplier is negative (k < 0) and the IS has positive slope.
If the IS and the LM have positive slopes, we have to determine which slope is greater in order to reach stability.
Our review of the literature found that stability conditions in the special case are reached based on the same equation system (8) and ( 9).As a result, the same necessary and sufficient conditions for stability are achieved as in the standard case.So, even if C Y + I Y > 1, the requirement is the same as in the traditional model: the LM must slope more steeply than the IS: ) .
This procedure assumes that the dynamics of the goods market, as represented in equation ( 8), still holds for the special case 4 .
What does equation ( 8) tell us?That excess demand in the goods market increases output.Therefore, parameter ε is positive.
If the IS has a positive slope, that is, when C Y + I Y > 1, is it still true that excess demand in the goods market will increase output?
In the traditional case, excess demand in the goods market disappears when output rises, as represented in equation ( 8).
Let us assume a fall in the interest rate.A falling interest rate increases investment and consumption, and generates excess demand in the goods market.Excess demand increases output, which, in turn, through its effect on private expenditure, increases demand again.Since demand increases less than output-because the marginal propensity to spend is less than one-the excess demand shrinks.In the movement towards a stationary equilibrium, the excess demand continues to fall until reaching zero, thus reestablishing equilibrium in the goods market.
For a positively sloped IS, the dynamics of adjustment in the goods market is different.We argue that the differential equation reflecting the adjustment dynamics in this market must be reformulated to apply the usual stability conditions.Omission of this aspect has led to the incorrect treatment of stability conditions in the atypical IS-LM model.
Let us assume, as previously, that the interest rate falls, investment and consumption rise, and excess demand is created in the goods market.What adjustment mechanism would be required to restore equilibrium?What must happen to output for such excess demand to be canceled out?
If output increases as a response to excess demand, given that the propensity to spend is greater than one, the demand would increase more than production; thus, the excess demand in the goods market, rather than reducing, would rise.The excess demand would continue to grow indefinitely, preventing the system from reaching a steady state.This dynamic is clearly explosive.
For the process to be convergent output must contract rather than increase.As output shrinks, demand for goods falls more than output because the propensity to spend is greater than one, thus reducing excess demand.That is, in this special case, excess demand in the goods market reduces output.Or, likewise, if the Keynesian multiplier is negative, an increase in demand reduces, and does not increase, output.
We propose an alternative adjustment dynamics equation in the goods market by reproducing the mechanism we have just described.In the differential equation for the goods market, output increases should be related to excess supply in the market (EOB = Y -Y d ), not to excess demand.The equation that adequately reflects the new adjustment dynamics of the goods market can be written as: Excess supply in the goods market is defined by: Thus, the new adjustment dynamics in the goods market, which replaces equation ( 8), is now written as: This formulation is consistent with Gandolfo's mathematical requirement (1996, p. 328) noted above.
Since there is no change in the money market, the new differential equation system in the atypical case is comprised of: As before, linearizing the equation system ( 14) and ( 9) using Taylor's expansion yields the following: Or in its compact form: Where: The necessary and sufficient conditions for this differential equation system to be stable are, as before, for the matrix Ω 1 trace to be negative and its determinant positive.
iv) Tr
The first condition is met with no restrictions.To meet the second condition, we must have: Otherwise put, when the propensity to spend is greater than one and the Keynesian multiplier is negative, for IS-LM to be stable, the IS must be steeper than the LM: We reach the same conclusion using the phase diagram system.Since no change has occurred in the money market, let us focus our attention on the positively sloping IS.
Let us assume a fall in interest rates starting at some point of the IS in Figure 8, where output equals and demand.At point M, under the IS curve, the lower interest rate increases investment and consumption, creating excess demand in the goods market.In the framework of this model, such excess demand in the goods market -the dynamics of which are expressed in equation ( 14) -contracts, and does not increase, output, as is erroneously assumed.This explains the arrow to the left that starts from point M.
By the same reasoning, at a point such as N there is excess supply that increases output, as symbolized by the right-pointing arrow in Figure 8. 10).The difference is now that its slope is greater than one because the propensity to spend is greater than one.In these conditions, equilibrium can only occur in the quadrant where the value of output is negative.We will return to this point in section 5 below.
Let us simulate, as before, the interest rate falls from i 0 to i 1 .In Figure 9, if the interest rate falls, curve D moves upwards to D 1 .At the initial production level Y 0 , there is an excess demand in the goods market.This excess demand reduces output; it does not increase it.The equilibrium point shifts from A to B, and output falls toY 1 .
Figure 9
In general equilibrium, Figure 10 combines figures 1 and 2 in a single diagram, showing that the IS-LM model is stable when the IS is steeper than the LM5 .
Figure 10
In Figure 11 the phase diagram shows that when the LM is steeper than the IS, the model is unstable, of the saddle point type.
STABILITY, INSTABILITY AND EXPANSIVE MONETARY POLICY
An additional argument in discussing instability conditions is provided by changing an exogenous variable and evaluating how the endogenous variables will adjust.In stable models, after changing an exogenous variable, these variables temporarily depart from their initial equilibrium values, but then change until they reach a new stationary equilibrium.When models are unstable, endogenous variables move away from the initial equilibrium and never reach a new stationary equilibrium.
In Dernburg and McDougall (1976) the effects of an expansive monetary policy are illustrated within the context of a positively sloped IS.Their presentation -which assumes stability when the LM is steeper than the IS -is useful to provide evidence of the error of considering that, in this model, excess demand in the goods market leads to increased output.Dernburg and McDougall (1976) repeat the exercise with a diagram such as that shown in Figure 12, assuming that the money market is always in equilibrium and the goods market may be temporarily in disequilibrium.
In Figure 12, an expansive monetary policy (an increase in M s ), shifts the LM to the right.According to Dernburg and McDougall: The shift in the LM curve causes the interest rate to fall immediately to i 0 1 .This again means that intended investment exceeds saving and that income must therefore rise.But the rise in income stimulates further investment because of our assumption that investment is a function of the level of profits and income.Consequently, the original monetary disturbance causes income to rise; this cause additional investment to be induced, and this, in turn, causes income to rise still further.
Will income continue to rise indefinitely, or will a new equilibrium point be found?In the present case the rise in income causes the interest rate to rise and to dampen investment more rapidly than the rise in income stimulates further investment.In other words the rate of interest that keeps the money market in equilibrium rises more rapidly than the rate of interest that keeps the product market in equilibrium.Consequently, a new stable equilibrium point will be reached at Y 1 and i 1 .The path of adjustment again follows the arrows upward along the LM curve" (Dernburg and McDougall 1976, p. 244) Figure 12 shows the above mentioned authors' graph and argument.The red arrows highlight their proposed adjustment dynamics.
Figure 12
What is the mistake in this reasoning?Positing that when the interest rate falls to point B, the resulting excess demand increases output.In this atypical IS-LM model excess demand reduces output, rather than increasing it.Therefore, since at point B there is excess demand in the goods market, output should fall and the blue arrows should point to the left, not to the right.Therefore, the model is unstable and the system does not reach a stationary equilibrium.
Figure 13 shows the stable case, when the IS is steeper than the LM.As before, the expansive monetary policy shifts the LM to the right, and short term equilibrium is reached at point B, with a lower interest rate and the same production level.As in B the lower interest rate has created excess demand in the goods market, output must fall to restore equilibrium in that market.As output falls, so too does the demand for money, and the interest rate slips as well.Downward adjustment continues (following the arrows) along the new LM curve until it reaches (Y 1 , i 1 ).At this point, a new stationary equilibrium is reached.
Figure 13
The fact that an expansive monetary policy should reduce instead of increasing the interest rate, as in the unstable case shown Figure 11, is consistent with Samuelson's observation seven decades ago.By simulating the effects of an expansive monetary policy in different scenarios, including a propensity to spend of greater than one, he established that: ..the only theorem which remains true under all circumstances is that an increase in the amount of money must lower interest rates if equilibrium is stable.(Samuelson, 1941, p. 120).
If the LM were steeper than the IS, as in Figure 12 -which reproduces Dernburg's and MacDougall's arguments (1976) -an expansive monetary policy would increase the interest rate.
STABILITY, NEGATIVE KEYNESIAN MULTIPLIER AND THE RELEVANCE OF THE ATYPICAL CASE
We have argued that in the atypical case, IS-LM model is stable when the IS is steeper than the LM.In this section we will show that in this special stable case, the model yields results of no useful economic significance.
Firstly, in the model's linear version, output and interest rate equilibrium values are negative.Secondly, since the Keynesian multiplier is negative, expanding demand results in falling output.
We have adopted the linear version of the IS-LM model to illustrate how to obtain analytical equilibrium values for output and the interest rate.Equilibrium in the goods and money markets is given by: Where C I A o o 0 0 + = is autonomous private expenditure, M s is the nominal money supply, P is the price level, and all the parameters are positive.
The IS and LM equations are drawn from these formulae, respectively: where k a a is the Keynesian multiplier.
Equilibrium in the goods market can also be expressed in such a way that the consequences of having a negative Keynesian multiplier will be more clearly apparent.In this case, as shown in equation ( 19), greater demand -prompted by rising autonomous expenditure or a sliding interest rate -leads to lower output.
The IS and LM slopes are given by: To meet the second stability6 condition, the IS must be steeper than the LM: Furthermore, the stationary equilibrium values for output and the interest rate are obtained from equations ( 17) and ( 18).
Given the stability condition b 1 + k(a 1 + a 3 )b 0 > 0, the stationary equilibrium values of output and the interest rate are evidently negative, which makes no economic sense.
Figure 14, where equations ( 17) and ( 18) are represented with the IS slope larger than the LM slope, reproduces this result.Stationary equilibrium is reached in the quadrant where interest rate and product values are negative.
Figure 14
This IS-LM model configuration leads us to a situation where exogenous shocks produce analytically strange outcomes.
For example, in this model, an autonomous expansion of consumption or investment -or, if we introduce government to the model, expanding government expenditures -would lead to falling output and interest rates.This is because greater expenditure generates excess demand in the goods market, thus reducing output.Falling output reduces demand for money in the money market, thus bringing the interest rate down.
In Figure 15, in view of a larger autonomous expenditure, the IS curve shifts upwards.In the new point of intersection with curve LM, which has not shifted, both output and the interest rate are smaller.
A SOLUTION TO THE IMPASSE: THE CLASSICAL MECHANISM
Although in strictly mathematical terms our results are correct, in theoretical and empirical terms the notion that firms reduce production to meet a rise in demand for their goods, or that the equilibrium values of output and the interest rate are negative, render the results absurd.
In formal terms, the only way to justify the position of Modigliani and later economists would be to abandon the Keynesian model, which assumes that the adjustment in the goods market occurs through changes in production and the adjustment in the money market through changes in the interest rate.If, as in the Classical case -where real money demand does not depend on the rate of interest -production is determined in the money market and the interest rate in the goods market, it can be shown, with the same formal arguments used, that the stable case is that where the LM is steeper than the IS.This definition of Classical model corresponds to Hicks (1937).
If we adopt the Classical adjustment mechanism, the dynamics of the determination of the interest rate and output would be given by: Equation ( 22) indicates that the interest rate rises when there is excess demand in the goods market and equation ( 24) shows us that production rises when there is excess supply in the money market.
Linearizing the equation system ( 22) and ( 23) yields the following: The necessary and sufficient conditions for this differential equation system to be stable are: The first condition is met without problem.To meet the second condition, we must have, that is, when the propensity to spend is greater than one but the adjustment mechanism is Classical, for IS-LM to be stable, the LM must be steeper than the IS: > However, none of the studies reviewed departed from the Keynesian adjustment mechanism when discussing the stabilities of the IS-LM model in the special case.
CONCLUSIONS
This paper argues that in the special case where IS is positively sloped, for the IS-LM model to be stable, the IS must be steeper than the LM.This challenges the literature that, starting with Modigliani (1944), holds that stability is reached when the LM is steeper than the IS.
Only Samuelson (1941), by simulating the effects of an expansive monetary policy in different scenarios, including the case of a propensity to spend of greater than one, proposes a result that is consistent with ours.
However, in this special stable case, the model yields results of no useful economic significance.Firstly, in the model's linear version, the equilibrium values for output and the interest rate are negative.Secondly, since the Keynesian multiplier is negative, expanding demand results in falling output.
In this paper one solution to the problem identified was raised.That solution is to abandon the Keynesian adjustment mechanism and replace it with the Classical mechanism where the interest rate is determined in the goods market and production in the money market.In this case, the IS-LM model is stable when the LM is steeper than the IS.
Figure 15
Figure 15 Waldo Mendoza Bellido IS-LM Stability Revisited: Samuelson was Right, Modigliani was Wrong 145 | 7,470.4 | 2015-08-01T00:00:00.000 | [
"Economics"
] |
Smectic and soap bubble optofluidic lasers
Soap bubbles are simple, yet very unique and marvelous objects. They exhibit a number of interesting properties such as beautiful interference colors and the formation of minimal surfaces. Various optical phenomena have been studied in soap films and bubbles, but so far they were not employed as optical cavities. Here we demonstrate, that dye doped soap or smectic liquid crystal bubbles can support whispering gallery mode lasing, which is observed in the spectrum as hundreds of regularly spaced peaks, resembling a frequency comb. The lasing enabled the measurement of size changes as small as 10 nm in a millimeter-sized, $\sim$100 nm thick bubble. Bubble lasers were used as extremely sensitive electric field sensors with a smallest measurable electric field of 110 Vm$^{-1}$Hz$^{-1/2}$. They also enable the measurement of pressures up to a 100 bar with a resolution of 1.5 Pa, resulting in a dynamic range of almost $10^7$. By connecting the bubble to a reservoir of air, almost arbitrarily low pressure changes can be measured while maintaining an outstanding dynamic range. The demonstrated soap bubble lasers are a very unique type of microcavities which are one of the best electric field and pressure microsensors to date and could in future also be employed to study thin films and cavity optomechanics.
Introduction
A soap bubble is made of a thin film composed of water and surfactants, which encloses air and forms a spherical shape [1].Bubbles are interesting from a variety of perspectives including mathematics, physics, chemistry and even biology, due to their similarity to the biological membranes.Soap bubbles were studied in terms of the interference colors [2], geometry [3,4], fluid motion [5,6], mechanical oscillations [7] and recently, branched flow of light [8].A bubble can also be made purely from surfactant-like molecules.Specifically, smectic liquid crystals, which molecules form well defined molecular layers, are used for this purpose [9][10][11][12].The mixture of soap and water can actually also form liquid crystalline (lyotropic) phases.In fact, the origin of the word "smectic" is related to soap.Smectic bubbles have some very unique properties.Since the smectic liquid crystal molecules form well defined layers, the film thickness is quantized, that is to say, the film has an integer number of molecular layers.For example, bubbles with a completely uniform thickness of 11 nm as large as 1 cm in diameter were made [9].Therefore, quite uniquely, the ratio of the bubble size to its thickness can be as large as 6 orders of magnitude.The layered structure of the smectic bubbles makes their thickness completely stable and enables virtually infinite bubble lifetime as long as the air volume inside the bubble is kept constant.
Optical resonances called whispering gallery modes (WGMs) are formed when the light is trapped in a spherical object due to multiple total internal reflections and circulates near the surface of the sphere.WGMs were studied in various geometries including solid hollow cavities in the form of glass microbubbles [13,14] and glass capillaries [15], which were employed for sensing applications [16] and lasing [15,17].However, WGMs were not studied until now in soap bubbles.
Here we show that dye doped soap and smectic bubbles can support WGM lasing.Due to their fluid nature the bubbles are very soft compared to their glass counterparts, which influences the lasing and enables some unique applications.
Lasing of regular soap bubbles
Millimeter-sized soap bubbles doped with a fluorescent dye, inflated at the end of a capillary and illuminated with a pulsed laser were used to demonstrate the laser emission (Fig. 1a).To create the bubbles, a capillary was dipped into the soap solution and the air pressure in the capillary was briefly increased to inflate the bubble (Supplementary Video 1).The soap bubble could be later inflated or deflated to reach the desired size (0.4-4 mm).Based on the interference colors [2] observed in reflection (Fig. 1b) the thickness of the soap film was typically in the range 100-800 nm and changed slowly in time as the bubble aged.The soap bubbles had a relatively uniform fluorescence intensity, except at the end of the capillary where there was a larger amount of the solution (Fig. 1c).
When a bubble was illuminated by a pulsed laser it emitted laser light due to the circulation of WGMs.When the center of the bubble was pumped, laser light was generated in all vertical planes except the ones intersecting with the capillary (Supplementary Fig. S1).This was observed as a bright rim, except on the opposite side of the capillary (Fig. 2a).When the bubble was illuminated at an edge, the generated light preferentially circulated in one plane, which was observed as a bright narrow ring (Fig. 2b).(b) Another bubble but illuminated at its rim, which generates a bright ring of circulating light.The gain region in this case is in the shape of a vertical patch.WGMs circulating in a narrow band in a vertical plane experience the most gain and are therefore observed as a bright ring.(c) A typical emission spectrum when a bubble is pumped above the lasing threshold.(d) Lasing intensity summed over a 5 nm wide spectrum range as the input laser pulse energy was increased shows a typical threshold behavior.The dashed lines are a guide to the eye.(e) Spectrum emitted by a soap bubble attached to the end of a capillary when the pulse energy of the pump laser is increased.At lower energies only fluorescence is observed, while at higher energies sharp peaks appear.(f) Slab waveguide modes for a soap film at the wavelength of 555 nm.The shaded area corresponds to the approximate thickness range of the bubbles used in the experiments.(g) Spectra of consecutive pump laser pulses.Displaying a smaller wavelength range reveals a shift of the modes towards longer wavelengths.Sharp spectral lines were present in the spectrum of the emitted light (Fig. 2c).A clear lasing threshold was observed when the pump laser energy was increased (Fig. 2d and e).Typical thresholds were in the order of a few µJ.The WGMs can be approximated as where r is the radius of the bubble, n eff is the effective refractive index of the mode, λ is the wavelength of the mode and l is the azimuthal mode number.Since the bubbles were usually relatively large (∼1 mm) compared to the wavelength of the light, as a first approximation, the light propagation can be described as a flat slab waveguide.The effective refractive indices (Fig. 2f) were calculated using standard equations for light propagation in a flat slab waveguide [18], where the slab is the soap film (n = 1.364) and both sides are air (n = 1).Calculating the positions of WGMs by using the effective index for flat slab is accurate to a few percent compared to the WGMs simulations [19].For a typical bubble with a thickness of 600 nm the effective refractive indices were 1.319 and 1.307 for first radial order TE and TM modes, respectively.For a typical soap bubble thickness not only the fundamental, but also higher guided modes may be present.In the lasing spectrum no regularly spaced peaks could be observed.This is probably due to a large number of possible modes and irregularities of the soap film.With each pulse of the pump laser the spectrum changed slightly (Fig. 2e).Some modes disappeared, while some others appeared, but there was a general trend towards a shift towards longer wavelengths.The shifting of the peaks in time was probably caused by a combination of size, thickness and refractive index changes.However, from the spectrum alone we can not determine how each of the three parameters contributes to the shift in the spectrum.Lasing was also achieved in free floating soap bubbles.A bubble was blown into a tank filled with CO 2 so that it was floating on the surface of the CO 2 .The bubble was illuminated by the pump laser by using an optical fiber and a lens (Fig. 3a and b).The same lens and optical fiber were also used to collect the generated light and send it to the spectrometer.Spectra with sharp spectral lines corresponding to lasing were also observed in this case (Fig. 3c).
Lasing of smectic bubbles
To better control the thickness and the refractive index of the film, bubbles made out of smectic liquid crystal were employed.Specifically, 8CB was used, which has a smectic A phase at room temperature.Under transmitted light the smectic bubbles looked completely uniform (Fig. 4a) indicating that the whole surface had thickness of equal number of molecular layers (Fig. 4b).If some region had one more or one less molecular layer this would result in a well visible island or hole, respectively [11].The thickness of the bubbles could be approximately controlled by how fast they were inflated, with the faster inflation resulting in a thinner bubble.Typically the bubbles employed in this study had a thickness in the range of 30-120 nm, which was determined from the intensity of the transmitted light [20].
Imaging a bubble in between crossed polarizers [21] indicates an uniform radial orientation of the molecules (Fig. 4c and d).From the resulting colors it can be deduced that the molecules are oriented perpendicular to the surface, that is in the bubble radial direction.
When pumping with a pulsed laser, the dye doped smectic bubbles emitted laser light visible as a ring and two bright spots (Fig. 5a).The smectic bubbles were extremely stable and the experiments on a single smectic bubble could be performed for up to 30 min.For a typical bubble thickness (30-120 nm) only TM 0 and TE 0 modes exist (Fig. 5b).Since the smectic film is birefringent (n o = 1.51 and n e = 1.68), the bulk refractive indices of TE and TM modes are different.The effective refractive indices were calculated in the same way as for soap bubbles, by using standard equations for light propagation in a flat slab waveguide [18], but using the following refractive indices: n air = 1 on both sides of the film and for the smectic film, n o for TE polarized light and n e for TM polarized light.At a thickness of 50 nm the TE 0 has a significantly larger effective refractive index (1.049)than TM 0 mode (1.014) and is the only mode lasing, as also identified by measuring the emission intensity through a polarizer.
At this typical bubble thicknesses higher radial modes are not allowed.This is a significant advantage over bulk WGM microcavities which will typically display lasing of a number of different modes when their size is large.On the contrary, the emission spectrum above the lasing threshold was very regular with equally spaced spectral lines, resembling a frequency comb (Fig. 5c).The emission typically spanned 10-20 nm.For a 1.8 mm diameter bubble the average free spectral range (FSR) was 0.06 nm and the spectrum contained ∼250 lasing peaks.The azimuthal mode number of WGMs at the wavelength of 605 nm is l ≈ 10 000 ± 100.The contrast between the fluorescent background and the lasing peaks in the spectrum was excellent with the background being practically zero.From the measured FSR and the diameter of the bubble determined from the image (1.90 ± 0.02 mm), the effective refractive index is calculated to be 1.06 ± 0.01.For TE modes, using equations for a slab waveguide, this corresponds to a bubble thickness of 57 ± 5 nm.Smectic bubbles thinner than ∼30 nm did not generate laser emission due to too low effective refractive index causing significant radiative light leakage and the fact that a thin layer also results in very few dye molecules to be present, resulting in a low gain.The estimated gain length of Pyrromethene 597 recalculated for a 100 nm thick slab waveguide is 180 µm [22,23] which is much larger compared to the bubble thickness.
The size of smectic bubbles can be changed in real time, within a few seconds, by inflating or deflating them in at least the size range 0.5-5 mm resulting in a large tunability of the FSR in the range of 0.02-0.2nm.This enables to use them as largely tunable frequency-like light source.This tunability is significantly larger than for their solid state counterparts.
When the volume of the bubble is increased or decreased very quickly, stable circular floating islands with larger thickness than the rest of the bubble can be formed (Fig. 6a) [9,11].The islands can also be created by illuminating a thicker bubble (≳ 200 nm) with a pulsed laser, where each pulse can create one island (Supplementary Fig. S3).The size of the islands can be from a few micrometers up to being as large as the bubble itself.Islands larger than ∼100 µm emitted laser light upon excitation with a pulsed laser by the circulation of light around the perimeter of the island (Fig. 6b).The resulting WGM lasing was observed as periodic peaks in the spectrum (Fig. 6c).A typical effective refractive index calculated for the island size and FSR was 1.3 ± 0.1.This enabled stable laser emission with periodic lasing peaks with a wide range of FSR values (0.1-0.9 nm) and azimuthal mode numbers from 700 to 6100 depending on the size of the islands.
Precise measurement of the smectic bubble laser size
The changes in the bubble size were measured by following shifts of the lasing peaks (Fig. 7a).
The relative size change is approximately equal to the relative wavelength shift: From the size measurement in time it is visible that the size of the bubble slowly decreases in time (Fig. 7b) due to the diffusion of air molecules through the thin wall (APPENDIX A) [24].For real applications the size decrease could be compensated by pumping additional air into the bubble, stabilizing the size, or simply reinflating the bubble when it becomes too small, thus enabling almost indefinitely long measurements.The diameter change rate was 1.1 µm/s for a 1.9 mm bubble.The time dependence was fitted to the equation for the air diffusion Eq. ( 8) and subtracted from the data.The remaining residuals are extremely small with a standard deviation of only 10 nm (Fig. 7c).This means that size changes as small as 10 nm can be measured, corresponding to an exceptional relative accuracy of 5 × 10 −6 .A part of the observed noise may come from the environment, such as air pressure fluctuations and air currents, so that the actual optically limited measurement noise may be even lower than observed.To be able to track the spectral shifts, the shift in between two measurements (laser pump pulses) should be smaller than half the FSR.In the case of faster changes, FSR was measured instead, which enabled the measurement of arbitrarily large and fast size changes with a still excellent accuracy of 50 nm.By taking into account the typical effective refractive index, the absolute size of the bubble can be measured via the FSR value by using Eq. ( 1).
Smectic bubble lasers as electric field sensors
The smectic bubble lasers are very soft and therefore sensitive to external factors, which can change their size and shape.Simultaneously these changes can be measured very precisely via the lasing spectrum.Therefore, this makes soap bubble lasers excellent sensors.
The bubble lasers were employed as electric field sensors and electrically tunable laser sources.Already applying less than 1 V across the bubble deformed it [25] and caused a measurable shift in the laser peaks (Fig. 8).The sensitivity was 0.008 nm(V/mm) −1 , which is on par or larger than best previous reports using WGM microcavities [26][27][28].The minimum measurable field determined from the measurement noise and the sensitivity was 110 Vm −1 Hz −1/2 .With this, our sensors outperforms most micro DC electric field sensors including sensors based on microelectromechanical systems [29,30], piezoelectric [31], optical [32][33][34] and nitrogen-vacancy centers in diamond [35,36].Further, in contrast to some other electric field sensors, the bubbles are not conducting and therefore do not distort the measured electric field.
Smectic bubble lasers as pressure sensors
The compressibility of the air within the bubble and the ability to measure the size extremely precisely enables the use of the bubbles as pressure sensors.The spectral shift for pressure change is ∆λ/λ ≈ −∆p/3p 0 , where ∆p is the pressure change and p 0 is the initial pressure.In all the following measurements and calculations the initial pressure was equal to the atmospheric pressure.This gives a size independent sensitivity ∆λ/∆p of 2 × 10 −3 nm/Pa.This is several orders of magnitude more than previously reported for WGM microcavities, such as hollow polymer spheres (2 × 10 −6 nm/Pa) [37] and a glass microbubbles (4 × 10 −7 nm/Pa) [38].
To test the performance of the pressure sensor, the pressure around the bubble was controllably changed and the diameter of the bubble was calculated from the measured FSR value.When the pressure was increased by 400 Pa above the atmospheric pressure the bubble diameter decreased by 52 µm (Fig. 9a).In all experiments the bubble was connected to a capillary which provided an additional fixed volume of air.Since a larger volume of air is easier to compress, this significantly increases the pressure sensitivity.
A pressure difference between the interior and the exterior of the bubble is given by the Laplace law.For a small change in the external pressure, the size of the bubble changes such that the pressure difference satisfies Laplace condition.The pressure in the bubble is changed due to the change in the volume of the bubble.If we assume that the process is isothermal, the pressure inside the bubble after the change, p ′ i is: where p i is the initial pressure in the bubble, V and V ′ are the initial and final volumes of the bubble, respectively.Since the bubbles are connected to the capillary, the volume of both the capillary (V c , which remains unchanged) and the bubble (V b ) must be included in Eq. ( 3), therefore The Laplace pressure can be written explicitly as the difference between the pressures on both sides of the film: where p L and p ′ L are Laplace pressures before and after the change, p 0 is the initial external pressure, and ∆p is the change in the external pressure.Combining Eq. ( 3), ( 4) and ( 5) an equation that relates the change in external pressure to the change in bubble size is obtained: In the approximation of small radius change, the relation between the small size and pressure change is The volume of the bubble is also changing because of the diffusion of air molecules through the thin wall.Therefore if the measured pressure change is not significantly faster than the diffusion, this needs to be taken into account.Specifically, the shrinkage rate depends on the size and the size is dependent on the pressure and diffusion.To account for both contributions, Eq. ( 6) and ( 9) should be combined.It is worth noting that the sensitivity for V c = 0 is not dependent on the size of the bubble, so not dependent on the air diffusion.On the other hand, for V c /V b + 1 = 60 the sensitivity changes for 12% in one minute, which is significant and should be taken into account.Further, the temperature change of the air has the same effect on the bubble volume as the pressure change, so it needs to be taken into account as well (APPENDIX B).In practice, a separate thermometer with a resolution of at least 6 mK could be used to compensate for temperature changes.
The sensitivity is calculated by solving Eq. ( 6) while taking into account Eq. ( 2).At small additional volumes the sensitivity is inversely proportional to the total volume of the system V c + V b , or more conveniently written as the total volume relative to the volume of the bubble V c /V b + 1 (Fig. 9b).In the experiments up to V c /V b + 1 = 60 was used providing a sensitivity of 0.12 nm/Pa.
Above approximately V c /V b + 1 = 1000, for a surface tension of 0.024 N/m, the surface tension starts to have a significant contribution and make the bubble unstable.At V c higher than V crit c the bubble is not stable any more and collapses by itself due to the Laplace pressure.V crit c is dependent on the surface tension and size of the bubble (Fig. 9c).At the critical volume the sensitivity diverges.Therefore by having a very large additional volume, an arbitrarily high sensitivity can be in principle achieved.In the experimental case ( The bubble also collapses at any V c > 0 if a large enough positive outside pressure is applied.This maximum measurable pressure change (Fig. 9d) depends on the additional volume in the capillary V c and is calculated by solving Eq. ( 6).It is the maximum pressure change at certain V c at which the solutions exist.The maximum measurable positive pressure change is also limited by the minimum size of the bubble which is still capable of lasing (∼0.5 mm).The maximum measurable negative pressure change is limited by the maximum size of the bubble (∼5 mm) for which the spectral lines can still be distinguished by the spectrometer.
The resolution of the measurements is the smallest detectable pressure change (Fig. 9e) and was calculated from the sensitivity and the smallest measurable wavelength shift.By increasing V c the smallest detectable pressure change can be almost arbitrarily small, however, at the expense of the maximum measurable pressure.For a stand alone bubble (V c = 0) the smallest measurable pressure change is 1.5 Pa, while the largest positive pressure change is almost 10 7 Pa, that is 100 bar.Namely, a pressure change of 10 7 Pa would shrink a bubble from 3 mm to 0.65 mm, well within the lasing regime.This gives an exceptionally large dynamic range of almost 10 7 .For the largest additional volume used in the experiments (V c /V b + 1 = 60) the smallest measurable pressure change is 0.024 Pa, while the largest positive and negative measurable pressure changes are 1400 Pa and −1.9 × 10 4 Pa, respectively.Therefore by only changing the additional volume V c , the pressure measurement range can be tuned in a huge range while preserving outstanding dynamic range of ∼ 10 5 (Fig. 9f).
These values far exceed the performance of the best pressure sensors of a comparable size [39-41, 43-46, 55, 61] (Supplementary Table S1).Especially the dynamic range of the bubble laser pressure sensors is several orders of magnitude larger than other pressure microsensors demonstrated till now.Further, the significant advantage of the bubbles over pressure sensors using solid transducers, such as strain gauges and diaphragms, is that the smectic layer is very soft as well as practically infinitely stretchable, due to its fluid nature.The bubbles also do not experience any material fatigue.Further, the bubble pressure sensors do not need any calibration, since their sensitivity depends only on the compressibility of the air (for V c = 0).This is in stark contrast to almost all sensors, where variability in the mechanical properties and nonlinearities of the material (e.g.diaphragms), hysteresis and manufacturing tolerances play a crucial role in the accuracy of these sensors.
Discussion
In conclusion, we demonstrated lasing in soap and smectic bubbles.Compared to solid or droplet microcavities, bubbles are made of a very thin film of fluid, resulting in truly unique optical and mechanical properties.The size of a bubble laser can easily be changed by a factor of 10 or more in real time.Only a multimode fiber and a simple lens are needed to pump the bubble and observe the lasing.The smectic bubbles have a completely uniform thickness down to a molecular level.The thin walls support only a single optical mode, therefore they can be employed as tunable laser sources with frequency-comb like emission, with orders of magnitude larger tunability compared to solid microlasers.Small thickness also results in small weight and therefore almost perfectly spherical shape.The bubbles are extremely stable in time and could in principle survive indefinitely.Lastly, the bubbles are very easy to form, not requiring any complicated manufacturing procedures and if a bubble is destroyed, a new one can be formed within seconds.These outstanding properties enable a number of prominent applications.Their exceptional sensitivity enables sensing of electric field and pressure with record high sensitivity, resolution and dynamic range.In future, they could be also used to measure other quantities, which change the bubble shape, such as air flow or magnetic field by doping with magnetic particles.On the opposite, to enhance the stability of the bubbles, they could be made solid, by either cooling them to crystallize or using polymerizable liquid crystals or liquid crystal elastomers [48].Apart from the demonstrated applications, the bubble lasers could be used in future to study thin films and basic phenomena such as for example cavity optomechanics [49][50][51].
Materials and Methods
A mixture of water, glycerol and liquid hand soap (containing sodium laureth sulfate) in a 2:1:1 volume ratio was used to make the soap bubbles (n = 1.364).Alternatively, 2.5% of sodium dodecyl sulfate dissolved in 1:1 volume ratio of water and glycerol was used.For the gain, 0.1% of fluorescein sodium salt was dissolved in the above mixtures.For the smectic bubbles, 4'octyl-4-biphenylcarbonitrile (8CB) liquid crystal doped with 0.2% Pyrromethene 597 (Exiton) was used.Plastic pipette tips (Eppendorf) of different sizes (10 µl, 20 µl and 100 µl) were used as capillaries to inflate the bubbles.The pipette tip was connected by a thin tube to a glass syringe (Hamilton, 1.0 ml).The syringe was mounted onto a microfluidic syringe pump (New Era Pump Systems, NE-1002X) or pushed by hand to inflate the bubble.The syringe and the connection tube to the pipette tip were filled with water, except for the last few millimeters in order to decrease the air volume.The capillary was dipped into the solution so that a small amount entered the capillary.For electric field tuning the bubble was placed inbetween two flat electrodes with an area of 18×20 mm and a spacing of 5.46 mm.One of the electrodes had a hole with a diameter of 2.6 mm through which the pipette tip was inserted so that the bubble was in the center between the electrodes.For pressure measurements the pipette tip was inserted through a hole into a transparent container, which was connected to a pressure controller (Elveflow, OB1, 0-20 kPa).The bubbles were observed with an inverted microscope (Nikon Ti2) through a 4×, 0.13 NA objective.The bubbles were pumped with a nanosecond pulsed optical parametric oscillator (Opotek, Opolette 355) at 494 nm for the soup bubbles and 525 nm for the smectic bubbles, at a repetition rate of 20 Hz.The resulting fluorescent light was captured by a high resolution spectrometer (Andor Shamrock SR-500i) at 0.007 nm spectral resolution.
for air P air = (5.3± 0.5) × 10 −15 mol Pa −1 m −1 s −1 .Expressing γ in units of moles instead of the volume of air exiting the bubble, we can calculate the thickness of the bubble as This value is consistent with the value calculated from the measured effective refractive index.
APPENDIX B: Temperature change effects
Since we assume that the size of the bubble changes according to the ideal gas law the temperature change (T ′ − T ) of the air inside the bubble has the same effect on the bubble volume as the pressure change (p ′ i − p i ).The spectral shift for both pressure and temperature change is ∆λ/λ ≈ −(∆p/3p 0 − ∆T /3T 0 ), where ∆p is the pressure change, p 0 is the initial pressure, ∆T is the temperature change and T 0 is the initial temperature.The equation ( 6) that relates the change in external pressure to the change in bubble size taking into account the temperature change is In the approximation of small radius change, the relation between the small size, pressure and temperature change is Apart from the temperatures changes in the environment, the pump laser could have an effect as well.To estimate the temperature rise of the air in the bubble we consider the total absorbed pump laser energy which depends on the concentration of the dye in the bubble and bubble thickness.The extinction coefficient of fluorescent dye in smectic bubbles is 80 000 M −1 cm −1 [54] and its concentration is 5.3 mM.Since the bubbles are very thin (∼50 nm) only a small fraction of the incident pump energy is absorbed (∼0.2%).A fraction of that energy is converted into fluorescence light so the remaining energy that is converted into heat is given by Q = (1 − QY)E abs , with the quantum yield (QY) of the fluorescent dye is 0.77 [22].At pump pulse energy of 3 µJ that gives us the total heat energy of 2 nJ.We assume that this energy is evenly distributed across the whole bubble with the heat capacity of smectic layer 1.3 µJ/K and the air inside (heat capacity 5.1 µJ/K).This gives a temperature increase of ∆T = 0.3 mK for each laser pulse, which is a relative change of 10 −6 compared to room temperature.That is lower than the relative pressure resolution (5 × 10 −6 ), so it does not limit our measurements.Also during the experiments the pump laser was always turned on and the bubble was in the stationary regime meaning there are no changes in the temperature during the measurement.It could only have an effect as the measurement is started and the laser is turned on.In order to test this, lasing was observed immediately after switching on the pump laser (Supplementary Fig. S6).Apart from the change in size due to air diffusion, there was no additional size change due to the increase in temperature on a short timescale.Even with longer observations the increase in temperature seems to have no effect (Supplementary Fig. S5). Figure 6: Change in the diameter of a bubble in time measured by following the lasing peaks immediately after switching on the pump laser (red arrow).The bubble size was slowly decreasing due to air diffusion through the thin wall but there was no additional change in size due to the increase in temperature.When the decrease in size due to air diffusion is subtracted from the data, only a small noise remains and have no larger deviations right after switching on the pump laser.
Figure 1 :
Figure 1: A soap bubble formed at the end of a capillary.(a) Scheme of the experimental configuration.A dye doped soap bubble is inflated at the end of a horizontal capillary and illuminated by a laser from below.The soap film is composed of a layer of water, surfactant molecules and fluorescent dye molecules.(b) A soap bubble in reflected light.Interference colors are visible.(c) Fluorescence image of a dye doped bubble.
Figure 2 :
Figure2: Soap bubbles emitting laser light.(a) A soap bubble illuminated in the center (red cross) by a pump laser.The drawing shows the gain regions (red) which are formed at the positions where the laser beam (green) passes through the soap film.The WGMs circulating in any vertical plane (black lines) pass through the two gain regions and are therefore excited.(b) Another bubble but illuminated at its rim, which generates a bright ring of circulating light.The gain region in this case is in the shape of a vertical patch.WGMs circulating in a narrow band in a vertical plane experience the most gain and are therefore observed as a bright ring.(c) A typical emission spectrum when a bubble is pumped above the lasing threshold.(d) Lasing intensity summed over a 5 nm wide spectrum range as the input laser pulse energy was increased shows a typical threshold behavior.The dashed lines are a guide to the eye.(e) Spectrum emitted by a soap bubble attached to the end of a capillary when the pulse energy of the pump laser is increased.At lower energies only fluorescence is observed, while at higher energies sharp peaks appear.(f) Slab waveguide modes for a soap film at the wavelength of 555 nm.The shaded area corresponds to the approximate thickness range of the bubbles used in the experiments.(g) Spectra of consecutive pump laser pulses.Displaying a smaller wavelength range reveals a shift of the modes towards longer wavelengths.
Figure 3 :
Figure 3: Free floating bubbles.(a) Experimental setup for the free floating soap bubbles.(b) A photo of a larger floating soap bubble, which is illuminated by the pump laser, and emits laser light.(c) Spectrum from a ∼2 cm diameter free floating bubble.
Figure 4 :
Figure 4: Smectic bubbles.(a) A smectic bubble in transmitted light.(b) Scheme of the molecular structure of the film.Drawn is an example of a three layer smectic film composed of ordered liquid crystal and dye molecules.(c) Same bubble under crossed polarizers.The typical bright cross is observed indicating uniform molecular orientation either parallel or perpendicular to the bubble surface.(d) Same bubble with an additional waveplate inserted between the polarizers.From the resulting colors it can be deduced that the molecules are oriented perpendicular to the surface, that is in the bubble radial direction.
Figure 5 :
Figure 5: Lasing of smectic bubbles.(a) Image of a lasing smectic bubble.The cross indicates the pump laser beam position.(b) Effective refractive indices for different modes calculated for a slab waveguide at the wavelength of 610 nm.The shaded area correspond to the approximate thickness range of the bubbles used in the experiments.The bulk refractive indices of TE and TM modes are different (dashed lines).(c) Frequency comb-like spectrum of a lasing smectic bubble 1.75 mm in diameter.
Figure 6 :
Figure 6: Lasing of smectic islands.(a) Two smectic islands (indicated by the arrows) on a bubble.(b) Image of a lasing island observed as a red ring (white arrow) on the surface of the bubble (indicated by the red arrows).(c) Lasing spectra of two smectic islands of a different diameter d.
Figure 7 :
Figure 7: Measurement of smectic bubble size.(a) Lasing spectrum from a 1.75 mm smectic bubble in time.The color represents the intensity.The white line highlights the shifting of a single lasing mode in time due to the decreasing size of the bubble.(b) Change in the diameter of a bubble in time measured by following the lasing peaks in a.The initial size was measured from the FSR.The bubble size was slowly decreasing due to air diffusion through the thin wall.Since the size measurement is very precise, the plot looks like a perfectly smooth line.(c) When the decrease in size is subtracted from the data, only an extremely small noise remains.
Figure 8 :
Figure8: Electric field measurement with smectic bubble lasers.Shift of the lasing modes when an electric field is increased continuously to 30 V/mm and back to zero.The continuous decrease in the size due to diffusion was subtracted from the data.
Figure 9 :
Figure 9: Pressure measurement with smectic bubble lasers.(a) Change of the bubble size measured from the lasing spectrum as the pressure outside the bubble was increased by 400 Pa above the atmospheric pressure.The volume of the capillary in this experiment was 80 µl and the initial volume of the bubble was 4.2 µl, resulting in V c /V b + 1 ≈ 20 for this particular case.(b) The calculated pressure sensitivity of a 2 mm bubble as a function of total volume relative to the volume of the bubble (V c /V b + 1).The dotted line would be for a bubble with zero surface tension.The red circle in all panels represents the maximum V c used in the experiments.(c) Critical additional volume relative to the volume of the bubble, at which the bubble becomes unstable.Above this additional volume the bubble collapses due to the Laplace pressure.(d) Maximum measurable positive and negative pressure.(e) The minimum measurable pressure change.(f) The dynamic range for positive pressure change, that is the ratio between the highest and the lowest measurable pressure changes.
Figure 2 :
Figure 2: A dye doped soap bubble at the end of a capillary.(a) In white light interference colors can be observed.(b) The same bubble when illuminated by the pump laser.The arrow shows the direction of the incoming laser beam.A bright ring, which crosses the laser spot, is formed due to circulating light.
Figure 3 :Figure 4 :
Figure 3: Smectic island lasers.(a) Two examples of smectic islands (regions of higher thickness) floating on a smectic bubble.Each island has a different thickness, therefore each displays a different interference color.The smectic bubble (marked by the red arrows) is very thin and despite the enhanced contrast of the image, it is barely visible.(b) The islands can also be created by illuminating a thicker bubble (> 200 nm) with a pulsed laser (black cross), where each pulse can create one island.The islands move away from the laser spot (green arrow).(c) Two examples of lasing from smectic islands, which are visible as bright rings.
Table 1 :
Diameter of a bubble in time measured from FSR.The pressure outside the bubble is increased to 400 Pa above the atmospheric pressure and then decreased to 240 Pa.The continuous decrease in size is due to air permeation through the smectic layer.The comparison of smectic bubble laser as pressure sensor to different pressure sensors of comparable size. | 8,586.2 | 2023-06-26T00:00:00.000 | [
"Physics"
] |
Physical Economics and Optimum Population Density
Here we examine the economic problem of what is the optimum population density (300 persons/km^2)? We use mathematics from Economics, and concepts such as the marginal propensity to consume to determine it. We calculate the optimum taxation rates (48%). We invent a concept author call the Minderland, comparable to the Cusack hinterland factor (6.47). Data from the US, Europe and the Great Toronto Region are used along with Linear Regression techniques.
Introduction
In examining the Regional Disparities of the Saint John New Brunswick, Canada region, I theorized that there might be a population density that is optimum for economic performance of a City and Region and Country New Brunswick has a population density of just 10 people per square km 2 . That is far too low to compete with the likes of Toronto, Vancouver, and Boston. It's interesting that the population density of the 1970s Toronto Centered Region (now the GTA, Hamilton, Kitchener Waterloo, Barrie, and Peterborough) has a 2006 population of 6,993,689 Million people. With an area of 22,270 km2, (Saint John region is 24,000 km 2 ) there is a population density of 314 persons/km2 -not far off the 300 persons/km 2 optimum considering that the Toronto Region is the best performing Economic Region in Canada. Does the relationship hold elsewhere?
The United States
Author looked at the US States. The basic data of population per state and GSP were available on the Wikipedia internet site. In the USA, note the following maps: One of gross domestic product (GDP) per State and one of the Populations per State. There is a direct correlation between Population of a given State and the GDP per state. A statistical regression analysis was done and it was determined that the range of populations were from 544,270 in Wyoming to 36,961,664 in California. The GSP ranged from $25,442 Million in Vermont to $1,846, 757 Million in California the R squared correlation coefficient was 0.97 for these two variables. When the GSP PER CAPITA was compared, there was found to be no correlation (R squared=0.36). This suggests that the higher a state's population, the higher will be the GSP/GPP with confidence. The R 2 for the US states alone is 0.44slightly higher. This suggests that if the government wants to increase the absolute GNP of a Province, Region or a State or even country, it must increase the population to the optimum density of about 300 persons/km 2 . This will not necessarily increase the GSP per capita. The function: Y=45.8994825 × +33,224 has a derivative of 45.9 or say 46. This is the incremental amount added to a Regional Economic Domestic Product, viz 1 person per km 2 adds $46 to the GSP or GPP.
This means that for each person in density per square kilometer in density a region has, then income goes up by $46 per capita per year. It appears to be a linear function. Therefore, higher densities mean higher gross products. If we consider Critical Mass from nuclear Fission in the physical sciences: A numerical measure of a critical mass is dependent on the neutron multiplication factor, k: K=f/l Where f is the average number of neutrons released per fission event and l is the average number of neutrons lost, either by leaving the system or being captured in a non-fission event. When k=1, the mass is critical. If total taxes were lowered from 42.8% to say 40%, then: MPC=0.367879 × 1/40.=0.919698. Therefore "K critical" would change. This means that the K critical is dropped by 5.45% for a 1% change in the total tax rate.
Note: This is K the critical Taxrate is asymptotic at about 48 (the derivative of the regression plot is 45.899 $/person/km 2 ) and minimized at about 55%. This means that the optimum Taxrate is 55% since K critical must be >0.
Tax/Y MPC K critical
Above this figure, energy is wasted or divided among more people, thus a lower per capita income. Below this figure, the critical mass is not reached. That's why the universe is stable at the Gravitational constant and Planks constant. And that is why the cities populations perform at a maximum optimally at a HLF=6.71 $/km 2 .
Here's one for the Physicists: The Gravitational Field is the same as the field surrounding a city centre. In large mega metropolis, the satellite cities are areas of high mass density (black holes?) and when we substitute dollars $ for Mass kg, we have a black hole at the city centre. This has further implications. For example, if we examine an outward migration, the k critical fact decreases and people who stayed behind must SPEND MORE to maintain a critical factor of 6. k=f/l k=[(6-1)/( 7-1) ]/1; /[(7-1)]=5/(1/6) 5/6 × 6/1/1/6
× 6=30
If k=30 instead of 6 as the critical factor, then one must spend $46.00 × 30=1380 person/km 2 . instead of 3000 person/km 2 so the population must increase dramatically and spending must increase as a portion of income. Therefore, once outward migration starts, its impossible to stop unless those with deep pockets spend. Federal Government policy toward the Maritime Provinces has left the Maritime economy in shambles due to its outward migration policies. However, when a similar analysis was undertaken for 57 countries, their GDP per Capita showed absolutely no correlation to Population Density. The Regression analysis showed a scatter plot (Below) that has an R squared of 0.001 i.e., no correlation between the two variables. For the City of Saint John, New Brunswick, for example, with an area of 318 km 2 and a population of about 66,000, for every 318 people population increase, the GPP goes up by $856,000 to the local economy. This linear relationship works in reverse also. If further studies were undertaken, it would seem that perhaps regional economies such as Toronto or Chicago for example may show a correlation for these two variables. Data would have to be collected for regional economies if GDP is available for a region as opposed to a state or country. There was found a weak correlation between City Pop Density and Per Captia Income (0.1), but a weaker correlation with City Population (0.07) What does this all mean? Since there is a high correlation of State population and Per Capita Income, drawing people to one's state will increase the Per Capita Income. Drawing people to one's country will not. Neither will drawing people to one's City. The reason must lie in the fact that the Cities have hinterlands. Hinterlands, or the State as a general proxy, are what is important to a per Capita income level.
European data
As another check on the regression line, we use the European Union Data. What would be the reason for higher density jurisdictions to have a higher income. An economist would say that people in those areas are creating more wealth and thus earn more. From my time in Ontario, one of the wealthiest jurisdictions in the world, I noticed empirically that people are willing to take more and greater risk. I suspect psychologists would find that higher densities lead to greater risk taking -the "Rat Race". Perhaps a Psychologist will undertake a study to see if this is true [1,2].
The Cusack Hinterland factor
If we look at the Hinterland Area and Population, we find there is a linear relationship between percentage of HL Area/Total Area and the Population of the HL. This is described by the equation: Population: HL=946087+410788 (Area HL/Total Area of the Jurisdiction) For Quebec, for example, which has a well-defined hinterland, the population of Pop HL/Area HL=Density HL=6.06 (per km 2 ) Why is there a factor of 6.06. Because the Earth's population is approximately 6.6 billion people. There are 1.48 billion km2 less Antarctica, Artic, and the Sahara Desert (1.05 km 2 ) 1.48 km 2 × 6.06=8.9 billion people which will be reached in 2030.
The maximum population the world can sustain at a multiplier effect of 7 and a savings rate at 1/7 is 6 persons per<EMAIL_ADDRESS>since the population of the earth is currently at that figure, not further population increase will permit an increase in wealth, unless patent value were to increase. This is treated below. This works because all wealth created comes from the land. Even services ultimately come from the land because the person providing the service had to consume food and location or a computer terminal to provide the service. All wealth is either a service or commodity (material and labour) or a transfer of wealth (such as legal fees for example). But since all wealth comes from the land then all the wealth on earth (World GDP) divided by the Earth land surface=6.6 billion people/1.0 billion km 2 =6. The closer a country, such as Canada, comes to the 6 persons/km 2 , then the more it is based on Natural Resources, as Canada is. The larger the Hinterland is, either the country has stolen hinterland from another country or it is a more serviced based economy. If the sea is used for economic gain besides seafood, such as oil exploration, then the effectively the amount of land under 'cultivation" is increasing. This formula implies many interesting relationships of wealth to the land. But average worldwide wealth depend on about 1.5 acres per person. Even your computer comes from the earth (plastic=oil, copper, glass, etc.) The arrangement of these natural resources is a mental constructs and do not take form until the earth's resources are put into play. We will come to the end of growth of wealth when no more ingenious mental constructs can alter the use of the Earth's natural resources. Then we would have to look off the planet for additional wealth. This land exploration is what drives economies. That is why Western Europe -the explorers-became so wealthy. They increase the land under their control. If a city/region/ country exceeds the 6 persons/km 2 , then that country will have poverty e.g., India, If a city/region/country is under the 6 person/km 2 , then that country will grow e.g., Australia. Vancouver is short one million people so there will be a net influx of people into the Region. As a check, Saint John, New Brunswick has a hinterland of 24,000 km 2 approximately. Therefore, the population should be around 144,000. It is actually 145,000. so it works.
MINDerland
If we consider that the area of land on the planet is limited, to increase consumption beyond present capacity to match population growth, we therefore must get more out of our patents if we are to increase output. This is why the future for cities is in their creative ideas or for example university output. The future hinterland I the city is the mind which I call MINDerland.
If we consider the formula: Therefore, Canadians earn the same as their US counterparts. If we look at the Environmental perspective, if the price of M, materials increase (scarce natural resources), then either the Y Productive output must increase, the Pat Patents must increase or the L Labour must decrease (unemployment increase) or decrease the number of workers i.e., the population Either this means that we get smarter with better patents, have a lower standard of living, or endure higher unemployment or have less population. If we ask the question, why is Saint John so run down, the answer is that the citizens are poor. Why are they poor? Because their parents were poor. Why were their parents poor? Because they had no jobs: Why did they have no jobs? Because companies were moving out instead in to the City. Why were the companies moving out? Because there were not enough dollar circulating to reach critical mass. Government and Corporate Executives decided against Saint John in favor of Toronto or Halifax or even Moncton [1,2]. Saint john has no future unless immigrants decide to move here despite the poor economy. A city can't grow beyond its hinterland. Saint John has the same regional area as Toronto, but doesn't have the hinterland. A region is supposed to be 2.5% of the hinterland. Saint john has no hinterland for a modern first world economy. If Canada had economic rights that Pierre Elliott Trudeau wrote about in his Charter of Human Rights, the system would collapse, due to inefficient allocation of resources. The poor would not have the foresight to manage money in an efficient way. The inefficiency would cause the economy to collapse as did Marxism in the U.S.S.R. in about 80 years' time. "The poor you will have with you always. "
Conclusion
So, we see that mathematics from Physics can and should be used to be applied to Economic problems. The mathematics is well understood and thus sheds light on these sorts of problems such as population density and optimum taxation. | 3,000.2 | 2017-03-30T00:00:00.000 | [
"Economics"
] |
On the distribution of the Cold Neutral Medium in galaxy discs
The Cold Neutral Medium (CNM) is an important part of the galactic gas cycle and a precondition for the formation of molecular and star forming gas, yet its distribution is still not fully understood. In this work we present extremely high resolution simulations of spiral galaxies with time-dependent chemistry such that we can track the formation of the CNM, its distribution within the galaxy, and its correlation with star formation. We find no strong radial dependence between the CNM fraction and total HI due to the decreasing interstellar radiation field counterbalancing the decreasing gas column density at larger galactic radii.However, the CNM fraction does increase in spiral arms where the CNM distribution is clumpy, rather than continuous, overlapping more closely with H2. The CNM doesn't extend out radially as far as HI, and the vertical scale height is smaller in the outer galaxy compared to HI with no flaring. The CNM column density scales with total midplane pressure and disappears from the gas phase below values of PT/kB =1000 K/cm3. We find that the star formation rate density follows a similar scaling law with CNM column density to the total gas Kennicutt-Schmidt law. In the outer galaxy we produce realistic vertical velocity dispersions in the HI purely from galactic dynamics but our models do not predict CNM at the extremely large radii observed in HI absorption studies of the Milky Way. We suggest that extended spiral arms might produce isolated clumps of CNM at these radii.
INTRODUCTION
The interstellar medium (ISM) exists in a complex, multi-phase, form from hot ionised gas, to cold molecular star-forming clouds (e.g. Mc-Kee & Ostriker 1977;Draine 2011;Tielens 2005;Klessen & Glover 2016). Matter cycles through these phases as stars formed in the cold dense molecular clouds release energetic feedback and momentum into their surroundings, which then influences the subsequent evolution of the galaxy by setting the equilibrium disc structure and depletion time.
A crucial component in this phase matter cycle is the cold neutral medium or CNM. This consists of the neutral atomic hydrogen (H ) with temperatures around 100 K (Kulkarni & Heiles 1987;Dickey & Lockman 1990) that makes up the bulk of the neutral gas in galaxies, alongside the warm neutral medium (WNM) which has temperatures of order 10 4 K. These atomic phases exist together in pressure equilibrium such that they can be considered as a two-phase medium (Field et al. 1969;Wolfire et al. 2003;Bialy & Sternberg 2019). It is from the CNM that gas is compressed and cooled to form individual molecular clouds where stars are born (e.g. McKee & Ostriker 2007; ★ E-mail<EMAIL_ADDRESS>Girichidis et al. 2020). The CNM is consequently a gateway and a pre-condition for star formation in galaxies, and determining its distribution is crucial for any theory of galaxy evolution. In this paper, we seek to use numerical models to investigate the broad trends of where the CNM is located in galaxies, what sets this distribution, and how it corresponds to star formation.
The phases of the ISM are usually explained in terms of pressure equilibrium (Field et al. 1969;McKee & Ostriker 1977). As summarised by Ostriker & Kim (2022), the midplane pressure is in vertical dynamical equilibrium with the weight of the ISM. This midplane pressure then determines the balance between hot and twophase (warm+cold) gas such that they have median pressures within 50% of each other. Ultimately this leads to a near linear relationship with star formation (Σ SFR ∝ 1.2 ) due to the importance of feedback in setting this disc pressure. Similarly, the existence of the cold neutral gas phase should also be related to the local pressure environment. Wolfire et al. (2003) have shown that there is a minimum thermal pressure for the CNM phase to coexist with the WNM, which has a value of th / ≈ 3000 K cm −3 at the solar circle. As the pressure falls with galactocentric radius, this leads to a predicted limit of < 18 kpc for the CNM in the Milky Way.
One of the key ways of observing the CNM is through deep surveys of the absorption of the 21 cm line of neutral hydrogen against radio continuum sources since the emission is dominated by the warm phase. Using the Millenium Arecibo 21 cm absorption line survey, Heiles & Troland (2003) found that in the solar neighbourhood the CNM component makes up about 40% of the neutral gas, whereas more recently Murray et al. (2018) estimated the CNM was 28% of the total H but that 20% of the H was in a thermally unstable phase. Dickey et al. (2009) combined multiple 21 cm emission surveys of the galactic plane to deduce that the CNM in the Milky Way was similarly located to the WNM. Moreover, recent work with the Australian Square Kilometer Array Pathfinder (ASKAP) by Dickey et al. (2022) found that in the inner Milky Way, the CNM has a similar scale height to the molecular gas, but in the outer galaxy the CNM and WNM are well mixed and maintain a constant CNM/WNM ratio out to radii of at least 40 kpc. Similarly, Strasser et al. (2007) found CNM at large galactocentric radii in spiral arms in the outer galaxy. Soler et al. (2022) investigated the filamentary structure in the H emission toward the Galactic disk using a Hessian matrix method. The identified filamentary structures correspond to roughly 80% of the H emission and most likely consist of CNM material due to their higher density. The mean scale height of the filamentary H , was lower than that of the total H in the outer galaxy, suggesting that the CNM and WNM have different scale heights in this regime. These results are at odds with the aforementioned pressure equilibrium models, which suggest that in the low-pressure environment of the outer galaxy, the CNM phase should become increasingly less prevalent (Wolfire et al. 2003).
Another way of probing the CNM phase is through the [C ] 158 m emission line, as this is the main coolant of the diffuse ISM. As the emission is sensitive to the density of the gas, that from the WNM is a factor of ∼ 20 less bright than that from the CNM. This means that [C ] 158 m emission from diffuse regions can be assigned to the CNM phase. This exercise was done by the GOTC++ Herschel/HIFI survey published by Pineda et al. (2013). In contrast to H absorption studies these authors found that the CNM column density decreased more rapidly with galactocentric radius than that of the WNM, and consequently, the fraction of the atomic gas in the cold phase was much lower in the outer galaxy (∼ 20%).
The distribution of the neutral atomic medium is something that has not just been a focus of observational studies, but is also a topic of interest for theoretical numerical studies. In cosmological simulations, resolving the scale height of the H gas is a key challenge for reproducing Milky Way type galaxies (e.g Hopkins et al. 2018). Previously, the H scale heights of simulated galaxies were of order of kiloparsecs (Bahé et al. 2016;Marinacci et al. 2017), however recent FIRE (Gensior et al. 2022) simulations have more realistic heights of ∼ 100 pc in the galactic centres, rising to ∼ 800 pc in the outer galaxy. The authors postulate that the solution to faithfully reproducing galaxy scale heights comes from the inclusion of a realistic multiphase medium with cold gas and small scale stellar feedback. Such models typically omit a detailed chemical treatment of the gas, and even FIRE has a mass resolution of over 6 × 10 4 . To obtain a finer description of the CNM, therefore, simulations of gas in an isolated galaxy are needed.
One approach is to simulate gas in stratified boxes (e.g. Hennebelle & Iffrig 2014;Walch et al. 2015;Girichidis et al. 2016;Rathjen et al. 2021). However, these studies have mainly focused on molecular gas and star formation rather than H . One prominent example is the work of Kim et al. (2013) which showed that the star formation rate surface density varied almost linearly with the midplane pressure set by the weight of the ISM. As earlier stated, the local pressure balance of the ISM is also theorised to be extremely important in setting the CNM fraction (Wolfire et al. 2003).
Unfortunately, stratified box simulations typically only cover an area of a few square kiloparsecs of an idealised disc and are therefore unable to investigate the full galactic distribution. One approach is to embed a co-rotating high-resolution box within a galaxy simulation as is done in the Cloud Factory simulations of Smith et al. (2020). These models, and stratified boxes from the FRIGG project (Iffrig & Hennebelle 2017), have been used to interpret the orientations of the H filaments identified in The H /OH/Recombination-line survey of the inner Milky Way (THOR, Beuther et al. 2016), as reported in Soler et al. (2020). However, the simulations of Smith et al. (2020) only reach a high resolution in a 3 kpc box in the star-forming disc. To investigate the true galactic distribution of the CNM we need the entire galaxy to be included. We, therefore, turn to an updated version of the isolated hydrodynamic galaxy models presented by Tress et al. (2020) and Tress et al. (2021), which reach parsec resolution or better in the cold gas across the galaxy while including full hydrogen chemistry.
The paper is structured as follows. We first outline the details of the galaxy models and how they are analysed in Section 2. Then we describe the radial and vertical distribution of the CNM that we find in Section 3. In Section 4, we investigate how this relates to the local pressure conditions in the galactic disc, and the effect of spiral arms. Finally, in Section 5 we give our conclusions.
Isolated Galaxy Simulations
Our simulations are carried out using the A code (Springel 2010) with custom physics modules to describe star formation and cold dense gas. For a full description of our numerical setup, see Tress et al. (2020). However, we briefly summarise the major features and differences from previous work here.
The models of Tress et al. (2020Tress et al. ( , 2021 consist of two simulations of a galaxy disc consisting of dark matter halo (6 × 10 11 M ), bulge (5.3 × 10 9 M ), stellar disc (4.77 × 10 10 M ), and gas disc (5.3 × 10 9 M ). The first model is isolated and develops large scale spiral structure but with breaks and bifurcations, whereas the other is perturbed by a fly-by from a companion galaxy, inducing the formation of strong spiral arms. For our analysis, we focus on an updated version of the Isolated case as this best lends itself to radial averaging. However, in Section 4.4 we will revisit the latter Interacting model.
The original simulation assumed a constant interstellar radiation field consistent with the solar neighbourhood value 0 = 1.7 in Habing units (Draine 1978;Habing 1968). However, this is not a good description of the outer galaxy where the field will be lower due to the low star formation rate. To account for this we computed the time-averaged radial star formation rate surface density profile from 280 to 320 Myr in the original Tress et al. (2020) model during the steady state period of the galaxy. We then fit an exponential function to the star formation density and scale it such that, at the radius where the star formation rate surface density is equal to the solar neighbourhood value, it takes the value of 0 . The final expression takes the form where is the interstellar radiation field, the galactocentric radius in units of kpc, and 0 the radius where the star formation surface density matches the value of the solar neighbourhood. Figure 1 shows the average radial star formation rate density profiles from the original Figure 1. The star formation rate density as a function of radius from the isolated galaxy simulation of Tress et al. (2020) between 280 − 320 Myr. The blue line shows the exponential function used to represent the scaling of the interstellar radiation field. models, and the radial dependence of the new varying interstellar radiation field. (We checked subsequently that the star formation rate was not significantly different in the new runs with varying field.) Shielding from the field is computed assuming a constant shielding length of 30 pc using the T C algorithm . We also assume that the mean free path of the FUV photons in the Milky Way is much less than the scale on which the SFR density is varying, this is a reasonable assumption out to galactocentric radii of 10 kpc in the Milky Way (Wolfire et al. 2003) but may be less valid in the outer galaxy. Nonetheless, while this is an approximation, it is far better than assuming a constant value for the interstellar radiation field. Cosmic ray ionisation is also included at a constant rate of = 3 × 10 −17 s −1 for atomic hydrogen, with the rates for other chemical species (H 2 , He, etc.) scaled appropriately. However, in the atomic gas (i.e. when < 1), photoelectric heating dominates over cosmic ray heating by an order of magnitude or more (see e.g. Figure 10 in Wolfire et al. 2003).
Heating and cooling of the gas is computed simultaneously with the solution of the chemical rate equations. We use the latest version of the cooling functions used by Clark et al. (2019).A temperature floor of 20 K is imposed on the ISM to prevent anomalously low temperatures occurring close to our resolution limit. For our chemistry, we adopt the approach first used in A by Smith et al. (2014), namely the NL97 network of which utilises the network for hydrogen chemistry first presented in Glover & Mac Low (2007a,b). A simplified description of CO formation and destruction is also included in this network, based on Nelson & Langer (1997). However, we do not analyse CO in this work.
For simplicity, we assume that the gas has solar metallicity throughout our simulated galaxy and that the dust-to-gas ratio is the same as in the solar neighbourhood. In reality, the Milky Way has a metallicity gradient of approximately −0.037 dex kpc −1 (Arellano-Córdova et al. 2020), and similar results are found in other nearby spiral galaxies (see e.g. Kreckel et al. 2019). Therefore, in our simulation, the gas in the outer galaxy ( > 12 kpc) is roughly 2-3 times more metal-rich than we would expect to be the case in a real Milky Way-type spiral galaxy. However, this will have only a minor effect on the temperature of the atomic ISM, since the dominant cooling and heating processes have the same dependence on metallicity, provided that the dust-to-gas ratio scales linearly with the metallicity (Wolfire et al. 1995).
The resolution of the simulations depends on two criteria. Firstly we set a target gas mass of 300 , which means that by default will refine or de-refine the grid so as to keep the masses of all of the grid cells within a factor of two of this value. However, on top of this we require that the Jeans length is resolved by at least four resolution elements to satisfy the Truelove criteria and avoid artificial fragmentation (Truelove et al. 1997;Greif et al. 2011;Federrath et al. 2011). This leads to a mass resolution of ∼ 10 M between densities of 10 −22 < < 10 −21 g cm −3 , which equates to spatial scales of a parsec or smaller. We are therefore confident that the CNM is well resolved.
Star formation is modeled via sink particles (Bate et al. 1995;Federrath et al. 2010), which are non-gaseous particles that represent collapsing regions of gas that will form small (sub)clusters of stars. These are formed by checking if regions of gas with a density greater than 10 −21 g cm −3 are unambiguously bound, collapsing, and accelerating inwards. Only if these criteria are met will the gas be replaced with a sink particle, which can then accrete additional mass that falls within a radius of 2.5 pc of the cell if it is gravitationally bound to it. As star formation is still inefficient at these scales we assume a 5% star formation efficiency (Evans et al. 2009) and associate a stellar and gas fraction to each sink.
Using the model of Sormani et al. (2017), we sample the IMF and associate supernovae with the massive stars as described by Tress et al. (2020). For each supernova, we calculate an injection radius, which is the radius of the smallest sphere centred on the supernova that contains at least 40 grid cells. If the injection radius is smaller than the expected radius of a supernova remnant at the end of its Sedov-Taylor phase, we inject thermal energy from the supernova; otherwise, we inject momentum (e.g. Gatto et al. 2015). Mass is returned with each supernova explosion such that when the last supernova occurs the gaseous component of the sink is exhausted. The sink is then turned into a star particle. To account for type Ia supernovae, we also randomly select a star particle every 250 years and create a supernova event at its position (based on the star formation history of M51, as quantified by Eufrasio et al. 2017, which is similar to our model). Figure 2 shows our updated Isolated galaxy simulation at a time of 300 Myr, where we carry out our analysis. Note, that we performed a smooth averaging procedure on the original simulations to investigate if choosing a single snapshot in time biased the result. We found that the radial profile followed an identical trend radially but with a smoother shape. However, the averaging process masked real fluctuations due to spiral structure and so chose instead the snapshot analysis.
Data Analysis
In order to explore how the CNM varies as a function of galactocentric position and height above the midplane, we bin the simulation output in two different ways. Firstly, we bin the data in cylindrical coordinates in order to study whether the CNM fraction varies with galactocentric radius. We use 60 bins linearly from an inner radius of 0.1 kpc to an outer one of 15 kpc. In the Isolated galaxy simulation, the star formation is mostly contained within a radius ∼ 10 kpc and so both the inner and outer disc is included. To examine the impact of features such as spiral arms we also bin the data in angle using 60 bins from 0 to 360 degrees. For this binning scheme we include all the material within a vertical extent of 1.5 kpc of the disc midplane.
Secondly, we use a vertical binning scheme where we bin the data in radius and vertical extent rather than angle. The same radial interval is used, but the -direction is now binned from −1.0 kpc to 1.0 kpc relative to the galactic midplane using 60 bins.
In each bin we record both the total H mass, and the H mass with a temperature of 200 K or lower, which is chosen to be consistent with the CNM features identified in absorption by Heiles & Troland (2003). (Although it should be noted that the majority of the gas in the CNM phase is much colder than this.) The CNM fraction is taken to be the mass of cold H with < 200 K divided by the total H mass in the bin. The H surface density Σ HI is then simply the total H mass divided by the area of the bin. The exact choice of temperature threshold chosen for the CNM definition makes only a small (10 − 20%) difference to the results. For example, when testing we found the total CNM fraction for the galaxy to be 0.164 with a threshold of 150 K, 0.192 with 200 K, and 0.227 for 300 K. The 200 K threshold is consequently a reasonable compromise, and has the advantage of being consistent with the observational literature.
In order to investigate the origin of the CNM fraction we calculate the mid-plane thermal pressure th in units of the Boltzmann constant, , using the below sum for all cells in the bin with | | < 50 pc in the radially binned data, where is the cell number density, the cell temperature, and the weighting variable, which can be either mass or volume. Note, that we investigated changing the threshold in at which material was included and found it made little difference to the average below a threshold of 100 pc. Following the approach of Kim et al. (2013) we exclude dense regions with number density > 50 cm −3 from the average. Above such densities regions may become self-gravitating, at which point their pressure would no longer be reflective of the global disc conditions. We also calculate the vertical turbulent pressure by taking the sum over all cells in the midplane defined above, where is the cell density, is the vertical velocity, and is the weighting variable of the cell. We divide by for comparison with the thermal pressure.
To investigate the scale height of the H and CNM we use our vertically binned data and fit a Gaussian as was recently done by Bacchini et al. (2019) for THINGS galaxies, using the equation where is the number density in the bin, 0 the number density at the midplane, the distance from the midplane, and the scale height. The fit is performed using the curve_fit routine. As a second check of the vertical extent we also calculate the massweighted position and its dispersion, in each of our radial bins to investigate local variations.
As discussed in Section 2.1 the simulation uses the sink particle method to track the amount of mass going into bound collapsing regions. We use these to find the growth rate of the stellar mass in each sink. To calculate this we compare the analysed time snapshot with the previous snapshot created by the simulation. If a sink has no precursor we assign it as newly created sink mass, but if it does, then we take the difference between the stellar masses to calculate how it has grown. We then divide by the time between snapshots to get the rate. The sinks are then binned spatially in the same manner as the gas mass and added to get the total star formation rate Σ SFR from all the sinks in the bin. Figure 2 shows a top-down view of the CNM column density alongside similar maps of the full atomic hydrogen distribution and the H 2 distribution. The CNM closely follows the distribution of the molecular gas. For the inner 12 kpc of the galaxy disc we calculate the area covering fraction of material with surface density greater than = 10 19 cm −2 (chosen from a visual inspection of Figure 2) for the three species. H covered the majority of the disc (92%), but the CNM phase covers only a small fraction (5.5%) of the disc surface. The molecular gas covers an even smaller fraction (3.4%) of the disc surface and a visual inspection of Figure 2 shows that the CNM coincides more closely with the H 2 than the total H . This suggests that the CNM is both intermixed with, and extends beyond the molecular clouds. Figure 3 shows the probability density distribution of the H surface density within this region. For comparison we select the pixels where the H 2 and CNM surface density was above our chosen threshold of = 10 19 cm −2 and plotted the probability density of H surface density in these pixels. Both the CNM and molecular hydrogen are found at similar H surface densities. This is in good agreement with recent results from 3D dust mapping that show that nearby clouds are surrounded by extended CNM (Zucker et al. 2021).
Overlap with Molecular Gas
The full H distribution is much more extended than the CNM and reaches out to 15 kpc beyond the star-forming disc. The morphology of the H also changes between the star-forming disc and the outer regions. In the outer disc the H is less filamentary and fills the surface area smoothly without voids, unlike the inner regions where it follows the arms more closely. In Figure 4 we show the CNM fraction throughout the galaxy model at the same time as shown in Figure 2. This shows that the differences between the CNM and H distribution are not simply due to the total mass of gas being larger in the H component and thus making it appear more extensive and space-filling. Instead the relative abundance of CNM to the total H , changes inside and outside of spiral features. Figure 5 shows the CNM fraction of the gas in the galaxy disc after the radial binning procedure described in Section 2.2. The CNM fraction peaks at small values of and then gradually decreases until values of ∼ 0.8. The distribution is similar to the Arecibo survey of Heiles & Troland (2003) but, as in that survey, a single mean value does not describe the variation in the CNM well. Figure 6 shows how the H surface density and CNM fraction vary with galactocentric radius. As our simulated galaxy is not designed to be an exact analogue of the Milky Way we will adopt the following terminology. The 'inner galaxy' is where < 2 kpc, the 'star-forming disc' is at 2 < < 9 kpc and the 'outer disc' is at > 9 kpc. The total H surface density peaks in the inner galaxy then declines radially until it is below 0.1 M pc −2 in the outer disc. The HI surface density has multiple peaks in the star-forming disc which correspond to spiral arms. The warm and cold neutral mediums contribute to this surface density in different ways. In the inner galaxy the CNM and WNM (where here we include all H with > 200 K in the WNM) are well mixed and follow a similar distribution, albeit at lower surface density Figure 6. The radially averaged gas surface densities (top) and CNM fraction (bottom). The CNM fraction decreases more steeply with galactocentric radius than the total HI surface density.
Dependence on Galactocentric Radius
for the CNM. In the outer disc there is CNM only out to a radius of approximately 12 kpc in contrast to the full H distribution, which continues to beyond 14 kpc. In contrast the molecular hydrogen falls below surface densities of 0.1 M pc −2 at a radius of 8 kpc and so is confined to the star-forming disc. Note that the H 2 surface density is a lower limit as some mass will be inside sink particles. Dickey et al. (2009) report significant CNM in the outer disc of the Milky Way out to very large galactic radii, however we only find CNM in our model at disc radii < 12 kpc, as opposed to < 14 kpc for H . We will discuss this further in Section 4.4.
The CNM fraction is approximately constant in the disc at a value of roughly = 0.2, apart from a peak at the galactic centre. This trend of constant with galactocentric radius is in agreement with the absorption studies of Dickey et al. (2009Dickey et al. ( , 2022, but is in tension with the GOTC+ survey of Pineda et al. (2013) who see a clear radial decreasing trend in the CNM fraction derived using the C line. Figure 7 shows the angular dependence of the CNM fraction at increasing radii. The contours denote the H surface density and take the form of diagonal stripes in this phase space due to the spiral arms. Unsurprisingly, there is a low CNM fraction outside the spiral arms where the H column density is low and the gas temperature is hotter. However, the CNM fraction remains non-negligible between 10-12 kpc where the spiral arms are less defined.
When comparing to observations it is useful to understand the temperature distribution of the gas. Most obviously because the CNM, WNM split is a division in temperature, but also because the CNM temperature, cool , will determine the observed spin temperature that is directly measured from absorption features. For example, Strasser et al. (2007) found that the values of cool derived from absorption features in the outer galaxy showed no strong dependence on galactic Figure 7. The CNM fraction calculated in radial and angular bins. The contours show the HI column density with contour levels of 5, and 10 M pc −2 respectively. The CNM fraction is higher in the spiral arms but is still substantial at 10 kpc where the arms are less defined. radius. In Figure 8 we show the mass-weighted mean CNM temperatures in our model. In agreement with observations we see a mostly flat trend. There is some hint that in the star forming disc the temperature is ∼ 10% lower than the outer disc but this is only a small variation. In the star-forming disc, the right hand panel of Figure 8 shows that the temperatures are cooler in the spiral arms.
Dependence on Vertical Extent
In the top panel Figure 9, instead of binning with angle we bin in height above the disc. Material with a significant CNM fraction pervades all the H contours, even at the lowest surface density of 0.1 M pc −2 . To investigate this further, in the bottom panel of Figure 9 we plot how the scale-height varies with radius in the total H compared to the H in the CNM phase. The scale height is determined by fitting to Equation 4 as described in Section 2.2. We find typical H scale heights of 100 pc in the star forming disc, which then flares in the outer galaxy as expected from observations (e.g. Yim et al. 2014;Bacchini et al. 2019;Randriamampandry et al. 2021;Soler et al. 2022) The distribution is noisy due to the spiral structure but the two populations follow each other closely. However, at 12 kpc they diverge due to the CNM disappearing from the gas phase. Unlike the H , the CNM scale height does not flare.
To investigate whether this behaviour is due to averaging we investigate another metric which does not depend on a fit. Figure 10 shows radial profiles and angular maps of the mass-weighted mean position and its dispersion. While the mean position is always close to the 0 position in the simulation there are variations of up to 50% of the scale height in Figure 9. The CNM's mean vertical position closely follows that of the full H distribution, but the dispersion shows very different behaviour. In the galaxy centre both the total H and the CNM have very low scale heights (∼ 50 pc). However, while the dispersion of the total H gas steadily rises, the CNM remains tightly confined below values of 100 pc. Note that this is in good agreement with the findings of Dickey et al. (2022) for the inner galaxy and star-forming disc. The CNM distribution, therefore, seems to consist of discrete 'clumps' of CNM which are at a variety of vertical heights.
Local Pressure Environment and Star Formation
To investigate how the local pressure environment corresponds to star formation and how this connects to the CNM fraction, we plot in Figure 11 the mass-averaged total (thermal and turbulent) pressure, , vs the local star formation rate surface density Σ SFR . The grey dashed line shows the empirically determined scaling found from the simulations of Kim et al. (2013) Σ SFR = 2.1 × 10 −9 / 10 4 1.18 (5) where Σ SFR is in units of M pc −2 yr −1 and / in units of cm −3 K. This follows the same trend as the scaling of our models confirming that the total midplane pressure plays a major role in setting the global star formation rate. The normalisation of our model is different, however it should be noticed that our treatment of star formation, using sink particles, is different from the star particle approach of Kim et al. (2013) who assume a constant efficiency per free fall time and this may account for the discrepancy. However, the observational results of Sun et al. (2020) are in good agreement with Kim et al. (2013) so further investigation may be needed. We also considered the volume-weighted pressure relation and found it had similar scaling behaviour but there was a discontinuous jump in Σ SFR at high stellar densities. This is most likely due to the small volume fraction of dense star-forming regions. For this reason we consider only mass-weighted pressures going forward. Regardless of how calculated, the dispersion in the relationship decreases at large star formation rate surface densities as high pressures are needed to create sufficient dense gas.
In Figure 12 we investigate how the total mass-weighted pressure corresponds to the CNM and star formation. The left panel shows the mass-weighted total pressure vs the CNM surface density. The dashed line shows a line with gradient of 0.6, which matches the upper envelope of the data. There is a large amount of scatter, but the CNM column density increases with pressure as would be expected.
The middle panel of Figure 12 shows the star formation surface density plotted against the CNM surface density in a manner analogous to a Kennicutt-Schmidt relation (Kennicutt 1998). Observationally, the star formation density scales with a power of ∼ 1.4 against the total gas surface density (H + H 2 ), as shown, for example, in de los Reyes & Kennicutt (2019). However, when compared to only molecular gas, the scaling is linear down to at least 0.1 solar metallicity (Bigiel et al. 2008;Whitworth et al. 2022). Using the S P linear regression feature we fit a power law to the data in logspace and found a slope of 1.56 with a standard error of 0.06 best described the data. That this value is closer to the Kennicutt total gas exponent scaling with star formation than the molecular value, suggests that the CNM is not gravitationally bound and actively forming stars.
The right-hand panel of Figure 12 shows how the CNM fraction corresponds to the local star formation rate surface density. There is a large scatter, particularly at low CNM fractions, however, the trend is clear that higher values of correspond to higher star formation rate surface densities. In the regions of our galaxy model with the most star formation, the CNM fraction is as high as 80%.
We can now investigate how the pressure contributes to the largescale trends discussed in Section 3.2. Figure 13 shows the massweighted pressure as a function of the galactic radius where the CNM fraction > 0. As expected the pressure steadily falls from the inner galaxy to the outer, albeit with local variations. However, strikingly the CNM vanishes from the gas phase at radii of about 12 kpc, which corresponds to where the local pressure falls below 1000 K cm −3 .
Dependence on Radiation Field
The original Isolated galaxy modelled in Tress et al. (2020) had a constant UV field and so it is possible to use this as a comparison to investigate the importance of the interstellar radiation field in setting the CNM distribution of the galaxy. Figure 14 shows the radial dependence of the CNM surface density, fraction, and vertical dispersion for the constant field of magnitude 0 . Most strikingly the CNM only extends to a radius of 10 kpc due to the higher field at large radii. This is the same extent as the star-forming disc in these models, and so the CNM no longer extends into the outer galaxy. The surface density of the HI and CNM are flatter as a function of radius than the surface distribution with the varying field shown in Figure 6. Intriguingly, while the radial extent of the CNM changes (from 12 to 10 kpc) the H and H 2 extent remains largely unchanged (at 14, and 8 kpc respectively). Clearly, the CNM is far more sensitive to the local radiation field in this regard compared to the other phases.
With a constant interstellar radiation field the CNM fraction , shown in the middle panel of Figure 14, decreases radially rather than remaining flat in the disc as was the case in Figure 6. The vertical dispersion of the CNM, however, remains similar with and without the varying field with the CNM having a dispersion of around 100 pc throughout the disc in both cases with no flaring.
To explore the origin of this behaviour we plot the mean total midplane pressure as a function of radius for the model with a constant field in Figure 15. The pressure now falls below a value of 10 3 K cm −3 at a galactocentric radius of 10 kpc as opposed to 12 kpc in the varying case. Both of these radii exactly correspond to where the CNM disappears from the gas phase, once again confirming the important role of midplane pressure in setting the CNM distribution.
Turbulence in the Outer Galaxy
However, pressure alone is not a full explanation for the distribution of the CNM in the gas phase. Turbulent velocity fluctuations are important for pushing gas out of equilibrium and creating local over-densities where the CNM can form. The vertical velocity dispersion observed in H in spiral galaxies is observed to be both The correspondence between the local mass-weighted pressure and the CNM surface density Σ . Middle: The surface density of the CNM vs the local star formation rate surface density, analogous to a Kennicutt-Schmidt relation for the CNM. The grey line shows a linear fit to the data, which has a similar exponent to that derived observationally for total column density. Right: The CNM fraction vs star formation rate surface density, which shows that higher CNM fractions correspond to greater star formation rate surface density. Figure 13. The galactocentric radius and mass weighted pressure of the points shown in Figure 12 for all points where > 0. The CNM distribution ends at ∼ 12 kpc, which is where the pressure falls below values of 10 3 K cm −3 . highly turbulent, and varying radially from values of roughly 12 to 15 kms −1 in the central parts, to ∼ 4 − 6 kms −1 in the outer parts e.g. (Dickey & Lockman 1990;Kamphuis & Sancisi 1993;Rownd et al. 1994;Meurer et al. 1996;de Blok & Walter 2006). Such velocity dispersions can be theoretically explained via a number of mechanisms that include supernova feedback (Dib et al. 2006), MRI instability (Piontek & Ostriker 2005, 2007, or accretion from the galactic halo (Klessen & Hennebelle 2010). However, none of these effects are operating in our model. In the outer galaxy there is no star formation and hence no supernova feedback. Our model is purely hydrodynamic, and it is isolated with no accretion of material from the intergalactic medium.
In Figure 16 we plot the mass averaged vertical velocity dispersion in radial bins for H and the CNM. For both phases the velocity dispersion is higher in the star-forming disc where we have supernova feedback, however, it remains substantial and of similar magnitude to observations in the outer galaxy. Therefore we must conclude that galactic dynamics alone are enough to match the observed velocity dispersion. Consequently, we are confident that we are not underestimating the CNM fraction in our models due to a lack of turbulence.
CNM and Spiral Arms
In our model, the CNM extends to midway in the outer disc but does not cover its full extent. However, the absorption studies of Dickey et al. (2022) find CNM in the entirety of the outer disc of the Milky Way, out to radii of 40 kpc. Kinematic distances are hard to estimate and absorption measurements give information along single sight lines. Still, it seems clear that in the Milky Way the CNM can exist beyond the radii predicted in our models.
One possibility is that spiral arm structure is able to concentrate gas in isolated pockets in the outer disc. For example, Strasser et al. (2007) observe CNM in the outer Milky Way, but specifically target spiral arms. Our fiducial model is of an isolated galaxy without strong spiral structure in the outer regions. Therefore to test the importance of spiral arms in forming the CNM we turn to the other model described by Tress et al. (2020), where a close encounter has triggered the formation of well-defined spiral arms to create an 'M51 analogue'. Figure 17 shows the gas distribution of the Interacting model at the same time as the isolated model analysis. Note that the interstellar radiation field is constant in this model so we also compare it to the model discussed in Section 4.2 rather than just our fiducial case. Figure A1 in Appendix A shows the column density maps for the Isolated model with a constant interstellar radiation field.
The atomic hydrogen distribution in the outer galaxy is now very different from the Isolated constant field case. Where before the distribution smoothly filled the surface of the outer disc, now the atomic hydrogen is concentrated in the extended spiral arms. The lower panels of Figure 17 show that both H 2 and the CNM now can be found in the outer galaxy. For instance, a particularly large clump of CNM can be seen in the lower left corner. Once again the CNM closely follows the molecular hydrogen, but with a higher column density over a slightly larger area than the Isolated case. Figure 18 shows the binned CNM fraction of the interacting galaxy. In the spiral arms, the CNM now persists to large radii albeit at a low level. Figure 19 shows the radially averaged profiles for these maps. When averaged over an annulus of the galaxy disc the surface density of the CNM is negligible in the outer galaxy and follows the same radially decreasing trend as Figure 6.
Finally, the lower panel of Figure 19 shows the mean dispersion in the z position, ( ), of both H and the CNM. In the inner galaxy, the CNM and H have increasing dispersion, suggesting that they are both well mixed vertically in the disc. However, beyond 6 kpc, where the dense gas is mainly in spiral arms, ( ) drops for the CNM but continues to rise for H . This further suggests that the CNM in the extreme outer galaxy is confined to distinct clumps of gas where the local pressure has been enhanced by spiral arms.
Certainly, the scenario presented here is only one possibility for how CNM might exist in the extreme reaches of the outer galaxy. This paper focuses on a pair of models, and is not a parameter study. It explores two extremes, one model with weak spiral structure, and another with strong, and consequently brackets a range of possible CNM distributions. Magnetic fields could add additional support to molecular cloud envelopes which could then contain a substantial amount of CNM as well as CO-dark molecular gas (e.g Smith et al. 2014). Another possibility is that cold gas could be accreted onto the outer galaxy from the circumgalactic medium such as in the Magellanic Stream of our own Galaxy (Fox et al. 2014). Alternatively, the discrepancies that we see between Milky Way observations and our model may be due to projection effects. Our analysis has been from an external perspective, in order to better quantify the variation with galactocentric radius and vertical height above the disc midplane within the Milky Way the CNM distribution has to be recovered from sight lines containing emission and absorption from gas at multiple different radial positions within the disc.
CONCLUSIONS
We have investigated the distribution of the CNM in new highresolution simulations of an isolated disc galaxy based on those presented by Tress et al. (2020) but with a radially varying interstellar radiation field. Our A simulations use a variety of custom modules allowing us to follow the chemical and star-forming evolution of the dense gas. These included time-dependent gas chemistry, gas self-shielding from the ambient UV field, sink particles to represent star formation, and supernova feedback. The resulting models allow the distribution of the CNM to be followed down to sub-pc scales in the dense gas. Most of our analysis focuses on an isolated galaxy model, however, we later compare it to a model with a constant interstellar radiation field, and another with identical mass which has been perturbed by a close encounter to generate strong spiral arms that extend to the outer galaxy.
Our conclusions are as follows: (i) No single value describes the CNM fraction (the fraction of the atomic gas with T<200 K relative to the total H ) everywhere in the galaxy. Values range from = 0 to 0.8 with high values being less likely.
(ii) The CNM is not uniformly distributed in the star-forming disc, but follows a clumpy distribution. A comparison of the column density maps shows that it overlaps more closely with the H 2 distribution than the total H . This is particularly true in spiral arms where the CNM fraction is clearly enhanced.
(iii) The CNM extends into the outer galaxy, beyond the starforming disc but does not extend as far out as the full H distribution. In our model, the radial surface density profiles of H , the CNM, and H 2 remain above a value of 0.1 M pc −2 out to radii of 8, 12, and 14 kpc respectively.
(iv) The CNM fraction remains approximately constant with galactocentric radii in agreement with measurements with H absorption. This effect is a consequence of the falling interstellar radiation field compensating for the decreasing column density at large radii. In our comparison model with a constant interstellar field the CNM fraction decreased with galactocentric radius.
(v) The vertical distribution of the CNM is clumpy in the starforming disc with a scale height of around ∼ 100 pc and no flaring. An analysis of the dispersion, ( ), suggests that the CNM is more localised in the -direction than the overall H distribution.
(vi) The star formation rate in the galaxy is well correlated with the total midplane pressure in logspace as suggested by Kim et al. (2013). We find that the CNM column density also correlates well with pressure as expected from models of the ISM such as Wolfire et al. (2003). We find no CNM in our isolated galaxy discs beyond the point where the total pressure / drops below 1000 K cm −3 . (vii) The 'CNM Kennicutt-Schmidt' relation (i.e. the star formation rate surface density plotted against the CNM column density) has a scaling of ∼ 1.5, which is more similar to the relation seen for total column density than the linear relation seen for molecular gas. This suggests that while the formation of the CNM is a precursor to star formation, it is not the predictor of gravitationally collapsing star-forming regions that the H 2 is. However, the CNM fraction does increase with star formation density, with the highest CNM fractions always associated with active star formation.
(viii) The vertical velocity dispersion of the H in our models is in good agreement with observations of H in nearby galaxies. In the star-forming disc a major contribution to this is supernova feedback, but even in the outer galaxy where there is no star formation galactic dynamics is sufficient to drive velocity dispersions between 5 − 10 km s −1 .
(ix) Spiral arm features in the outer galaxy may give rise to isolated clumps of CNM at extremely large radii, beyond where it is expected in our more symmetric isolated models. This may be an explanation for the CNM seen at extremely large galactic radii in the Milky Way by H absorption studies (Dickey et al. 2022). | 11,225.2 | 2023-01-19T00:00:00.000 | [
"Physics"
] |
Viscoelastic flow simulations in random porous media
We investigate creeping flow of a viscoelastic fluid through a three dimensional random porous medium using computational fluid dynamics. The simulations are performed using a finite volume methodology with a staggered grid. The no slip boundary condition on the fluid-solid interface is implemented using a second order finite volume immersed boundary (FVM-IBM) methodology [1] . The viscoelastic fluid is modeled using a FENE-P type model. The simulations reveal a transition from a laminar regime to a nonstationary regime with increasing viscoelasticity. We find an increased flow resistance with increase in Deborah number even though shear rheology is shear thinning nature of the fluid. By choosing a length scale based on the permeability of the porous media, a Deborah number can be defined, such that a universal curve for the flow transition is obtained. A study of the flow topology shows how in such disordered porous media shear, extensional and rotational contributions to the flow evolve with increased viscoelasticity. We correlate the flow topology with the dissipation function distribution across the porous domain, and find that most of the mechanical energy is dissipated in shear dominated regimes instead, even at high viscoelasticity. open
Introduction
The flow of complex fluids through porous media is a field of considerable interest due to its wide range of practical applications including enhanced oil recovery, blood flow, polymer processing, catalytic polymerization, bioprocessing, geology and many others [2][3][4] . The flow of Newtonian fluids though porous media is relatively well understood in the framework of Darcy's law [2] . Also, a significant effort has been made to understand flow through porous media of non-Newtonian fluids with a viscosity that depends on the instantaneous local shear-rate (inelastic non-Newtonian fluids, or quasi-Newtonian fluids), as reviewed by Chhabra et al. [5] and Savins [6] . However, flow through disordered porous media of viscoelastic fluids, i.e. non-Newtonian fluids displaying elasticity, is far from being understood [5,7,8] . This is due to the complex interplay between the nonlinear fluid rheology and the porous geometry. Several types of numerical frameworks have been used to model flow of non-Newtonian fluids through porous media, including extensions of Darcy's law [9] , capillary based models [10] , and direct numerical simulations based on computational fluid dynamics. Unfortunately, extensions of Darcy's law and capillary based models are found to be inadequate to * Corresponding author. accurately capture the complete physics of pore scale viscoelastic flow through porous media [11][12][13] .
Many numerical works focus on relatively simple geometries to uncover the essentials of non-Newtonian fluid flow through porous media [14][15][16][17] . Sometimes a full three-dimensional random porous medium is studied, which is already closer to a realistic pore geometry, but such studies are then usually limited to power-law fluids, which are the most commonly applied quasi-Newtonian fluids [11,[18][19][20] . For example, Morais et al. [18] applied direct numerical simulations to investigate the flow of power-law fluids through a disordered porous medium and found that pore geometry and fluid rheology are responsible for an increase in hydraulic conductance at moderate Reynolds numbers. Simulations of fully viscoelastic fluid flows are limited to two dimensional pore geometries [21][22][23][24][25] . It is now commonly agreed that including viscoelasticity is important: both numerically and experimentally, viscoelasticity is found to introduce profound effects and complex phenomena such as enhanced pressure drop and elastic instabilities (sometimes referred to as elastic turbulence) [5,[26][27][28][29][30][31][32][33][34][35] . So, although it is known that viscoelastic fluids behave more complex than inelastic non-Newtonian fluids, the current literature shows a lack of detailed simulations of fully three dimensional flows of viscoelastic fluids through random porous media.
In this paper, we report on a numerical study of the flow of viscoelastic fluids through three dimensional random porous media consisting of packed arrangements of monodispersed spherical particles using a combined finite volume immersed boundary (FVM-IBM) methodology. Four different porosities are studied for a range of low to high Deborah numbers (defined later). We measure in detail the viscoelastic fluid flow structure and stress development in the porous medium. We will show a transition from a symmetric Newtonian flow profile to an asymmetric flow configuration, and will relate it to a strong increase in pressure drop. An analysis of the flow topology will show how shear, extension and rotation dominated flow regimes change with increasing viscoelasticity for different porous structures. Finally, we will show how the distribution of mechanical energy dissipation in the porous medium changes with increasing viscoelasticity and correlate this with the flow topology. This analysis will help us to understand the interplay of pore structure and fluid rheology in three dimensional random porous media.
Constitutive equations
The fundamental equations for an isothermal incompressible viscoelastic flow are the equations of continuity and momentum, and a constitutive equation for the non-Newtonian stress components. The first two equations are as follows: Here u is the velocity vector, ρ is the fluid density (assumed to be constant) and p is the pressure. τ is the viscoelastic polymer stress tensor. The Newtonian solvent contribution is explicitly added to the stress and defined as 2 η s D , where the rate of strain is D = (∇u + (∇u ) T ) / 2 . The solvent viscosity η s is assumed to be constant. In this work the viscoelastic polymer stress is modeled through the constitutive FENE-P model, which is based on the finitely extensible non-linear elastic dumbbell for polymeric materials, as explained in detail by Bird et al. [36] . The equation is derived from a kinetic theory, where a polymer chain is represented as a dumbbell consisting of two beads connected by an entropic spring. Other basic rheological models, such as the Maxwell model and Oldroyd-B model, take the elastic force between the beads to be proportional to the separation between the beads. The main disadvantage of such models is that the dumbbell can be stretched indefinitely, leading to divergent behavior and associated numerical instabilities in strong extensional flows. These problems are prevented by using a finitely extensible spring. The basic form of the FENE-P constitutive equation is: In Eq. (3) the operator ∇ above a second-rank tensor represents the upper-convected time derivative, defined as In Eq. (3) the constant λ is the dominant (longest) relaxation time of the polymer, η p is the zero-shear rate polymer viscosity, tr ( τ) denotes the trace of the stress tensor, and L characterizes the maximum polymer extensibility. This parameter equals the maximum length of a FENE dumbbell normalized by its equilibrium length. When L 2 → ∞ the Oldroyd-B model is recovered. The total zero shear rate viscosity of the polymer solution is given as η = η s + η p .
The viscosity ratio, which for a real system depends on polymer concentration, is defined as β = η s /η.
We simulate an unsteady viscoelastic flow through a static array of randomly arranged monodisperse spheres, constituting a model porous medium, using computational fluid dynamics (CFD). The primitive variables used in the formulation of the model are velocity, pressure and polymer stress. The complete mass and momentum conservation equations are considered and discretized in space and time. A coupled finite volume -immersed boundary methodology [1] (FVM -IBM) with a Cartesian staggered grid is applied. In the FVM, the computational domain is divided into small control volumes V and the primitive variables are solved in the control volumes in an integral form over a time interval t .
The location of all the primitive variables in a 3D cell is indicated in Fig. 1 . The Cartesian velocity components u, v, w are located at the cell faces while pressure p and all components of the stress τ are located at the center of the cell.
We apply the discrete elastic viscous stress splitting scheme (DEVSS), originally proposed by Guénette and Fortin [37] , to introduce the viscoelastic stress terms in the Navier-Stokes equation because it stabilizes the momentum equation, which is especially im- Here η p ∇ 2 u n +1 and E n p = η p ∇ 2 u n are the extra variables we introduce to obtain numerical stability, where n indicates the time index. C represents the net convective momentum flux given by: Here the first order upwind scheme is used for the implicit evaluation of the convection term (called C f ). In the calculation of the convective term we have implemented a deferred correction method. The deferred correction contribution that is used to achieve second order spatial accuracy while maintaining stability is ( C n m − C n f ) and is treated explicitly. In this expression C m indicates the convective term evaluated by the total variation diminishing min-mod scheme. A second order central difference (CD) scheme is used for the discretization of diffusive terms. Eq. (5) is solved by a fractional step method, where the tentative velocity field in the first step is computed from: In Eq. (7) we need to solve a set of linear equations. Here it is important to note that the enforcement of a no-slip boundary condition at the surface of the immersed objects is handled at the level of the discretized momentum equations by extrapolating the velocity field along each Cartesian direction towards the body surface using a second order polynomial [1,38] . The main advantage of using the immersed boundary method is that it requires no conformal meshing near the fluid-solid interface whereas the method is computationally robust and cheap. The velocity at the new time step n + 1 is related to the tentative velocity is as follows: where δ p = p n +1 − p n is the pressure correction. As u n +1 should satisfy the equation of continuity, the pressure Poisson equation is calculated as: We use a robust and efficient block -incomplete Cholesky conjugate gradient (B-ICCG) algorithm [39,40] to solve the resulting sparse matrix for each velocity component in a parallel computational environment.
As the viscoelastic stress tensor components are coupled amongst themselves and with the momentum equation, the velocity at the new time level u n +1 is used to calculate the stress value accordingly. As a steady state criterion, the relative change of velocity and stress components between two subsequent time steps are computed in all the cells. If the magnitude of the relative change is less than 10 −4 the simulation is stopped. This part is also explained in detail in our methodology paper [1] .
Problem description
We employ our method to investigate the flow of viscoelastic fluid through a static array of randomly arranged spherical particles in a 3D periodic domain ( Fig. 2 ). The domain size is set by the solids volume fraction φ, the diameter of each particle d p and number of particles N p . To generate the random packing for φ ≤ 0.45, a standard hard sphere Monte-Carlo (MC) method [41] is used. The particles are placed initially in an ordered face centered cubic (FCC) configuration in a domain with periodic boundary conditions in all directions. Then each particle is moved randomly such that no overlap between particles occurs. However, such a MC method does not provide sufficiently random configurations in highly dense packings [42] . Thus, to generate random configurations at φ > 0.45, an event driven method combined with a particle swelling procedure is applied [43] . This ensures the particles are randomly distributed. The same approach was followed by Tang et al. , for Newtonian fluid simulations for a range of low to intermediate Reynolds numbers [44] . A detailed analysis of different packing generation and drag correlation study for Newtonian fluid flow through such a random monodispersed porous media has also been performed [44,45] .
In all simulations, the flow is driven by a constant body force exerted on the fluid in the x -direction, while maintaining periodic boundary conditions in all three directions. Note that this is slightly different from using periodic boundary conditions with a pressure jump condition. Simulations of random arrays are carried out with N p = 108 spheres arranged in different configurations. The The precise flow configuration through the random packings, i.e. the amount of rotational, shear and extensional flow, will depend on the level of viscoelasticity. To characterize the flow configuration, we introduce a flow topology parameter Q which is the second invariant of the normalized velocity gradient. This parameter is defined as where S 2 = 1 2 ( D : D ) and 2 = 1 2 ( : ) are invariants of the rate of strain tensor D , introduced before, and the rate of rotation tensor = 1 2 ( ∇ u T − ∇u ) . Values of Q = − 1, Q = 0, and Q = 1 correspond to pure rotational flow, pure shear flow and pure elongational flow, respectively. In this paper we will correlate the above flow topology parameter Q with the dissipation function in the flow domain. The dissipation function expresses the work performed by the viscoelastic and Newtonian stress per unit volume (in W/m 3 ), and is defined as: . e t = ( τ + 2 η s D ) : ∇u (11) By correlating the spatial distributions of Q and . e in the porous domains at different De numbers, we will be able to identify the flow configurations which lead to the predominant energy dissipation, and are therefore predominantly responsible for observed pressure drops.
To quantify the dissipation function in a dimensionless manner, we express the total work performed by Newtonian and viscoelastic stress per unit of volume as E t = . e t ηU 2 /R 2 C . We warn the reader that the word "dissipation function" may be a bit misleading because for a viscoelastic fluid not all of the work represented by this term is irreversibly turned into heat, but instead can be stored elastically and released at a later point in time, leading to a local negative value of the dissipation function. We will show later that this indeed is the case. These streamlines provide an idea about the complex flow pattern in these porous media. For solids volume fraction 0.3, the flow is rather homogeneous. However for solids volume fraction 0.5, the pore structure triggers more tortuous flow paths and more preferential flows.
Apparent relative viscosity
To quantify the viscoelastic effects we express the results in terms of the viscosity that appears in a generalized Darcy law for flow through porous media. The volume-averaged fluid velocity u in porous media is controlled by the pressure drop across the sample. According to Darcy's law, for a Newtonian fluid the relation between the average pressure gradient ( − dp dx ) and the average fluid velocity across the porous medium is: Here k is the permeability (units: m 2 ), which is related to the solids volume fraction (or porosity), pore size distribution and tortuosity of the porous medium, whereas η is the viscosity of the fluid. Eq. (12) presents an operational way of measuring the permeability k by flowing a Newtonian fluid of known viscosity through the porous medium. For a viscoelastic fluid, the viscosity is not a constant but generally depends on the flow conditions. However, if we assume that k is constant for a specific porous medium, we can still define an apparent viscosity by using a generalized Darcy law. Dividing the apparent viscosity by its low flow rate limit gives us insight in the effective flow-induced thinning or thickening of the fluid in the porous medium. In detail, the apparent relative viscosity η app of a viscoelastic fluid flowing with a volumetric flow rate q and pressure drop P through a porous medium is given by: The subscript VE indicates viscoelastic fluid at a specific flow rate or pressure drop, while the subscript N indicates its Newtonian counterpart in the low flow rate or low pressure drop limit. Fig. 4 depicts how the apparent relative viscosity changes with an increase in viscoelasticity for flow through flow configurations with different solids volume fractions. With increasing De number, where De is based on the sphere radius as the characteristic length scale, we initially observe a (relatively weak) flow-induced thinning. Then beyond a certain flow rate we observe a strong flow-induced thickening, which means a sharp increase in flow resistance. With increasing solids volume fraction (decreasing poros-ity), the onset of this increased flow resistance shifts to a lower De number. This shows that the increased fluid-solid interaction facilitates the onset of such a flow resistance. Experimental evidence of this increase in apparent relative viscosity was previously reported in literature [5] , especially for packed bed systems.
The pore porosity and pore geometry are very important for the increase in apparent relative viscosity, but this is not reflected in the De number based on the radius of the spheres. Therefore, we next try to use the square root of the permeability, √ k obtained from Newtonian flow simulations, as the characteristic length scale. This altered Deborah number is defined as D e k = λU √ k . Fig. 5 shows the apparent relative viscosity versus the altered De k for different solids volume fractions. We find a collapse of all data sets of Fig. 4 to a single curve for the entire range of De k numbers. This is remarkable considering the fact that, despite the different arrangement of pore structures for the different porosities, the resulting increase in flow resistance follows the same universal thickening behavior. However we should keep in mind that these results are strictly only valid for a FENE-P type of fluid with L 2 = 100 flowing through a random array of monodisperse spheres. The increase in flow resistance for flow of viscoelastic fluid through packed bed are also experimentally shown in the work of Chhabra et al. [5] and by W. Kozicki [46] based on a capillary hybrid model. Recently M. Minale, [47,48] showed a similar kind of scaling relationship for flow of second order fluids through porous media using a generalized Brinkman's equation.
Velocity and stress profiles
Next we investigate the velocity and stress profiles of viscoelastic fluid flow through the three dimensional porous medium, and analyze the interplay between the flow structures and fluid rheology. Although we have investigated different porosities, here we show the profiles for a solid fraction of φ= 0.5 for a range of De numbers. Fig. 6 shows snapshots of velocity contours (across a representative section of flow domain), colored by the normalized xvelocity, for different De numbers after the same time of simulation. The flow structure becomes non uniform with increasing De number. Especially at De of order 1 we see the onset of preferential flow paths (paths with higher velocity) in the flow domain. Fig. 7 illustrates the same effect with streamlines (colored with normalized vorticity), clearly showing the meandering flow paths through the pore space. Such preferential flow paths emerge due to differences in flow resistance through different parts of the porous medium, leading to asymmetric flow structures, as will be discussed in detail later.
The non-dimensional viscoelastic normal stress component along the flow direction τxx ηU/ R c , is shown in Fig. 8 for different De numbers. Such viscoelastic stresses are absent in a Newtonian fluid. We observe that the viscoelastic normal stress increases with increasing De number, and that the largest normal stresses are present near the walls of the sphere at locations which are shear dominated. This will also be analyzed in detail in the subsequent section.
Though the flow is in a non-inertial regime, at higher viscoelasticity the uniformity in the streamlines is found to be less, showing flow asymmetry, compared to its Newtonian counterpart.
To understand the effect of viscoelasticity on flow anisotropy we have analyzed the velocity probability distribution function (PDF) across the entire three dimensional flow domain (each porosity) for all the De numbers based on √ k . Fig. 9 shows the distribution of normalized velocities along the flow direction x. For low Reynolds numbers such as studied here one might expect the PDFs to collapse on each other. This is not the case. At low De number the PDFs of the x velocity component superimpose and are mostly positive. However, at increased De numbers the PDFs also increase for negative velocities. This shows that there is emergence of recirculation zones in the system. Though the driving force for the flow is along the positive x direction, the negative components in the PDFs give a measure for recirculation appearing in the system. Another interesting fact is that the width of the PDFs increases with increasing solids volume fraction, and show an appearance of a slower decaying tail for higher velocities. Fig. 10 shows the distribution of normalized velocities along the transverse flow direction y. In a non-inertial flow regime with random placement of the spheres we expect a symmetric distribution of y-velocities. Fig. 10 , shows that for low De number the PDFs of the transverse velocity components are completely symmetric. However with increasing De number the PDFs become slightly asymmetric, and the broadness of the PDFs also increases. The decay is more exponential in nature. The reader should keep in mind that the vertical scales of the PDF distributions are plotted on a log scale, so the tails represent very small probabilities. Thus, because a finite number of samples are used, the PDFs are not smooth in the tails.
These observations quantitatively validate our findings of the existence of preferential flow paths, observed from the streamlines. A possible mechanism might be that, at increased viscoelasticity, strong elastic effects come into play, leading to asymmetric curved streamlines and possibly causing elastic instabilities, as also shown in the work of Pakdel et al. [30] . To understand these effects further we have performed a detailed analysis of the flow topology and dissipation function, and will be presented in the next section.
Flow topology
This section focuses on the flow topology. As explained in Section 2.2 , the main idea is to investigate how the shear, extensional and rotational parts of the flow are distributed and develop in the three dimensional interstitial space. As explained Q = −1, Q = 0, and Q = + 1 correspond to pure rotational, shear and elongational flows, respectively. Fig. 11 shows the flow topology parameter distribution for a random porous medium with solid fraction 0.5. We observe that the flow becomes more shear dominated at higher De, while, perhaps surprisingly, the presence of extensional flow regions seems to decrease. To better quantify the effect of viscoelasticity on flow topology, in Fig. 12 we have plotted the histograms of flow topology parameter for different De numbers, for each solids volume fraction. The common feature observed from all histograms is that all flow structures are more shear dominated than extensional flow dominated. Although the extensional component ( Q = 1) increases slightly up to De = O(1) , at larger De it sharply decreases again and shear effects ( Q = 0) become more dominant. Note that the PDFs of normalized velocity ( Fig. 10 ) and flow streamlines ( Fig. 7 ) show a transition to flow asymmetry around the same De = O(1) . So the flow topology analysis shows that the increase in flow resistance at larger De, observed in a random porous medium of monodisperse spheres, may be caused by strong normal viscoelastic shear flow stresses, rather than their extensional counterparts. Fig. 13 compares the flow topology histograms at the same De k = 1.0 for four different solids volume fractions. This shows that at this relatively high De number, the overall shear contribution (Q = 0) also increases with increasing solids volume fraction (decreasing porosity), and subsequently the extensional contribution decreases. These results are also consistent with our recent observations of viscoelastic fluid flow in a model porous media [49] .
Dissipation function
Finally, we analyze the spatial distribution of the dissipation function, expressing the work done by the total stress (viscoelastic + Newtonian solvent) per unit of time and per unit of volume, as defined in Section 2.2 . This dissipation function can be both positive and negative, but we note that energy is always dissipated from the Newtonian solvent contribution. As an example, Fig. 14 shows the nondimensionalised spatial distribution of E t for a solid fraction of 0.5 and De number of 1.0, in a representative plane of the random porous media. We have clipped the dissipation function color scale to clearly show regions in the domain where energy is released (negative dissipation function) by the polymer solution. For De = O(1) and higher, energy is dissipated by the solvent, but also stored as elastic energy by the polymers, close to the particle surfaces, and released after a pore throat has ended, further away from the particle surfaces. This is consistent with the physical picture in which polymers in fast contraction flow are extended and therefore store energy in their entropic springs; this energy is subsequently released when the polymers can recoil to their upstretched state when the contraction flow has stopped.
In the previous section, we showed that the fraction of shear dominated regions increases significantly beyond De = O(1) . We now ask whether these shear dominated regions are also responsible for the observed increase in flow resistance. To answer this, in Fig. 15 we show what fraction of energy is dissipated in the flow domain with a particular value of flow topology Q (per unit Q ).
Only for the lowest solid fraction 0.3 (highest porosity of 0.7), we find significant energy dissipation in mixed shear and extensional flow (0 < Q < 1). For more closely packed domains this zone significantly reduces: the width of the histograms reduce and their peaks grow around Q = 0 with increasing solid fraction, while the contribution of extensional flow to the energy dissipation generally decreases with increasing De. This conclusively shows that at increased solid fraction and with increasing De, shear regions are predominantly responsible for the increase in flow resistance in the random porous media studied here.
Conclusion
We have employed a finite volume -immersed boundary methodology to study the flow of viscoelastic fluids through an array of randomly arranged equal-sized spheres representing a three dimensional disordered porous medium, for a range of solid fractions (or porosities). Irrespective of the solid fraction, we found a strong increase in flow resistance after a critical De number is reached. The increase in apparent relative viscosities measured for different solids volume fractions overlap among each other if the Deborah number is chosen with a length scale based on the permeability of the pore space (more precisely, D e k = λU/ √ k , with k the permeability of the medium for a Newtonian fluid. The PDFs of flow velocity suggest that with increasing viscoelasticity the flow profiles become more asymmetric, and increasingly preferential flow paths are found. A detailed study of the flow topology shows that for the porous media investigated in our study, shear flow becomes more important than extensional or rotational flow at higher De number. We have analyzed the distribution of the dissipation function across the flow domain and correlated it to the flow topology. These findings helped us conclude that the observed increase in flow resistance should be attributed to an increase in energy dissipation in shear flow dominated regions. More generally, simulations such as shown here help us to understand the complex interplay between the fluid rheology and pore structure in porous media. In our future work we will study flow through three dimensional realistic porous media which have a larger distribution in pore and throat sizes than studied here. | 6,477.8 | 2017-10-01T00:00:00.000 | [
"Physics",
"Environmental Science",
"Engineering"
] |
APPLICATION OF FUZZY SETS IN AN EXPERT SYSTEM FOR TECHNOLOGICAL PROCESS MANAGEMENT
Automation of technological processes requires a computer simulation of the process operator’s work. Such an approach is very successful in many cases, and the computer-based management is equally good or even better than manual management. However, there are also many examples of simulations which do not achieve the managerial level performed by man as an operator, no matter how complex these simulations are. Such problems occur, for instance, in processes whose outcome strongly depends on the operator’s experience.
INTRODUCTION
Automation of technological processes requires a computer simulation of the process operator's work.Such an approach is very successful in many cases, and the computer-based management is equally good or even better than manual management.However, there are also many examples of simulations which do not achieve the managerial level performed by man as an operator, no matter how complex these simulations are.Such problems occur, for instance, in processes whose outcome strongly depends on the operator's experience.
Application of expert systems (ES) is one of the ways how to exploit human experience when trying to describe the managerial issues mathematically during the automation (Kelemen & Linday, 1996;Popper & Kelemen, 1989;Bělohlávek & Novák, 2002).Table 1 presents advantages of process management based on ES as compared to the management run by an expert.
Expert ES
Experts who are off active duty for some time lose some of their skills.
ES can be switched off and switched on again without any loss of its managerial capabilities.
Transferring knowledge from an expert to another person is a long process.
Knowledge transfer means copying a computer program only.
It is difficult to record or preserve knowledge of an expert.
Documentation is held easily.
Expert's work is costly.
Expenses on ES operations are low.
There is a shortage of experts.Availability of ES is high.
On the other hand, complete elimination of man has many disadvantages.Some of the most important ones are shown in Table 2. f) The experts should be in agreement on major issues relating to the process.
The so-called decision rules constitute one of the key stages when creating an ES.These rules are based on experts' experience, and it is in this stage that difficulties arise because experts often use terminology which is very hard to formalize.An example of such terminology is the statement: "If the pressure is high, we will lower the temperature a little bit." How to express "high" and "a little bit"?
If top-class cooks described their "technology" of preparing meals, they would certainly use many similar terms, such as: a pinch of salt, a bit of oil, steam slightly, etc.It wouldn't be easy to simulate such instructions.An appropriate way to formalize this terminology may, however, be found in the fuzzy set concept (Novák, 1986;Lin, He, and Qian, 2010;Wang et al. 2010).
FUZZY SETS IN EXPERT SYSTEMS
When defining "a standard" set, the elements of the set are listed individually, or distinctive features of the elements are defined.This can be done in many ways.
One way is to use the so-called characteristic function . This function basically represents a logic statement, taking on the value being an element of the universe U (true statement), or Fuzzy sets approach generalizes the standard set approach: every element x of U is assigned a number from the interval 1 , 0 , a membership degree describing the weight with which x belongs to the set A. We shall denote such value Ax.The pair (x, Ax) forms a fuzzy set.Graphical depiction of the points (x, Ax) forms a membership function.
We depict the membership function as a trapezoid in this paper, according to Combining experience of an expert and fuzzy sets approach, which enables us to describe the expert's language, we may simulate the management of a process.
In this paper, we aim to show a simulation using the aforementioned mathematical tools, taking as an example the management of a process with one input and one output.Generalization of such procedure to the case with more than one input is straightforward.
Let us assume that an output Y is produced by an input X.Both X and Y fall into five categories: very small, small, standard, big, very big.
We define five levels to allow for a sufficient distinction of the levels of input and output.The levels represent fuzzy sets which must be defined, i.e. we must define when X is small, standard, etc. Definition of Y and X as fuzzy sets is given in Table 3.The aim of the regulation is to keep the output Y on a standard level, which is understood to be the values of Y within the range (0.9; 1.0).Based on the last measurement of Y, the manager performs an operative regulation X according to the following rules: Selecting X1 for Y1, selecting X2 for Y2,…, selecting X5 for Y5.
It is up to the operator to set the rules in a concrete simulated situation.Each line of Table 3 defines fuzzy sets "very small, small, standard, big, very big", and is a regulation rule at the same time.
PROCESS MANAGEMENT USING EXPERT SYSTEM
How does an automatic process management simulating expert's work run?The program receives a measured value of the output Y.It is now to react to this value with an operation.Man evaluates the value in such a manner that he or she assigns it a level from the scale of levels used; in our case the levels are ranging from very small to very big.After that, the input X is adjusted according to regulation rules.
Automatic activity, depicted in Figure 3, runs in the following steps:
Figure 3 -Graphical depiction of the calculation of intervention
The software Matlab was used for the calculations and drawings.Integrals used to calculate the coordinate of the center of gravity was obtained numerically using a modified rectangle method.
MATHEMATICAL REASONING BEHIND THE CALCULATION
Each of the ten regulation rules is an implication, where A and B are fuzzy sets.
An example is the verbal statement (Novák, 2002;Bělohlávek, Dvořák, Jedelský, and Novák 1998): If Y is standard (Y3), then keep X on the standard level (X3).This regulation rule is the third mentioned rule in Table 3.
Every implication between fuzzy sets can be expressed as a set AxB of pairs (x,y) with degree of membership of (x,y) in AxB given by AxB(x,y) = min (Ax,By).This tells us the weight with which the value x implies the value y.
The whole managerial algorithm can be formally expressed as
CONCLUSION
Automation of technological process management is a subject of intensive study.
There is a variety of approaches to this issue.These approaches usually try to resolve a concrete problem, such as calculation of inputs into the process, which will ensure economically and technologically optimized outputs, or maximal robustness of the process.
We showed in this article a simple possibility of setting up an expert system whose great advantage is the use of operator's experience.The description of operator's work is, however, often burdened with the problem of formalization of such work as it is described verbally.Fuzzy sets are a suitable tool for such purposes.Setting regulation rules based on operator's experience, using fuzzy sets, is simple, and calculation of regulatory intervention is fairly simple as well.Further generalization of the described procedure is given by a process management with more than one input.The principle of such management remains essentially the same in this case.
AKNOWLEDGEMENT
This paper was elaborated within the frame of the specific research project No. SP2011/85 which has been solved at the faculty of Metallurgy and Material Engineering, VŠB-TU Ostrava with the support of the Ministry of Education, Youth and Sports in the Czech Republic.
Figure 2 -
Figure 2 -Regulation rules based on table 3 a) the value of Y is portrayed in all the fuzzy sets (in fig.3Y = 0.768, as an example), b) a point of intersection of the corresponding vertical line with fuzzy set graphs is found, c) the point of intersection is transfered into the fuzzy set graph of the input X,d) the graph is reduced as depicted, e) the resulting graphs are unified, f) found is the center of gravity in the mass contained in the area of the resulting graph, g) the x-axis coordinate of the center of gravity is the recommended intervention (Xo = 1.1753 in Fig.3).
Table 2 -
Disadvantages of process management based on ES
Table 3 -
Definition of fuzzy sets and regulation rules Graphical depiction of all the five fuzzy sets for Y and X is in figure 2. | 1,988 | 2011-12-22T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
DFT Analysis of Hole Qubits Spin State in Germanium Thin Layer
Due to the presence of a strong spin–orbit interaction, hole qubits in germanium are increasingly being considered as candidates for quantum computing. These objects make it possible to create electrically controlled logic gates with the basic properties of scalability, a reasonable quantum error correction, and the necessary speed of operation. In this paper, using the methods of quantum-mechanical calculations and considering the non-collinear magnetic interactions, the quantum states of the system 2D structure of Ge in the presence of even and odd numbers of holes were investigated. The spatial localizations of hole states were calculated, favorable quantum states were revealed, and the magnetic structural characteristics of the system were analyzed.
Introduction
Currently, there is active study into solid-state semiconductor materials that can be used to develop quantum computers. Loss and DiVincenzo [1] proposed a quantum computation model based on the spins of electrons enclosed in quantum dots. Later, the so-called DiVincenzo criteria were formulated [2]; it imposes certain requirements on the qubit system. According to these criteria, the system must be scalable and well characterized. Before starting any type of computation, the qubit states must be initialized with a reasonable speed, which is essential for quantum error correction. The system must have a sufficiently long coherence time for the qubit states. Reading computational results should be carried out without affecting neighboring qubits and, as a result, the entire quantum computing system. The first successful application of spin qubits in semiconductors was realized in gallium arsenide (GaAs) [3]. However, one of the main drawbacks of Groups III to V elements is the spin decoherence caused by surrounding nuclear spins. At the same time, silicon is characterized by much less hyperfine interactions, since it consists mainly of 28 Si atoms with zero nuclear spin that can be isotopically purified. The authors of [4] achieved coherence times on the order of one second for isotopically purified silicon. At the same time, for the gate operation to be fast and fully electrically controllable, a spin-orbit interaction is required, which is absent in the electrons of silicon. Maurand R. et al. [5] showed that holes have the necessary spin-orbit interaction. In theoretical studies [6] it has also been shown that not only filled states of the conduction band but also vacant states of the valence band are promising for the realization of spin qubits. For these reasons, it makes more sense to use structures with holes to produce spin qubit systems. Compared with other semiconductors, germanium has a stronger and more controlled spin-orbit coupling. [7][8][9][10]. Theoretical studies show that, near the Gamma point, the states in the ceiling of the valence band of Ge are well-described by the Luttinger-Kohn Hamiltonian [11,12], whose eigenvalues can be grouped into states with heavy-hole (HH) and light-hole (LH), with values of spin projections on the direction of motion equal to ±3 /2 and ± /2, respectively. There are several candidate materials for the design of quantum calculators. Such materials can be planar Ge/SiGe heterostructures [13,14], germanium hut wires (HW) [15,16], and germanium core-shell nanowires (NW) [17,18]. HW germanium nanowires grown on silicon surfaces are of the greatest interest. It has been shown that, during the anisotropic growth of Ge on a Si(001) substrate, the quantum dot clusters are pulled along the [001] or [010] Si directions [19]. For the HW structure, the spin relaxation and phase mismatch times were measured, and a single qubit spin control operation was performed. It was also shown that Ge {105} facet formation plays a key role in determining the stability and homogeneity of nanowires [10,16,20,21].
However, all studies carried out so far are purely experimental in nature; there are no works that theoretically describe the behavior of qubits in the proposed germanium-based quantum systems. Thus, the literature data on germanium nanowires are mainly devoted to the description of their production technologies and experimental studies of the behavior of quantum dots in them. Our work is devoted to a detailed analysis of quantum states of the two-dimensional structure system of germanium in the presence of even and odd numbers of holes. The spatial localizations of hole states are investigated in detail, advantageous quantum states are identified, and the magnetic structural characteristics of the system are analyzed.
Computational Details
All calculations of the atomic structures, their total energy, and charge distribution were performed using the Quantum ESPRESSO software package [22]. The ultrasoft, fully relativistic form of the pseudopotential for germanium within the generalized gradient Perdew-Burke-Ernzerhof approximation in the spin-orbit interaction approximation was used. During the relaxation of the atomic structures of the unit cell and the germanium slab, all atoms were given complete freedom. For structural relaxation, we used the BFGS quasi-newton algorithm [22]. Special sets of k-points were used to sample the Brillouin zone. A 6 × 6 × 6 set was used for the Ge unit cell, and a 2 × 6 × 1 set was used for the ultrathin germanium layer. The cut-off energy of the plane waves was 680.28 eV. The values of the interatomic forces, after structural relaxation, did not exceed the value of 0.026 eV/Å. The atomic geometry and distribution of the charge and magnetic characteristics of the structures were analyzed using the Vesta software package [23].
Results
After the full atomic relaxation of the unit cell of bulk germanium with Fd-3m symmetry consisting of eight Ge atoms [24], we obtained the following cell parameters: a = b = c = 5.616191 Å (Figure 1a). The experimental data for this structure are a = b = c = 5.657820(5) Å [25]. The quasi-two-dimensional structure of germanium with direction (105) was constructed from a bulk cubic cell with the number of atomic layers equal to three and with 14 Ge atoms. The symmetry of the non-relaxed slab structure corresponded to the monoclinic P2 1 /c group with a unit cell basis equal to a = 14.31850 Å, b = 5.61620 Å and the angle β equal to 128.89 • (Figure 1b). After full relaxation, the 2D atomic layer remains in the monoclinic structure, but is transformed into the P2/m symmetry with cell parameters equal to a = 11.4742 Å, b = 4.3403 Å, β = 81.53 • (Figure 1c). Thus, we see that, during the relaxation of the atomic structure, the cell parameters shrink, with the parameter a decreasing by almost 20% and the parameter b decreasing by almost 23%. However, since there is a transformation of the angle β, there is a corresponding increase in the interatomic distance from 2.463 Å to 2.660-2.730 Å. A similar atomic structure, in the so-called J-germanium phase, was recently theoretically predicted in another paper [26].
Then, as a model of a hole qubit in a 2D layer of germanium, a hole was created (a lack of one electron). For this purpose, one electron was removed from the structure, resulting in the formation of a hole in its place. Figure 2 shows the hole state distribution for one hole in the 2D germanium structure with an isosurface level equal to 0.004 (marked in light green). These states were determined by the difference in the bulk charge densities. The neutral system charge density was subtracted from the charge density of the charged system (with a hole present). The hole localization states for systems with two and three holes were determined in the same way. The yellow color in the figure represents the states with increasing charge density; these localizations occur as a result of atomic shifts during the hole formation in the structure. This distribution, which characterizes the hole state, is located in the center of the structure. We see from the figure that similar mirror hole states are observed closer to the edge of the structure. These mirror states are located 11.4742 Å away from each other. Thus, such states arise due to the presence of the magnetic space group P2b2/m in the 2D germanium structure under consideration, which imposes conditions for the translation of magnetic states through the two cell parameters 2b along the Y axis {2 010 |0 1 0} [27,28]. Indeed, our calculated structure of germanium with P2/m symmetry and cell parameters equal to a = 11.4742 Å, b = 4.3403 Å, β = 81.53 • and shown on Figure 1c can be transformed into a triclinic structure with an elementary basis of seven atoms and parameters equal to a = 4.3403 Å, b = 5.7371 Å, α = 81.53 • . Then, the distance of 11.4742 Å to which the magnetic state is translated corresponds exactly to twice the value of the parameter b. The difference between the complete magnetic states for the ±1 spin directions is 0.72 µeV. Thus, we see that, for a single-hole qubit in germanium, the quantum state |1>with a spin down direction s = −1 is the most energetically advantageous state compared to the |0>state (i.e., with a spin up direction s = +1). The magnetization corresponding to the |1>states is more localized on the surface atoms of the 2D germanium layer and the |0>states in the center of the structure (Figure 3). Then, as a model of a hole qubit in a 2D layer of germanium, a hole was created (a lack of one electron). For this purpose, one electron was removed from the structure, resulting in the formation of a hole in its place. Figure 2 shows the hole state distribution for one hole in the 2D germanium structure with an isosurface level equal to 0.004 (marked in light green). These states were determined by the difference in the bulk charge densities. The neutral system charge density was subtracted from the charge density of the charged system (with a hole present). The hole localization states for systems with two and three holes were determined in the same way. The yellow color in the figure represents the states with increasing charge density; these localizations occur as a result of atomic shifts during the hole formation in the structure. This distribution, which characterizes the hole state, is located in the center of the structure. We see from the figure that similar mirror hole states are observed closer to the edge of the structure. These mirror states are located 11.4742 Å away from each other. Thus, such states arise due to the presence of the magnetic space group P2b2/m in the 2D germanium structure under The formation of two holes in the germanium structure leads to the mutual destruction of their magnetic components such that the total magnetization will be zero. Figure 4 shows the localization of hole states for two holes in the 2D germanium structure with an isosurface level equal to 0.004. Thus, an even number of holes does not lead to total magnetization; this result is consistent with the experimental results [20]. With an even number of holes, the most favorable state is the ground singlet state, i.e., with different spin directions. The formation of three holes in the germanium structure, i.e., an odd number of them, results in the quantum state |0˃, with an upward spin direction s = +1, becoming the most advantageous state at 0.14 μeV compared to the |1˃ state. This is because, with three holes, the first two occupy a favorable ground singlet state in one orbital, and the third must occupy another higher orbital [20]. As a result, there is Coulomb repulsion between the two holes occupying the singlet state and the third [20,29,30], so a system with three holes is easier to transfer between the |0˃ and |1˃ states compared to a system with a single hole. This would require only 0.14 μeV. Figure 5 shows the localization of hole states for three holes in the germanium structure. The formation of two holes in the germanium structure leads to the mutual destruction of their magnetic components such that the total magnetization will be zero. Figure 4 shows the localization of hole states for two holes in the 2D germanium structure with an isosurface level equal to 0.004. Thus, an even number of holes does not lead to total magnetization; this result is consistent with the experimental results [20]. With an even number of holes, the most favorable state is the ground singlet state, i.e., with different spin directions. the state |0˃, with a spin up s = +1. The directions of the downward spins s = −1 are indicated by red arrows. The formation of two holes in the germanium structure leads to the mutual destruction of their magnetic components such that the total magnetization will be zero. Figure 4 shows the localization of hole states for two holes in the 2D germanium structure with an isosurface level equal to 0.004. Thus, an even number of holes does not lead to total magnetization; this result is consistent with the experimental results [20]. With an even number of holes, the most favorable state is the ground singlet state, i.e., with different spin directions. The formation of three holes in the germanium structure, i.e., an odd number of them, results in the quantum state |0˃, with an upward spin direction s = +1, becoming the most advantageous state at 0.14 μeV compared to the |1˃ state. This is because, with three holes, the first two occupy a favorable ground singlet state in one orbital, and the third must occupy another higher orbital [20]. As a result, there is Coulomb repulsion between the two holes occupying the singlet state and the third [20,29,30], so a system with three holes is easier to transfer between the |0˃ and |1˃ states compared to a system with a single hole. This would require only 0.14 μeV. Figure 5 shows the localization of hole states for three holes in the germanium structure. The formation of three holes in the germanium structure, i.e., an odd number of them, results in the quantum state |0>, with an upward spin direction s = +1, becoming the most advantageous state at 0.14 µeV compared to the |1>state. This is because, with three holes, the first two occupy a favorable ground singlet state in one orbital, and the third must occupy another higher orbital [20]. As a result, there is Coulomb repulsion between the two holes occupying the singlet state and the third [20,29,30], so a system with three holes is easier to transfer between the |0>and |1>states compared to a system with a single hole. This would require only 0.14 µeV. Figure 5 shows the localization of hole states for three holes in the germanium structure.
Conclusions
The atomic and electronic structure of a 2D germanium layer with a crystallographic direction (105) was studied in detail. Spatial localizations of the hole qubit states were investigated, advantageous quantum states were identified, and an analysis of the magnetic structural characteristics of the system was given. It is shown that, for a nanoscale germanium layer with a thickness of 0.27 nm, its atomic structure is transformed into a structure with a monoclinic spatial group with P2/m symmetry. This transformation is accompanied by an increase in the interatomic distance. We analyzed the quantum states of the hole qubits in the system in the presence of an even and odd number of holes. The results show that, for a single hole, the advantageous quantum state at 0.72 μeV is the |1˃
Conclusions
The atomic and electronic structure of a 2D germanium layer with a crystallographic direction (105) was studied in detail. Spatial localizations of the hole qubit states were investigated, advantageous quantum states were identified, and an analysis of the magnetic structural characteristics of the system was given. It is shown that, for a nanoscale germanium layer with a thickness of 0.27 nm, its atomic structure is transformed into a structure with a monoclinic spatial group with P2/m symmetry. This transformation is accompanied by an increase in the interatomic distance. We analyzed the quantum states of the hole Nanomaterials 2022, 12, 2244 5 of 6 qubits in the system in the presence of an even and odd number of holes. The results show that, for a single hole, the advantageous quantum state at 0.72 µeV is the |1>state, with a spin down direction s = −1, compared to the |0>state, with a spin up direction s = +1. An even number of holes in the system does not result in full magnetization. The formation of three holes causes the quantum state |0>, with a spin up direction s = +1, to become the most advantageous state at 0.14 µeV compared to the state |1>, so a system with three holes is easier to transfer between the quantum states |0>and |1>compared to a system with one hole. The paper shows that hole qubits are characterized by the condition of the translation of their magnetic states through two cell parameters 2b along the Y axis. We are confident that our theoretical results will be relevant and promising for use by technologists and experimentalists in the design and study of quantum computing systems based on hole qubits.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest:
The authors declare no conflict of interest. | 4,004 | 2022-06-29T00:00:00.000 | [
"Physics"
] |
Defining Nature-Based Solutions Within the Blue Economy: The Example of Aquaculture
The concepts of Nature-based Solutions (NbS) and the Blue Economy (BE) are two prominent sustainability frameworks at the forefront of policy dialogues. However, investment within the BE has been slowed by the lack of a sufficiently robust operational definition. This lack of definition reduces investor confidence and impacts adoption by policy makers and practitioners. By considering the overlap between the two sustainability frameworks it is possible to identify specific sectors and activities within the BE that also fit the operationalised criteria for NbS. Undertaking this process for one sector of the BE (aquaculture) has provided evidence that aquaculture activities, if planned and operated within the criteria, would qualify as NbS and as such may unlock financing for the provision of ecosystem services.
INTRODUCTION
As our understanding of the impacts that human economic activity has on the environment increases, a number of frameworks have been developed and utilised to contextualise, measure and ultimately manage these impacts. In a western context this thinking can be traced back to the 18th Century and earlier (Du Pisani, 2006), with links made more recently between economic and ecological equilibrium and sustainable economic growth (Meadows et al., 1972) and with key milestones such as Rio 1992 and the millennium ecosystem assessment (Reid et al., 2005). One such framework is that of Nature-based Solutions (NbS) which signposted a shift in thinking from conserving nature for its own sake to conserving for peoples sake (Seddon et al., 2021) and sits within a wider stable of concepts that can be termed the green economy (Loiseau et al., 2016). This concept of NbS and its subsequent framework has grown out of a term first used, but not defined, in a world bank report of 2008 titled "Biodiversity, Climate Change, and Adaptation: Nature-based Solutions from the World Bank Portfolio" (MacKinnon et al., 2008). The report detailed strategies for the management and adaptation to climate change and biodiversity loss that were based in the concepts of ecosystem management and conservation. The term was further defined in 2009 by explicitly making the link between biodiversity and ecosystem management and human economic development through forests, fisheries and agriculture (MacKinnon and Hickey, 2009); also the role these can play in carbon sequestration. In the same year an IUCN position paper to COP15 endorsed NbS for climate change to "harness the potential of healthy and well managed ecosystems to build resilience and reduce the vulnerability of people to the impacts of climate change" (Parker, 2009). Subsequently the term was adopted by the IUCN as one of three areas in its 2012-2016 work program (Cohen-Shacham et al., 2016). The IUCN adopted the definition of NbS as "actions to protect, sustainably manage, and restore natural or modified ecosystems, that address societal challenges effectively and adaptively, simultaneously providing human well-being and biodiversity benefits." Importantly this definition builds on the concept in three important aspects. Firstly, it expands the concept of NbS from ecosystem management and conservation to include not just natural ecosystems but also to modified ecosystems. This links to the 2009 framing of the term that included forests, agriculture and fisheries, but significantly expands the scope into the framework of social ecological systems (SES; Berkes et al., 2000). Secondly it also expands the desired outcomes from focussing on climate change resilience to a broader category of societal need and from poverty alleviation to human well-being. Thirdly it explicitly recognises the NbS' needs to deliver both ecological and social benefits. The IUCN definition was further developed and effectively operationalised through the development of a typology that categorised NbS into three main groupings along two orthogonal axes (Eggermont et al., 2015): firstly the degree of engineering of the ecosystem that is undertaken and the second axis being the number of ecosystem services (ES) and stakeholder groups targeted. This ordination reveals three main typologies, of which Type 3 which is the furthest along both of these axes and is described as follows: "consists of managing ecosystems in very intrusive ways or even creating new ecosystems (e.g., artificial ecosystems with new assemblages of organism)." The authors recognise that in some cases this typology moves the definition beyond the IUCN definition and caveats that for all cases, NbS should contribute to preserving biodiversity and restoring ecosystems while delivering a range of ES.
POLICY OPTIONS AND IMPLICATIONS
This development of the definition and steps toward the operationalisation of the NbS concept clearly demonstrates a strong link or nexus between NbS, ES (Almenar et al., 2021) and the context and sector specific challenges that are being addressed. However, the concept is still open and loosely defined (Randone et al., 2017), and this can represent a barrier to practitioners and policy makers in the adoption and application of the concept (Maes and Jacobs, 2017;Almenar et al., 2021). But the use of ES and an understanding of the specific context allows the anchoring of NbS within existing SES frameworks (Shah et al., 2020). These frameworks are crucial where there is a need to take the expanding ES literature and to use it as the catalyst for an action situation (Rodríguez-Robayo and Merino-Perez, 2017; sensu Ostrom) that leads to the benefits ascribed to the NbS approach. This anchoring allows both the recognition that any NbS is embedded in a complex web of interactions, subsystems, and internal variables (Ostrom, 2009) and that practitioners and policy makers can use existing SES frameworks to operationalise the concept. This operationalisation is key for the marine environment where inadequate frameworks and taxonomies have been highlighted as a principle barrier to sustainable financing and investment in Sustainable Ocean Economy or the Blue Economy (BE; Sumaila et al., 2020).
The concept of the BE as it relates to the sustainable development of coastal countries came out of the preparatory discussions for the Rio+20 conferences which were held in Rio de Janeiro in 2012. The concept of greening the BE was introduced at a meeting in Paris titled "A Blueprint for Ocean and Coastal Sustainability" (IOC/UNESCO et al., 2011). This meeting recognised the use of ocean space and resources as an essential component of global economic growth and prosperity and used the term "Blue-Green" Economy to refer to the transition to a human-ocean centred relationship of living with and from the oceans in a sustainable way. This meeting and the subsequent report highlighted the role of several key industries in this development. At the core of the BE concept is the de-coupling of socio-economic development from environmental degradation. It breaks the mould of the business as usual "brown" development model where the oceans have been perceived as a relatively unregulated source of resources and a waste dumping location with costs, financial and environmental, generally externalised from economic calculation. A robust definition of the "Blue Economy" is proving challenging to establish, with different stakeholders using it to cover wide ranging aspects of sustainable development in the context of utilising marine resources: fish, energy, mineral, transport, tourism, biotechnology, and others. However, the concept carries much political weight having been widely adopted by a large number of Governments and Non-Governmental Organisations including the United Nations, the World Bank (Bank et al., 2017), the Asian Development Bank, and WWF. Along with this recognition there have been a number of attempts at valuing the "economy" of our seas. An OECD report in 2017 valued the Ocean Economy at $1.5 trillion as of 2010 (Cervigni and Scandizzo, 2017), while other estimates of Gross Marine Product place the value at $2.5T (Hoegh-Guldberg, 2015). Despite this value there is a recognised lack of investment within the sector and a lack of capital flowing toward the BE (REUTERS, 2020), and there are also recognised information gaps in level of financial investment in the BE The Organisation for Economic Co-operation and Development (OECD) estimated that official development assistance leveraged a total of $2.96 billion of private finance between 2013 and 2017 (Whisnant and Vandeweerd, 2019;OECD, 2020). In addition the OECD recognised the lack of marine focus in the rapidly increasing Environmental, Social and Governance (ESG) investing, and cites the variety of standards and methodologies as challenging investor confidence in this sector. One of the barriers to investment in the BE, cited by 39% of asset managers, was a lack of definition of the BE (Suisse, 2020). The United Nations Environment Program Finance Initiative through the Sustainable Blue Economy Finance Initiative has recognised the TABLE 1 | The eight criteria as developed by the IUCN global standard for Nature-based Solutions (IUCN, 2020) and evidence to demonstrate how specific, planned aquaculture activities meet the criteria.
Criteria
Evidence that aquaculture operations can meet the criteria NbS effectively address societal challenges Although aquaculture is principally concerned with the production of food (food security) it can be designed to meet other societal challenges such as climate change mitigation (Sondak et al., 2017), adaption (Galappaththi et al., 2020), economic and social development (Ponte et al., 2014), and the mitigation of environmental and biodiversity degradation (Lacoste et al., 2020).
Design of NbS is informed by scale Aquaculture development is a commonly licenced activity with that licence pertaining to a single spatial location, although the granting of licences often draws on considerations at a larger geographical scale (Hishamunda et al., 2014). As such aquaculture developments are often considered within a large seascape planning context. However, extending this to an integrated framework of marine spatial planning can identify further opportunities to increase food production, to reduce environmental harm (Lester et al., 2018) and to allow integration with other sectors (Abhinav et al., 2020).
NbS result in a net gain to biodiversity and ecosystem integrity
The nature of the environmental impact of aquaculture is dependent on a range of factors including the scale, the type of organism cultured and the receiving environment (Ahmed and Thompson, 2019). However, aquaculture operations can be specifically designed to deliver conservation goals (Froehlich et al., 2017) and to provide an increase in regulating ecosystem services such as nutrient cycling and carbon storage in low trophic aquaculture such as shellfish or seaweed production. Furthermore aquaculture sites can, if designed appropriately, provide habitats and positively impact on biodiversity locally and at a regional scale (Gentry et al., 2020).
NbS are economically viable
Although growth rates of the aquaculture industry have slowed since the 1980s and 1990s, they remain high (4.5% between 2011 and 2018) and aquaculture now has a farm gate value of $263B (FAO, 2020). Within these figures there are significant differences between the economic viability, and this variability is context specific. However, there is a general negative relationship between sustainability and unit value of aquaculture (Neori and Nobre, 2012) reflecting possible trade-offs between sustainability and profitability for any aquaculture NbS. Furthermore, business models in aquaculture are often weighted toward economic rather than social development (Kaminski et al., 2020).
NbS are based on inclusive, transparent, and empowering governance processes At a global level aquaculture exhibits a wide range of governance structures and processes. Much of this governance is based at the farm level and has a focus on environmental regulation (Bush et al., 2019). There is, however, significant opportunity within the aquaculture industry and its associated governance to address issues of human rights and gender equality (Gopal et al., 2020;Graham and D'Andrea, 2021), community well-being (Campbell et al., 2021), and stakeholder intervention (Krause et al., 2020).
NbS equitably balance trade-offs between achievement of their primary goal(s) and the continued provision of multiple benefits
Constraints within complex systems makes trade-offs inevitable and the equitability of those trade-offs is dependent on how decisions are made (Sowell, 2019). Within aquaculture, trade-offs can be broadly distributed amongst the multiple pillars of sustainability (Valenti et al., 2018) and a number of tools have been developed to manage these trade-offs (Gimpel et al., 2018;Bohnes et al., 2019).
NbS are managed adaptively, based on evidence Although there is no systemic review of the philosophical basis to aquaculture governance and management, there is a body of evidence to demonstrate the application and value of adaptive management to the aquaculture sector (Fang et al., 2016;Craig, 2019) and clear frameworks for their future application (Doremus et al., 2011).
NbS are sustainable and mainstreamed within an appropriate jurisdictional context As previously discussed, aquaculture development already exists within comprehensive governance structures, policy frameworks and regulatory environments. However, their inclusion within the NbS framework would require an additional transparency in terms of design, implementation and lessons learnt to allow the effective scaling and persistence of the solution. This type of activity has already been implemented within the aquaculture sector, connected to technology or diffusion of more sustainable practices (Lebel et al., 2016;Alexander and Hughes, 2017).
Frontiers in Marine Science | www.frontiersin.org importance of finance to the development of the sustainable BE. It has highlighted a wide range of public and private investment initiatives within the BE, and within this current investment landscape, financial services within the Seafood sector (including aquaculture) were prominent, although climate change and ecosystem service loss dominated the identified non-financial risks to investment. Conversely climate resilience and capturing positive environmental impact were the dominant non-financial considerations for financial institutions to engage with the sustainable BE (UNEP, 2021). Defining Nature based Solutions within the BE is a mechanism both to boost the investment within this sector of the economy and to mitigate some of the risks to investment that relate to a lack of definition. There are strong linkages between the development of the concepts that underlie both the BE and the Nature Based Solutions framework and the development of the definitions, typologies and standards for NbS are transferable to the marine environment (IUCN, 2020) and its application in the marine environment has been particularly prevalent in the area of flood management and coastal defence (Inácio et al., 2020). The IUCN global standard for NBS lays out eight criteria based on the premise that an ecosystem-based approach can be used to manage functioning ecosystems and their natural resources as well as deliver solutions to societal needs, increasing both human wellbeing and biodiversity benefits. Using this premise, it is clear that some sectors of the BE using the European Union typology (European Commission, 2020) fall outside the framework of NbS, such as off shore renewables and shipping, whilst sectors such as aquaculture, fishing and coastal tourism have the capacity to fit within the NbS framework. Whilst not all activities within these sectors will fit the criteria for NbS, those that do have the potential to benefit from increased investment from private and public finance, and to receive better recognition of their benefit from policy makers and regulators.
ACTIONABLE RECOMMENDATIONS: AQUACULTURE AS AN EXAMPLE OF NbS WITHIN THE BLUE ECONOMY
Aquaculture is widely accepted as one of the pillars of the BE (Wenhai et al., 2019) and as an industry has a farmgate value of $263.6 billion (FAO, 2020). Aquaculture can be broadly divided into two categories, those that rely on the addition of feed to the production system (fed aquaculture) and those which rely on the wider ecosystem to provide nutrients and energy (extractive aquaculture; Troell et al., 2009). In general, extractive aquaculture concerns the cultivation of low trophic species such as photoautotrophs and filter feeding animals. The impacts of these two types of aquaculture on ES are generally accepted to be functionally different (Alleway et al., 2019). The definitions of NbS are based around taking an ecosystem based approach to manage a functioning ecosystem, where the managed activities enhance ES by delivering benefits that both enhance human wellbeing and reduce ecosystem degradation. The concept of taking an ecosystem approach to aquaculture (EAA) management is already well developed (Soto et al., 2007).
It strives to balance diverse societal objectives by taking account of the knowledge and uncertainties of biotic, abiotic and human components of ecosystems (including their interactions, flows and processes) and applying an integrated approach within ecologically and operationally meaningful boundaries (FAO, 2005). This EAA has gained recognition, been widely adopted and has been linked to the development of aquaculture within the BE (Brugère et al., 2019). The first two principals of the EAA clearly link to the NbS approach, through ES and human well-being.
1. Aquaculture should be developed in the context of ecosystem functions and services (including biodiversity) with no degradation of these beyond their resilience capacity, 2. Aquaculture should improve human well-being and equity for all relevant stakeholders, 3. Aquaculture should be developed within the context of (and integrated with) other relevant sectors.
Not all aquaculture activity takes an EEA and not all aquaculture activity can offer NbS to the seven societal challenges the IUCN identify. The principles of NbS are wider than those of EAA, but an Ecosystem Approach to resource management does cover five of the eight principles of the NbS (Cohen-Shacham et al., 2019). In the IUCN global standard (IUCN, 2020) it is clear that to be classified a NbS, the activity must deliver human wellbeing, biodiversity and climate benefits and conform to the eight criteria. This framework allows the assessment of aquaculture activities (or any other sector) against these criteria. These criteria would allow for the mindful design and development of future aquaculture activities to deliver NbS (Table 1) inline with Eggermont's typology of "managing ecosystems in very intrusive ways or even creating new ecosystems (e.g., artificial ecosystems with new assemblages of organism)." When considering the application of these criteria to a highly diverse sector such as aquaculture it becomes evident that some sectors and practices within the industry are easier than others to demonstrate their potential future design as NbS. A systemic attempt to synthesise the way in which aquaculture can augment ES to deliver solutions to societal challenges while augmenting those ES and protecting ecosystem functioning showed that provisioning and regulating (using the Millennium Ecosystem Assessment Framework) were the most commonly addressed services (Weitzman, 2019). In conjunction with this there is clear understanding that the culture of low trophic species (such as bivalve and seaweed culture) have a good potential to augment regulating ES (Alleway et al., 2019) and as such may be the most appropriate to align with the concept of NbS.
CONCLUSION
The application of the NbS framework to low trophic and integrated aquaculture may allow the unlocking of investment linked to the enhancement of ES such as nutrient reduction. In general those aquaculture operations that are considered to extract inorganic nutrients (seaweed aquaculture) and organic nutrients (bivalve aquaculture; Troell et al., 2009) are more easily aligned with the criteria than fed species. But the conscious integration of multiple species groups (both extractive and fed) to meet multiple challenges may also meet the criteria, when balanced so as to provide a net benefit to human well-being and ecosystem functions (Chopin et al., 2012).
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication.
FUNDING
This work was funded through the AquaVitae project under the European Union's Research and Innovation Programme, Grant No. 818173.
ACKNOWLEDGMENTS
The author would like to thank the reviewers for their valuable insight and comments, and also Prof. Paul Tett for his time in discussing these ideas. | 4,447.8 | 2021-07-29T00:00:00.000 | [
"Economics"
] |
A sequential EMT-MET mechanism drives the differentiation of human embryonic stem cells towards hepatocytes
Reprogramming has been shown to involve EMT–MET; however, its role in cell differentiation is unclear. We report here that in vitro differentiation of hESCs to hepatic lineage undergoes a sequential EMT–MET with an obligatory intermediate mesenchymal phase. Gene expression analysis reveals that Activin A-induced formation of definitive endoderm (DE) accompanies a synchronous EMT mediated by autocrine TGFβ signalling followed by a MET process. Pharmacological inhibition of TGFβ signalling blocks the EMT as well as DE formation. We then identify SNAI1 as the key EMT transcriptional factor required for the specification of DE. Genetic ablation of SNAI1 in hESCs does not affect the maintenance of pluripotency or neural differentiation, but completely disrupts the formation of DE. These results reveal a critical mesenchymal phase during the acquisition of DE, highlighting a role for sequential EMT–METs in both differentiation and reprogramming.
R eprogramming of somatic cells into pluripotent ones with defined factors not only provides a new way to generate functional cells for regenerative medicine, but also establishes a new paradigm for cell fate decisions. For the latter, a cell at a terminally differentiated state can be restored back to pluripotency under well-defined conditions fully observable through molecular and cellular tools. Indeed, the reprogramming process has been analysed in great detail to reveal novel insights into the mechanism of cell fate changes [1][2][3] . Of particular interest is the acquisition of epithelial characteristics from mesenchymal mouse embryonic fibroblasts (MEFs) commonly employed as starting cells in reprogramming experiments 4 . Termed the mesenchymal to epithelial transition (MET), we and others have described the MET as marking the earliest cellular change upon the simultaneous transduction of reprogramming factors POU5F1 (OCT4), SOX2, KLF4 and MYC or OSKM into MEFs 5,6 . However, when delivered sequentially as OK þ M þ S, they initiate a sequential epithelial to mesenchymal transition (EMT)-MET process that drives reprogramming more efficiently than the simultaneous approach 7 , suggesting that the switching between mesenchymal and epithelial fates underlies the reprogramming process, that is, the acquisition of pluripotency. We then speculated that such a sequential EMT-MET process might underlie cell fate decisions in other situations such as differentiation, generally viewed as the reversal of reprogramming with the loss of pluripotency. Herein, we report that a similar epithelial-mesenchymal-epithelial transition drives the differentiation of human embryonic stem cells (hESCs) towards hepatocytes. A synchronous EMT occurs during the formation of DE and DE cells are in a typical mesenchymal-like status, while further differentiation of DE to hepatocyte-like cells is accompanied by a MET. We reveal that the intermediate mesenchymal DE cells is induced by an autocrine TGFb signalling and mediated by SNAI1. On the other hand, the neural differentiation of hESCs is not dependent on TGFb signalling or SNAI1. Thus, EMT-related transcriptional factor such as SNAI1 participates in lineage-specific cell fate changes.
Results
A sequential EMT-MET connects hESCs to hepatocytes. Human embryonic stem cells robustly express E-cadherin (CDH1) and are epithelial cells in a pluripotent state. Conversely, hepatocytes are also epithelial cells, but are somatic and fully differentiated. Naively it seems possible that epithelial hESCs could move directly to hepatocytes with the gradual loss of pluripotency and gain of hepatic characteristics, without the necessity to pass through a mesenchymal state. To map the cell fate changes along the differentiation pathway between hESCs and hepatocytes, we adopted a serum-free, chemically defined protocol of hepatic differentiation of hESCs based on the stepwise addition of Activin A, FGF4/BMP2, HGF/KGF and then Oncostatin M 8,9 . As shown in Fig. 1a, there were distinct stages marked by POU5F1/NANOG (pluripotency), SOX17/FOXA2 (definitive endoderm, DE), HNF4A/AFP (hepatoblast) and albumin (ALB)/TTR (hepatocyte-like cell) at days 0, 3, 13 and 21, respectively. The cells at day 21 showed typical metabolic activities of hepatocytes such as ALB secretion, synthesis of glycogen or urea, uptake of low-density lipoprotein (LDL) and so on ( Supplementary Fig. 1), indicating the effectiveness of the protocol. We characterized the molecular signature of this process first by performing RNA-seq analysis of a time course from days 0 to 21, and compared it with the RNA-seq data of primary human hepatocytes and liver [10][11][12] . Principal component (PC) analysis indicated that the cells transitioned from pluripotent stem cell to DE then to hepatocyte-like state (Fig. 1b), based on the gene loading for the respective PCs ( Supplementary Fig. 2). In addition, we noticed that PC2 and PC3 contain many EMT-related genes that were dynamically regulated during the hepatic differentiation of hESCs ( Fig. 1c; Supplementary Fig. 2). We next performed real-time RT-polymerase chain reaction (PCR) analysis which confirmed the induction of mesenchymal genes at the DE and hepatoblast stages of hepatic differentiation (Fig. 1d). For example, the mesenchymal gene CDH2, VIM and SNAI1 were all upregulated from D3 to 13 then they were gradually downregulated in the more mature hepatocyte-like cells at D21. The epithelial marker CDH1 showed the opposite expression pattern. Mesenchymal transcriptional factors such as SNAI2, ZEB1 and KLF8 were also dynamically regulated.
We further analysed the expression of these genes at protein level by immunofluorescence staining. As shown in Fig. 2a, CDH1 was clearly downregulated at day 3 and it was re-established in some of the ALB positive cells at day 21. CDH2 was induced at day 3 and maintained up to day 13 in the alpha fetoprotein (AFP) positive hepatoblasts, however, it was clearly downregulated in the ALB positive hepatocyte-like cells (Fig. 2a). Other mesenchymal-related changes such as the expression of VIM and the formation of stress fibres were also present in DE cells. In addition, we measured the migration activity of those cells by scratch assay (Fig. 2b,c) and found that hESCs showed very limited migration (75±17 mm/24 h) while cells at day 3 (395 ± 15 mm/24 h) or 13 (505 ± 14 mm/24 h) were highly motile, further indicating the mesenchymal-like statues of those cells. Cells of D21 had a migration activity of 274 ± 20 mm/ 24 h (Fig. 2c) which was significantly slower than that of D13 (P ¼ 1.14781E À 15), suggestive of the cells losing their mesenchymal phenotype. Together, these results indicate that a sequential epithelial-mesenchymal-epithelial transition underlies the differentiation of hESCs to hepatocyte-like cells.
hESCs begin differentiation with a near synchronous EMT. The bulk RNA-seq and quantitative PCR (qPCR) analyses revealed a global epithelial-mesenchymal-epithelial transition during the hepatic differentiation of hESCs, but they cannot reveal heterogeneity in the differentiation process. To resolve this process further at the single-cell level, we performed single-cell qPCR with 46 selected genes (Supplementary Table 3) and two control genes (GAPDH, ACTB) on 501 cells ( Fig. 3a; Supplementary Data 1) and constructed relational networks of the gene expression of all cells (Fig. 3b). Surprisingly, the pluripotency genes POU5F1 and NANOG were downregulated but not extinguished by day 3. On the other hand, the DE markers SOX17 and GATA6 were robustly upregulated by day 3 and GATA4 and HNF4A are induced slightly later. Upregulation of CDH2 and downregulation of CDH1 were also clearly seen at day 3. Furthermore, cells at day 3 showed substantial homogeneity in their response to Activin A and acquisition of a DE character (R 2 ¼ 0.74 compared with the bulk RNA-seq, indicating that the bulk RNA-seq recapitulates the single cell data), suggesting a near synchronous traversal through the EMT. This is remarkable, perhaps reflecting either homogeneous starting hESCs or the synchronization power of Activin A.
We then took advantage of the single-cell qPCR analysis, which when organized into relational maps suggests the temporal order of DE acquisition and EMT, to analyse the possible relationship of EMT and DE formation. When we clustered the single-cell qPCR correlated gene expression for days 0-3 (Fig. 3c), we detected two major clusters centred on either CDH1 or CDH2. The CDH1 cluster contained POU5F1 and SOX2, that is, the pluripotent state, while the CDH2 cluster contained the major DE marker genes FOXA2, GATA6, GATA4 and SOX17. To map the precise timing in more detail we generated scatter plots of individual cell expression of CDH1 or CDH2 against the DE marker genes SOX17 and GATA6 (Fig. 3d). We showed that SOX17 and GATA6 expression was mutually exclusive with CDH1 expression at day 3, whilst conversely SOX17 and GATA6 were both coincident with CDH2. These results revealed the occurrence of EMT in all DE cells at the single cell resolution and implied a possible role of EMT during the conversion of hESCs to DE cell fate.
TGFb induced by Activin A drives EMT and DE formation.
While a similar EMT was first noticed during the Activin A-induced DE formation of hESCs in a low-serum differentiation media 13 , it is currently unclear what signal induces this EMT and whether or not the EMT plays a functional role during the acquisition of a DE cell fate. Our serum-free and chemically defined differentiation protocol provided an ideal model to investigate these questions. Activin A is not a strong inducer of an EMT by itself so it might function through the induction of other EMT-inducing signals. TGFb1 is one of the best characterized inducer of EMT and, although no exogenous TGFb1 protein was used in our differentiation system, we noticed from our RNA-seq data that TGFb1 mRNA was strongly induced by Activin A. We confirmed the strong induction of TGFb1 gene after Activin A treatment by qRT-PCR (Fig. 4a). Indeed, we detected physiological levels of TGFb1 protein (1.5 ng ml À 1 ) in the conditioned medium at day 3 by enzyme-linked immunosorbent assay (ELISA) (Fig. 4b).
To test whether the endogenously produced TGFb1 is involved in the EMT and DE formation, we blocked the TGFb signalling by a small chemical inhibitor Repsox 14 and determined its effect on EMT and DE GATA6 formation. As shown in Fig. 4c, the induction of mesenchymal markers such as VIM, CDH2 and SNAI1 were blocked by Repsox. Furthermore, other EMT changes such as the downregulation of CDH1 and formation of F-Actin stress fibre were also inhibited by Repsox (Fig. 4d). We then measured the expression of mesoendoderm markers such as EOMES, GSC, SOX17, FOXA2 and LGR5 and found that they all failed to be induced in the presence of Repsox, and these cells also die in hepatic specification media (BMP2 þ FGF4) thus failing to be induced into hepatoblasts. These results indicated that the Activin A-induced autocrine/paracrine of TGFb is not only essential for the EMT but also plays an indispensable role in the acquisition of DE cell fate. We then determined whether TGFb is sufficient to induce DE formation in the differentiation media in the absence of Activin A. As shown in Fig. 4d, we found that TGFb was able to activate part of the EMT programme (downregulation of CDH1, formation of F-Actin) but it failed to downregulate the pluripotency gene (POU5F1) and was not able to stimulate the expression of SOX17. EMT and DE induction, we first focused on the SNAI and ZEB family mesenchymal transcriptional factors upregulated at day 3 (Fig. 1). We knocked-out SNAI1/2 and ZEB1/2 by CRISPR/ Cas9-mediated gene targeting then examined their effects on DE formation. The strategy of gene targeting is outlined in Supplementary Fig. 3 and we successfully obtained multiple biallelically targeted cell lines for all of them. The resulting mutant cell lines appeared morphologically normal and the expression of pluripotency genes such as POU5F1 and NANOG was maintained (Fig. 5a), indicating these EMT factors are dispensable for the maintenance of pluripotency. We then tested the ability of Activin A to induce DE differentiation in these cell lines. SNAI1 knockout severely inhibited the induction of SOX17 and FOXA2 (Fig. 5b). SNAI2 or ZEB1 deficiency moderately reduced the induction of SOX17 while ZEB2 knockout curiously stimulated a small induction of SOX17, suggesting some kind of compensation effect (Fig. 5b). We confirmed this observation by immunofluorescence staining (Fig. 5c), which showed that SNAI1 mutants failed to induce the expression of SOX17. Similar to the Repsox-treated wild-type (WT) cells, SNAI1 mutants did not survive in the hepatic specification media, thus could not be induced into hepatoblasts. On the other hand, SNAI2 or ZEB1/2 mutants were able to be further induced into hepatocyte-like cells at efficiency similar to that in WT hESCs, indicating that ultimately SNAI2, ZEB1 and ZEB2 are dispensable for hepatocyte differentiation and that SNAI1 is required for EMT and DE induction. We further characterized the function of SNAI1 in the Activin A-induced EMT and DE formation. As expected, SNAI1 deficiency prevented the downregulation of CDH1 both at the mRNA and protein levels by Activin A (Fig. 6a,b). In contrast, CDH2 induction by Activin A was not affected by SNAI1 (Figs 5c and 6a). Additional EMT-related changes such as the induction of VIM and the formation of F-Actin also occurred in the Activin A-treated cells in both WT and SNAI1 deficient cells (Fig. 6a,b). We then measured the migration capacity of these cells and showed that SNAI1 deficient cells migrate poorly compared to the WT cells (Fig. 6c,d). Thus, SNAI1 is responsible for part of the Activin A-induced EMT programme (downregulation of CDH1, cell migration). On the other hand, the SNAI1 deficient cells failed completely to activate mesoendoderm/DE lineage markers such as GSC, EOMES, SOX17 and LGR5 compared to the WT cells (Fig. 6a,b), suggesting that SNAI1 is required for Activin A-induced DE formation.
Our observation that inhibition of TGFb signalling blocked the Activin A-induced activation of SNAI1 as well as DE formation ( Fig. 4c), together with the results from SNAI1 knockout cells, suggested SNAI1 as the major mediator of the TGFb induced DE formation. So we tested whether overexpression of SNAI1 could rescue the defect in DE formation by Repsox treatment. We established a hESC cell line stably expressing SNAI1 and showed that it maintained the expression of pluripotency gene such as NANOG and an epithelial state when maintained in mTeSR1 media (Fig. 7a). We then treated this cell line with Activin A in the presence of Repsox and found that overexpression of SNAI1 promoted the degradation of CDH1 in the presence of Repsox, however, it was not sufficient to restore the expression of DE markers such as SOX17 (Fig. 7b). These results suggest that SNAI1 is required but not sufficient to mediate Activin A-induced DE formation.
SNAI1 is not required for neural differentiation of hESCs. We next tested if SNAI1 is involved in the exit of pluripotency or differentiation of hESCs to lineages other than DE. To this end, we determined the neural differentiation capacity of SNAI1 deficient hESCs. Neural differentiation was induced by dual inhibition of SMAD signalling (2i, SB431542 þ compound C). Neural lineage markers such as SOX2, PAX6 and OTX2 were induced at similar efficiency in both the WT and SNAI1 deficient cells at day 5 (Fig. 8a). Meanwhile, pluripotency markers such as POU5F1 and NANOG were clearly downregulated during the same period (Fig. 8a). These results indicated that SNAI1 deficient cells exit the pluripotency state normally and differentiate into a neural lineage. We examined the EMT-related changes in this process and found that in WT cells CDH1 was downregulated while stress fibre was not obviously formed during neural differentiation (Fig. 8b). Downregulation of CDH1 was less efficient in SNAI1 mutants which is consistent with the function of SNAI1 in suppressing CDH1during EMT, and this abnormality seemed not to affect the induction of neurectoderm marker gene PAX6 (Fig. 8b). These results indicated that SNAI1 is not required for neural differentiation of hESCs.
Discussion
The transitions between mesenchymal and epithelial states, that is, both the EMT and its reversed process, the MET, have been observed in many processes during normal development as well as in diseased conditions such as metastasis [15][16][17] . We previously reported that an MET initiates cell fate change during somatic reprogramming of MEF 5 and a sequential EMT-MET process is beneficial for optimal somatic reprogramming 7 . In this study, we presented evidence here that a sequential EMT-MET process drives the hepatic differentiation of hESCs. Together, our studies indicate that an EMT/MET underlies cell fate conversions in both reprogramming and in differentiation along an endoderm cell fate. It is possible that EMT/MET induces cellular reorganization and changes a cell's responsiveness to the extracellular stimuli and/or rewires epigenetic regulatory circuits thus modifies the outcome of a given stimuli. It is interesting that although the first EMT is synchronous and the later MET is asynchronous. We speculate that the lack of a synchronous MET in later stages of in vitro differentiation may be related to the lack of full maturity in these induced hepatocyte-like cells. Our discoveries about the functions of EMT/MET in cell fate changes in vitro might have in vivo implications as well. It is well established that mesendoderm cells undergo an EMT and migration during gastrulation. It is generally thought that migration brings a cell to its destiny where it receives signals required for differentiation. It remains an open question whether or not EMT plays additional roles in cell fate conversions in vivo. In a chemically defined in vitro differentiation system such as the one we used here, cells are accessible to the induction signal without the need to migrate. Thus, it is an ideal model to investigate the involvement and mechanism of EMT/MET-related changes during lineage-specific differentiations. Surprisingly, we identified an autocrine TGFb signalling process during the Activin A-induced EMT and DE formation. We further showed that SNAI1, a master regulator of EMT, is required for the suppression of CDH1 but is dispensable for the induction of CDH2 and F-actin. It remains to be determined whether other EMT-related transcriptional factor, either alone or together with SNAI1, regulates the expression of CDH2 and formation of stress fibres during DE formation. We further showed that the differentiation of DE into hepatocyte involves a MET. By choice, we did not pursue the mechanism associated with this MET process. In that regard, we have noticed that HNF4A is highly induced in the hepatoblast stage and it could be a critical regulator of the subsequent MET. HNF4A is required for the formation of hepatic epithelium and liver architecture in mouse development 18 and is capable of promoting MET and hepatic maturation of hepatoblasts derived from hPSCs 19 . Furthermore, it is an essential factor used in direct conversion of both mouse and human fibroblasts to hepatocytes 20,21 . Together, we propose that HNF4A, together with the downregulation of EMT factor such as SNAI1, plays a critical role in the MET phase of hepatic differentiation.
The in vivo function of SNAI1 has been studied in mouse. Snai1 knockout mice are embryonic lethal with abnormal mesoderm formation, presumably due to the failure to downregulate CDH1 and initiate EMT during gastrulation 22 . The Snai1 knockout mouse embryonic stem cells (mESCs) have normal self-renewal but show reduced mesoderm and enhanced neuroectoderm commitment in an embryoid body differentiation system 23 . In the same assay, the induction of SOX17 appears unaffected in the mutant mouse cell lines. However, conversely to the hESC date, ectopic expression of SNAI1 in mESCs induces an EMT (upregulation of CDH2 and downregulation of CDH1) and promotes mesoderm differentiation 24 . These results demonstrate that SNAI1 is not required for self-renewal of mESCs and it favors the mesoderm differentiation during germ layer commitment. Our loss-of-function and gain-of-function studies reveal that in hESCs SNAI1 is dispensable for the maintenance of pluripotency, which is consistent with the mouse data. We found that SNAI1 has lineage specific function during the differentiation of hESCs: it does not affect neural differentiation of hESCs but markedly blocked the Activin A-induced expression of SOX17.
The role of SNAI1 in SOX17 induction is thus different between mESCs and hESCs. This discrepancy could be attributed to the different methods of differentiation (embryoid body vs. targeted differentiation), differences in the nature of mESCs and hESCs (naïve vs. primed), or it might indicate a species-specific function of SNAI1. Further investigations in a chemically defined system will be very helpful to definitively reveal the function of SNAI1 in endodermal differentiation of mESCs.
Methods
Maintenance and targeted differentiation of hESCs. Undifferentiated human H1 ES cells (WiCell) were maintained in monolayer culture on Matrigel (BD Biosciences, 354277) in mTeSR1 medium (Stemcell Technologies, 05850) at 37°C with 5% CO 2 . Cells were manually passaged at 1:4 to 1:6 split ratios every 3 to 5 days. For hepatic differentiation, we established a serum-free protocol based on previously described protocols with minor modifications 8,9 . Briefly, cells were cultured for 3 days in RPMI/B27 medium (Insulin minus, Gibco, A18956-01) supplemented with 100 ng ml À 1 Activin A (Peprotech, 120-14E), followed by 4 days with 20 ng ml À 1 BMP2 (Peprotech, 120-02) and 30 ng ml À 1 FGF- Table 3). Relative expression was calculated as described before 29 except that a Ct value of 25 was used for low expressed genes. Relational network plots (mdsquish) were implemented as part of glbase 28 . Briefly, the normalized Euclidean distance between all cells was measured for singular value decomposed PCs 1, 2, 3, and a network was then constructed using a threshold of 0.92 for weak links (dotted lines) and 0.99 for strong links (solid lines) with a maximum of the 50 best scoring edges per node, the network was then laid out using graphviz 'neato' layout. Node sizes are 2 (relative expression) .
Cell migration assay. Scratch assay was used to determine the migration activity of H1-derived cells. Briefly, cells in a confluent monolayer were scratched with a needle to form a cell-free zone into which cells at the edges of the wound can migrate. The denuded area was imaged to measure the boundary of the wound at pre-migration. Images of cell movement were captured at regular intervals within a 24 h period for data analysis.
ELISA. The protein level of TGFb in the Activin A stimulated H1 cell culture media was determined with an ELISA Kit (R&D Systems, DB100B) as described by the manufacturer.
Gene targeting. The strategy for gene targeting in H1 cells was outlined in Supplementary Fig. 3a. sgRNAs to human SNAI1, SNAI2, ZEB1 and ZEB2 were designed using the software provided by Dr Feng Zhang's lab at crispr.mit.edu and cloned into the pX330 vector. Donor constructs were prepared by inserting gene specific left and right arms into the donor vector with either a puromysin-resistance (pGK-Puro) or a neomycin-resistance (pGK-Neo) cassette flanked by loxP sites. For gene targeting, H1 cells were digested with Accutase (Sigma) for 8 min at 37°C then electroporated with linearized donor DNA containing pGK-Puro (2 mg), pGK-Neo (2 mg) and pX330-sgRNA (4 mg). Cells were then plated onto Matrigel (Corning)-coated six-well plates in the presence of Y-27632 (10 mM, Sigma, Y0503) for 1 day. Positive colonies were selected by puromycin (0.5 ng ml À 1 , Gibco, A1113803) plus G418 (50 ng ml À 1 , Gibco, 10131035) in mTeSR1. Colonies were transferred to Matrigel-coated 24-well plates to grow up for several days then passaged using 0.5 mM EDTA onto Matrigel-coated 12-well plates. Biallele SNAI1 knockout colonies were first screened by PCR on genomic DNA with the primer F2 and R2 (Supplementary Fig. 3b) and further confirmed by real-time RT-PCR analysis ( Supplementary Fig. 3c). The sequences of sgRNA and PCR primers used are listed in Supplementary Table 4.
Periodic acid Schiff staining. Periodic acid Schiff (PAS) staining was performed using the PAS staining kit (Polysciences, 24200-1) according to the manufacturer's instructions.
LDL uptake. Cells were washed with PBS and incubated in culture medium containing 4 mg ml À 1 LDL (Invitrogen, L23380) for 30 min at 37°C. Cells were then fixed with 4% formaldehyde and stained with DAPI (Sigma, D9542). LDL uptake by cells was examined under a fluorescence microscope.
Indocyanine green uptake and release. Indocyanine green (ICG) (Sigma, 1340009) was dissolved in DMSO at 5 mg ml À 1 . When cells were ready, ICG was diluted freshly in culture medium to 1 mg ml À 1 . Diluted ICG was added to cultured cells for 30 min at 37°C. After washing with PBS, the cellular uptake of ICG was examined under a microscope. Then cells were refilled with the culture medium and incubated for 6 h and the release of cellular ICG was examined.
ALB and urea secretion. ALB and urea secretion of differentiated hepatocytes were analysed using fully automatic chemistry analyzer (SHIMADZU CL-8000).
Statistical analyses. In general, experiments were done from three biological repeats when possible. Data were presented as mean ± s.d. calculated using Microsoft Excel. Statistical differences were determined by unpaired two-tailed Student's t-test. P-valueso0.05 were considered statistically significant. No statistical method was used to pre-determine sample size. No samples were excluded for any analysis.
Data availability. The RNA-Seq data discussed in this publication have been deposited with GEO under the accession number GSE70741. All other relevant data are available from the corresponding authors upon request. | 5,790.4 | 2017-05-03T00:00:00.000 | [
"Biology"
] |
Performance Investigation of a Two Link Manipulator Stability in the Presence of Torque Disturbance using Optimal Sliding Mode Controller
In this paper, a two-link manipulator system stability performance is designed and analyzed using Optimal control technique. The manipulator system is highly nonlinear and unstable. The system is modelled using Lagrangian equation and linearized in upward unstable position. The closed loop system is designed using optimal sliding mode controller. The system is compared with a known PID controller with an impulse applied and disturbance torques and a promising results has been obtained.
Introduction
In robotics, a manipulator is a system used to manipulate items without any help by the operator. The stubbornness was originally for behavior with radioactive or biohazardous materials, using robotic arms, or they were used in inaccessible places. In more recent development they have been used in diverse pedestal of application including welding automation, robotic surgery and in space. It is an arm-like system that consists of a design of segments, usually sliding or jointed called cross-slides, which nelson and protocol aim with a amounts of degree of freedom. In industrial ergonomics a manipulator is a lift-assist contrivance used to help laborer lift, maneuver and position articles in tendency that are too heavy, too hot, too large or otherwise too difficult for a single worker to manually handle. As opposed to simply vertical lift assists (cranes, hoists, etc.) manipulators have the expertise to sweeps in to tight spaces and remove work pieces. A good form would be banishment large stamped parts from a press and arranging them in a rack or similar dunnage. In welding, a rods boom manipulator is used to reprieve ejection rates, reduce human inaccuracies and other costs in a manufacturing setting. Additionally, manipulator tooling gives the lift assist the aptitude to pitch, roll, or spin the parts for appropriate placement. Figure 1 shows the physical model of a two-link manipulator, with each joint equipped with a motor for providing input torque disturbance, an encoder is used to measure the joint position. The 2 objective of the of this system design is to make the joint positions 12 and to be stable to the vertical position with the presence of T1 and T2 disturbance inputs, which are specified by the vertical system design of the manipulator..
Linearizing the System
In this paper, the system linearizing method is done for vertical unstable equilibrium by taking.
The parameters of the system are shown in Table 1 The value of the matrix S, N and W becomes Rearranging Equation (9) So the state space representation of the system becomes Here, the input term is not present in the objective function (16), and the constraints are that the system is on the intersection on m sliding hyperplanes. Furthermore, the matrix G is not specified a priori and will come out as a solution to the problem.
PID Controller.
A proportional-integral-derivative controller (PID) is a mechanism employing feedback that is widely used in industrial control organization and a variety of other implementation requiring continuously modulated control. A PID controller continuously calculates an inaccuracies values as the unlikeness between a desired set point (SP) and a measured process variable (PV) and applies a adjustment based on proportional, integral, and derivative terms (denoted P, I, and D respectively). In practical terms it automatically applies accurate and responsive change to a control function. The controller's PID algorithm restores the measured output to the desired input with minimal deferment and overshoot by increasing the ability of the system. The distinguishing feature of the PID controller is the skill to use the three control terms of proportional, integral and derivative pertinence on the controller output to apply accurate and optimal control.
The proportional, integral, and derivative terms are summed to calculate the output of the PID controller. Defining u(t) as the controller output, the final term of the PID controller is:
Tuning
The part of these effects is achieved by loop tuning to whip the optimal control function. The tuning constants are denoted as "K" and must be derived for each control application, as they depend on the response wood of the complete loop external to the controller. These are dependent on the behavior of the final control element.
Result and Discussion 4.1 Comparison of the Two Link Manipulator with Optimal Sliding Mode and PID Controllers for an Impulse Input Torque 1
The simulation results of 12 and for the comparison of the two link manipulator with optimal sliding mode and PID controllers for an impulse input torque 1 of 0.1 Nm are shown in Figure 2 and The simulation result of the impulse response of theta 1 and theta 2 to torque 1 disturbance shows that the manipulator with optimal sliding mode controller minimizes the overshoot and the settling time better than the PID controller.
Comparison of the Two Link Manipulator with Optimal Sliding Mode and PID Controllers for an Impulse Input Torque 2
The simulation results of 12 and for the comparison of the two link manipulator with optimal sliding mode and PID controllers for an impulse input torque 2 of 0.1 Nm are shown in Figure 4 and Figure 5 respectively. The simulation result of the impulse response of theta 1 and theta 2 to torque 2 disturbance shows that the manipulator with optimal sliding mode controller minimizes the overshoot and the settling time better than the PID controller.
Conclusion
In this paper, stability control of a two link manipulator has been done using optimal sliding mode and proportional integral derivative controllers. The stability performance of the system has been analyzed using comparison simulation between the proposed controllers. The comparison simulation of the two link manipulator with optimal sliding mode and proportional integral derivative controllers has been done for an impulse input of the applied and disturbance torques and the simulation results prove the effectiveness of the proposed optimal sliding mode controller in minimizing the overshoot with a moderate settling time better than the proportional integral derivative controller. | 1,398.6 | 2020-10-09T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
GKZ-hypergeometric systems for Feynman integrals
Tai-Fu Feng, Chao-Hsi Chang, Jian-Bin Chen , Hai-Bin Zhang Department of Physics, Hebei University, Baoding, 071002, China Hebei Key Laboratory of High-precision Computation and Application of Quantum Field Theory, Baoding, 071002, China Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Science, Beijing, 100190, China CCAST (World Laboratory), P.O.Box 8730, Beijing, 100190, China School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China Department of Physics, Taiyuan University of Technology, Taiyuan, 030024, China and Department of Physics, Chongqing University, Chongqing, 401331, China Abstract Basing on the systems of linear partial differential equations derived from Mellin-Barnes representations and Miller’s transformation, we obtain GKZ-hypergeometric systems of one-loop self energy, one-loop triangle, two-loop vacuum, and two-loop sunset diagrams, respectively. The codimension of derived GKZ-hypergeometric system equals the number of independent dimensionless ratios among the external momentum squared and virtual mass squared. Taking GKZ-hypergeometric systems of one-loop self energy, massless one-loop triangle, and two-loop vacuum diagrams as examples, we present in detail how to perform triangulation and how to construct canonical series solutions in the corresponding convergent regions. The series solutions constructed for these hypergeometric systems recover the well known results in literature.
I. INTRODUCTION
A central target for particle physics now is to test the standard model (SM) and to search for new physics (NP) beyond the SM [1][2][3] after the discovery of the Higgs boson [4,5].
In order to predict the electroweak observables precisely with dimensional regularization [6,7], one should evaluate the Feynman integrals exactly in the time-space dimension D = 4−2ε at first. Nevertheless each method presented in Ref. [8] has its blemishes since it can only be applied to the Feynman diagrams with special topologies and kinematic invariants.
It was proposed long ago to consider Feynman integrals as the generalized hypergeometric functions [9]. Certainly Feynman integrals satisfy indeed the systems of holonomic linear partial differential equations (PDEs) [10] whose singularities are determined by the Landau singularities. Recently the author of Ref. [11] shows that the D−module of a Feynman diagram is isomorphic to Gel'fand-Kapranov-Zelevinsky (GKZ) D−module [12][13][14][15][16].
Some Feynman integrals are already expressed as the hypergeometric series in the corresponding parameter space. In Ref. [17] the massless C 0 function is presented as the linear combination of the fourth kind of Appell function F 4 whose arguments are the dimensionless ratios among the external momentum squared, and is simplified further as the linear combination of the Gauss function 2 F 1 through the quadratic transformation [18] in the literature [19]. With some special assumptions on the virtual masses, the analytic expressions of the scalar integral C 0 are given by the multiple hypergeometric functions in Ref. [20] through the Mellin-Barnes representations. Taking the massless C 0 function as an example, the author of Ref [21] presents an algorithm to evaluate the scalar integrals of one-loop vertex-type Feynman diagrams. Certainly, some analytic results of the C 0 function can also be extracted from the expressions for the scalar integrals of one-loop massive N−point Feynman diagrams [22,23]. The Feynman parametrization and Mellin-Barnes contour integrals can be applied to evaluate Feynman integrals of ladder diagrams with three or four external lines [24]. In addition, the literature [25] also provides a geometrical interpretation of the analytic expressions of the scalar integrals from one-loop N−point Feynman diagrams.
Using the recurrence relations respecting the time-space dimension, the papers [26,27] formulate one-loop two-point function B 0 as the linear combination of the Gauss function 2 F 1 , one-loop three-point function C 0 with arbitrary external momenta and virtual masses as the linear combination of the Appell function F 1 with two arguments, and one-loop four-point function D 0 with arbitrary external momenta and virtual masses as the linear combination of the Lauricella-Saran function F s with three arguments, respectively. The expression for the scalar integral C 0 is convenient for analytic continuation and numerical evaluation because continuation of the Appell functions has been analyzed thoroughly. Nevertheless, how to perform continuation of the Lauricella-Saran function F s outside its convergent domain is still a challenge. Expressing the relevant Feynman integral as a linear combination of generalized hypergeometric functions in dimension regularization, the authors of Ref. [28] analyze the Laurent expansion of these hypergeometric functions around D = 4. The differential-reduction algorithm to evaluate those hypergeometric functions can be found in Refs. [29][30][31][32][33]. A hypergeometric system of linear PDEs is given through the corresponding Mellin-Barnes representation [34], where the system of linear PDEs is satisfied by the Feynman integral in the whole parameter space of the independent variables. Some irreducible master integrals for sunset and bubble Feynman diagrams with generic values of masses and external momenta are explicitly evaluated via the Mellin-Barnes representation in Ref. [35].
Taking some special assumptions on the virtual masses and external momenta, Ref. [36] presents some GKZ-hypergeometric systems of Feynman integrals with codimension= 0 or codimension= 1 through the Lee-Pomeransky parametric representations [37]. Using the triangulations of the Newton polytope of the Lee-Pomeransky polynomial, the authors of Ref. [38] present GKZ-hypergeometric system of the sunset diagram of codimension= 6. They also construct canonical series solutions which contain three redundant variables besides three independent dimensionless ratios among the external momentum squared p 2 and three virtual mass squared m 2 Actually it is a common defect of GKZ-hypergeometric systems originating from the Lee-Pomeransky polynomial of the corresponding Feynman diagrams that codimension is far larger than the number of independent dimensionless ratios among the external momentum squared and virtual mass squared. To construct canonical series solutions with suitable independent variables, one should compute the restricted D-module of GKZ-hypergeometric system originating from the Lee-Pomeransky representations on corresponding hyperplane in the parameter space [39][40][41].
Some holonomic systems of linear PDEs are given through Mellin-Barnes representations of concerned Feynman integrals in Refs. [34,42,43]. Performing the Miller's transfor-mation [44,45], one derives GKZ-hypergeometric system of concerned Feynman integrals, whose codimension equals the number of independent dimensionless ratios among the external momentum squared and virtual mass squared. Using those holonomic systems given in Refs. [34,42,43], we present here relevant GKZ-hypergeometric systems for Feynman integrals of one-loop self-energy, two-loop vacuum, two-loop sunset, and one-loop triangle diagrams. Taking Feynman integrals of one-loop self-energy, two-loop vacuum, and massless one-loop triangle diagrams as examples, we illuminate how to construct canonical series solutions from those relevant GKZ-hypergeometric systems [46], and how to derive the convergent regions of those series with Horn's study [47]. To shorten the length of text, we don't state those mathematical concepts and theorems that have been used in our analyses here, because they can be found in some well-known mathematical textbooks [46,[48][49][50][51][52][53][54][55].
Basing on Mellin-Barnes representations of one-loop Feynman diagrams or those multiloop diagrams with two vertices, we can derive GKZ-hypergeometric systems through Miller's transformation, whose codimension of GKZ-hypergeometric system equals the number of independent dimensionless ratios among the external momentum squared and virtual mass squared. Nevertheless for generic multiloop Feynman diagrams such as that presented in Refs [56,57], the corresponding codimension of GKZ-hypergeometric system derived is far larger than the number of independent dimensionless ratios, whether using Mellin-Barnes or Lee-Pomeransky representations. In order to construct canonical series solutions properly, we constrain corresponding GKZ-hypergeometric system on restricting hyperplane in the parameter space.
The generally strategy for analyzing Feynman integral includes three steps here. First we obtain the holonomic system of linear PDEs satisfied by corresponding Feynman integral through its Mellin-Barnes representation, next find GKZ-hypergeometric system via Miller's transformation, and finally construct canonical series solutions. The integration constants, i.e. the combination coefficients, are determined from the Feynman integral with some special kinematic parameters. To make the analytic continuation of those canonical series solutions from their convergent regions to the whole parameter space, one can perform some linear fractional transformations among the complex variables.
Our presentation is organized as following. Through Miller's transformation, we derive GKZ-hypergeometric systems of Feynman integrals of one-loop self energy, massless one-loop triangle, and two-loop vacuum diagrams by using the holonomic systems of linear PDEs in Refs. [34,42,43] in section II. Then we present in detail how to perform triangulation and how to construct canonical series solutions from those GKZ-hypergeometric systems in section III. Actually some well-known results are recovered with the approach presented here. In section IV, we present GKZ-hypergeometric systems for the sunset diagram with three differential masses, C 0 function with one nonzero virtual mass, and C 0 function with three differential virtual masses, respectively. The conclusions are summarized in section V. Adopting the notation in Ref. [42,43], we write the scalar integrals of one-loop self energy, massless one-loop triangle, and two-loop vacuum diagrams respectively as Generally those holonomic systems presented above originate from the Mellin-Barnes representations of corresponding Feynman integrals [34,42,43]. Using the systems of linear PDEs in Eq.
(2), one derives the following relations between f i (i = B, C, V ) and their contiguous functions (θ x i +θ y i + a i )f i the equations in Eq. (6) are changed as where Correspondingly the universal Gröbner basis of the toric ideal associated with A is The operators from Eq. (10) and Eq. (12) compose the generators of a left ideal [51] in the Weyl algebra D = C z i,1 , · · · , z i,6 , ∂ z i,1 , · · · , ∂ z i, 6 where C denotes the field of complex numbers [55]. Defining the isomorphism between the commutative polynomial ring and the Weyl algebra [46] Ψ : C[z i,1 , · · · , z i,6 , ξ i,1 , · · · , ξ i,6 ] → D, one obtains the state polytope [53] of the preimage of the universal Gröbner basis in Eq. (12) as on the hyperplane In Eq. (13) we take multi-index notation for abbreviation, i.e.
where α, β ∈ N 6 , and N = {0, 1, 2, · · ·} denotes the set of non-negative integers. The normal fan of the state polytope in Eq. (14) is the Gröbner fan of the left ideal generated by the operators in Eq. (10) and Eq. (12). Because codimension= 2 for all GKZ-hypergeometric systems, the Gröbner fan equals the hypergeometric fan. These two fans are indispensable in the construction of canonical series solutions of corresponding GKZ-hypergeometric systems.
A. Triangulation
The following matrix G A is a Gale transform of A in Eq. (11): whose column vectors compose the secondary fan Σ A of GKZ-hypergeometric system in Eq. (10). Actually the state polytope in Eq. (14) of universal Gröbner basis indicates that the hypergeometric fan H A and the Gröbner fan G A all equal the secondary fan of GKZhypergeometric system here: with e 1 = (1, 0) T , e 2 = (0, 1) T . The cones are defined as [52] Cone({e 1 , e 2 }) = {λ 1 e 1 + λ 2 e 2 | λ 1 , λ 2 ∈ R + } , where R + denotes the non-negative reals.
For a generic weight vector ω ∈ Cone({e 1 , e 2 }), the corresponding triangulation [53] △ ω = {σ a 1 , σ a 2 , σ a 3 , σ a 4 } is unimodular, and supports the toric ideal which corresponds to the initial monomial ideal in Certainly four standard pairs [46] (1, σ a j ), (j = 1, 2, 3, 4) produce the following exponent vectors of initial monomials of series solutions For a generic weight vector ω ∈ Cone({e 2 , −e 1 − e 2 }), the corresponding triangulation is also unimodular, and supports similarly the toric ideal which corresponds to the initial monomial ideal in Correspondingly four standard pairs (1, σ b j ), (j = 1, 2, 3, 4) induce the following exponent vectors of initial monomials for series solutions Finally for a generic weight vector ω ∈ Cone({e 1 , −e 1 − e 2 }), the corresponding triangu- is unimodular, and supports the toric ideal which corresponds to the initial monomial ideal in Correspondingly four standard pairs (1, σ c j ), (j = 1, 2, 3, 4) give the following exponent vectors of initial monomials for series solutions The secondary fan of GKZ-hypergeometric system in Eq. (10), ω in each cone is a representative weight vector.
B. Construction of canonical series solutions
The integer kernel ker Z (A) of the matrix A is defined as The vector u ∈ ker Z (A) can be decomposed into positive and negative part, where u + and u − are non-negative vectors with disjoint supports. In order to construct canonical series solutions of GKZ-hypergeometric system, we define the negative support of Furthermore we introduce the following subset of ker Z (A) With an exponent vector p of the initial monomial, the corresponding canonical series solution of the hypergeometric system Eq. (10) is well-defined: where the abbreviations For one-loop self energy and two-loop vacuum, p σ a Using Eq. (32), one derives Then where F 4 denotes the fourth Appell function [49]. Similarly we have For the fourth Appell function (a) n 1 +n 2 (b) n 1 +n 2 the adjacent ratios of the coefficients are To investigate the absolutely and uniformly convergent region of the double series in Eq.(38), one takes with t = n 2 /n 1 , r x = |x|, r y = |y|, respectively. The generator of the principal ideal where C[t x , t y ] denotes the polynomial ring of t x , t y on the field C. Since t x , t y ≥ 0 the equation g(t x , t y ) = 0 gives the Cartesian curve of the double power series in Eq.(38) as • For |x i | ≤ 1, |y i | ≤ 1, the Feynman integral is which is convergent in the region |x i | + |y i | < 1.
• For |x i | ≥ 1, |y i | ≤ 1, the Feynman integral is which is convergent in the region 1 + |y i | < |x i |.
• For |x i | ≤ 1, |y i | ≥ 1, the Feynman integral is which is convergent in the region 1 + |x i | < |y i |.
In order to determine those integration constants, i.e. the combination coefficients A σ,i , B σ,i , C σ,i , D σ,i with σ = a, b, c and i = B, V , we utilize expressions of the Feynman integrals at some special points of the parameter space. For the Feynman integral of one-loop self energy diagram B 0 (p 2 , m 2 1 , m 2 2 ), we employ the following expressions Using above expressions, one derives the combination coefficients as In a similar way, the combination coefficients involved in the Feynman integral of the twoloop vacuum diagram are written as For the triangulation △ ω = {σ a 1 , σ a 2 , σ a 3 , σ a 4 } of the massless one-loop triangle diagram, the canonical series solutions are constructed as Similarly the canonical series solutions corresponding to the triangulation △ ω = which is convergent in the region |x C | + |y C | < 1.
Using the expressions of Feynman integral of the massless triangle diagram at some special kinematic points (Λ 2 , Λ 2 , 0), (Λ 2 , 0, 0) etc, one obtains those integration constants as Actually the Feynman integrals presented here can be written in terms of the Gauss function 2 F 1 [19] by the well-known reduction of the Appell function of the fourth kind [18], then the analytic continuation of those Feynman integrals is made to the whole parameter space through the transformations of the Gauss functions.
IV. GKZ-HYPERGEOMETRIC SYSTEMS OF OTHER FEYNMAN INTEGRALS
A. Sunset diagram with three differential masses In order to make the notation less cluttered, we adopt the multi-index convention [55], and write the Feynman integral of the two-loop sunset diagram as with the multi-index notations a = (a 1 , Certainly the dimensionless function T p 123 complies with the third Lauricella's system of linear PDEs [42,43] x i + a 2 ) T p 123 = 0 , (k = 1, 2, 3) . (57) Using the system of linear PDES presented in Eq. (57), one derives the following relations between T p 123 and its contiguous functions as Where n 2,j ∈ R 2 , (j = 1, 2) denotes the row vector whose entry is zero except that the j−th entry is 1, and the row vector n 2 = (1, 1). In addition n 3,k ∈ R 3 , (k = 1, 2, 3) denotes the row vector whose entries are zero except that the j−th entry is 1 and n 3 = (1, 1, 1). To proceed with our analysis, we define the auxiliary function as Where the row vectors u = (u 1 , u 2 ) ∈ R 2 , v = (v 1 , v 2 , v 3 ) ∈ R 3 , and the multi-index Obviously there are the relationŝ In addition, the contiguous relations of the function defined in Eq. (59) are given aŝ where the operatorsÔ i n (n = 1, · · · , 8) arê Those operators above together withθ u j ,θ v k define the Lie algebra of the hypergeometric system [44,45] in Eq. (57). Through the transformation of indeterminates the equations in Eq. (60) are changed as where , ϑ T = (ϑ z 1 , ϑ z 2 , ϑ z 3 , ϑ z 4 , ϑ z 5 , ϑ z 6 , ϑ z 7 , ϑ z 8 ) , Correspondingly the universal Gröbner basis of the toric ideal associated with A ⊖ is The operators from Eq. (64) and Eq. (66) compose the generators of a left ideal in the Weyl algebra D = C z 1 , · · · , z 8 , ∂ z 1 , · · · , ∂ z 8 . Defining an isomorphism between the commutative polynomial ring and the Weyl algebra [46] Ψ : one obtains the state polytope [53] of the preimage of the universal Gröbner basis in Eq. (66) on the hyperplane ξ 1 = ξ 2 , ξ 3 = ξ 6 , ξ 4 = ξ 7 , The [43,58,59].
In order to make analytic continuation of Lauricella functions from their convergent regions to the whole parameter space, we should perform some linear fractional transformations among the complex variables z 1 , · · · , z 8 . We will release our calculation results further elsewhere.
B. C 0 function with one nonzero mass
In this case, the scalar integral where the row vectors a = (a 1 , a 2 , respectively. Additionally the parameters a 1 = 4 − D, a 2 = 3 − D/2, a 3 = 1, b 1 = b 2 = 3 − D/2, p 2 3 = (p 1 + p 2 ) 2 , and the dimensionless ratios ξ 33 = −m 2 /p 2 3 , x ij = p 2 i /p 2 j , (i, j = 1, 2, 3). The dimensionless function F a,p 3 satisfies the holonomic hypergeometric system of linear PDEs Defining the auxiliary function where k = 1, 2, 3 and j = 1, 2, respectively. In addition, the contiguous relations of the auxiliary function are given aŝ where the operatorsÔ i n (n = 1, · · · , 8) are defined aŝ Certainly one can calculate the state polytope [53] corresponding to the universal Gröbner basis U Aa whose normal fan coincides with the Gröbner fan, then construct canonical series solutions in the convergent regions which are presented in Ref. [43]. In order to perform the analytic continuation of canonical series solutions from the convergent regions to the whole parameter space, one utilizes some linear fractional transformations among the complex variables z a 1 , · · · , z a 8 , then chooses u k = 1, v j = 1 (k = 1, 2, 3, j = 1, 2) finally.
V. SUMMARY
Using the system of linear PDEs satisfied by the corresponding Feynman integral in
Refs [42,43], we present GKZ-hypergeometric systems of one-loop self energy, one-loop triangle, two-loop vacuum, and the two-loop sunset diagrams, respectively. In those GKZhypergeometric systems the codimension equals the number of independent dimensionless ratios among the external momentum squared and virtual mass squared.
Actually one can derive GKZ-hypergeometric systems from Mellin-Barnes representations for the one-loop Feynman diagrams and those multiloop diagrams with two vertices, whose codimension equals the number of independent dimensionless ratios among the external momentum squared and virtual mass squared. Nevertheless for the generic multiloop Feynman diagrams, the corresponding codimension of GKZ-hypergeometric system is far larger than the number of independent dimensionless ratios, whether using Mellin-Barnes or Lee-Pomeransky representations. In order to construct canonical series solutions properly, one should constrain GKZ-hypergeometric system on restricting hyperplane in the parameter space.
Taking GKZ-hypergeometric systems of one-loop self energy, one-loop massless triangle, and two-loop vacuum diagrams as examples, we present in detail how to perform triangulation and how to construct canonical series solutions in the corresponding convergent regions.
The analytic continuation of those series solutions is performed through some well known reduction of the Appell function of the fourth kind. In order to make analytic continuation of those series solutions of GKZ-hypergeometric systems of the massive sunset and massive one-loop triangle diagrams etc., one can perform the linear fractional transformations among the complex variables.
One of the techniques not involved here is how to project GKZ-hypergeometric system to the restricting hyperplane. Another calculation not contained here is how to make analytic continuation of those canonical series solutions from their convergent regions to the whole parameter space through linear fractional transformation among the complex variables. Algorithm for the first problem has been presented in literature already, and the second problem is attributed to a problem of integer programming [60] in principle. We will release our results relating to those topics in near future elsewhere. | 5,184.2 | 2019-12-04T00:00:00.000 | [
"Physics"
] |
The twisted gradient flow coupling at one loop
We compute the one-loop running of the $SU(N)$ 't Hooft coupling in a finite volume gradient flow scheme using twisted boundary conditions. The coupling is defined in terms of the energy density of the gradient flow fields at a scale $\tilde{l}$ given by an adequate combination of the torus size and the rank of the gauge group, and is computed in the continuum using dimensional regularization. We present the strategy to regulate the divergences for a generic twist tensor, and determine the matching to the $\overline{\rm MS}$ scheme at one-loop order. For the particular case in which the twist tensor is non-trivial in a single plane, we evaluate the matching coefficient numerically and determine the ratio of $\Lambda$ parameters between the two schemes. We analyze the $N$ dependence of the results and the possible implications for non-commutative gauge theories and volume independence.
Introduction
In recent years, the continuous smoothing procedure known as the gradient flow [1][2][3] has received considerable attention. One of its most common applications has been, in combination with finite-size scaling techniques, the determination of the non-perturbative scale dependence of the gauge coupling constant. Examples of the usefulness of this approach range from precise, non-perturbative determinations of the QCD coupling constant and Λ parameter [4][5][6][7][8] to the study of Yang-Mills theories with near conformal behavior [9][10][11], or with large number of colors [12]. Several coupling renormalization schemes based on gradient flow techniques have been proposed to that end [3,[13][14][15][16]. These schemes can be related through a perturbative calculation to more traditional ones such as the MS scheme, a step often required to make contact with experiment. However, and despite their importance, perturbative calculations in such a set-up are scarce. In infinite volume, the matching has been determined up to both next-to-leading order (NLO) [3] and next-tonext-to-leading order (NNLO) [17], while for finite volume it has been done up to NNLO in the Schrödinger functional scheme using numerical stochastic perturbation theory [18,19].
In the scope of this work, we will focus on a particular gradient flow finite volume scheme for SU (N ) pure gauge theory, introduced by A.Ramos in ref. [16]. We will be presenting results for the matching at NLO of this scheme to the MS one, determining the coupling scale in terms of the size of a 4-dimensional torus endowed with twisted boundary conditions (TBC) [20]. From the point of view of perturbative calculations, TBC have an enormous advantage over periodic ones (PBC), as using TBC turns the set of zero-action solutions into a discrete one, and avoids the quartic nature of the fluctuations around A µ = 0 present with PBC [21]. The usefulness of TBC for perturbation theory was first formulated in the context of volume reduction in large N Yang-Mills theory [22,23], and was then extended to various other contexts at finite and large N [24][25][26][27][28][29][30][31][32][33][34][35][36][37][38]. Despite these advantanges, as we will show along this work, perturbative calculations in the twisted gradient flow scheme remain challenging, although so far an analogous perturbative calculation in the case of periodic boundary conditions has not been obtained.
Our interest in this calculation goes beyond the particular applicability of the results, and connects with theoretical ideas related to the concept of volume independence in gauge theories. An essential ingredient of the construction has to do with the dependence of the coupling on the number of colors. We will follow the finite-volume prescription adopted in [12], and set the SU (N ) running coupling scale to be proportional to an effective sizel combining the torus size and the number of colors. In perturbation theory, this effective scale is expected to jointly capture the dependence of the coupling on the volume and on N , once an angular variable depending on the choice of twist is fixed. For the purposes of this paper, we will keep N finite but use our results to analyze the dependence of the coupling on N , at large values of N . Setting the energy scale of the renormalized 't Hooft coupling to be µ = 1/(cl), we will consider two different types of N → ∞ limits, a thermodynamic limit in which the effective size is sent to infinity as c goes to zero while µ is kept constant, and a second one, called singular in [39], in which N is sent to infinity as the torus size is shrunk to zero at constant c in such a way as to keepl fixed [12,34,37,40] 1 . In ref. [12] this last limit was used to compute the SU (∞) running coupling through a step scaling procedure in which the step size was modified via changes in the rank of the gauge group.
This particular prescription for scale setting was inspired by the idea of volume reduction in lattice gauge theories. Originally formulated by Eguchi and Kawai [45], volume reduction states that in the (thermodynamic) large N limit, SU (N ) theory becomes independent of the physical size of the torus. Proof for this statement relies on the independence of the large N Schwinger-Dyson equations from lattice volume, which in turn requires center symmetry to be preserved. As the symmetry was shown not to hold with PBC [46], several alternative proposals were formulated [42,[46][47][48][49], one of which was the use of twisted boundary conditions [22,23], which has proven very successful provided the twist tensor is judiciously chosen [35,36,[50][51][52][53][54][55]. The idea of volume reduction with TBC was extended to the continuum theory in ref. [56], constituting the first formulation of the Feynman rules of a non-commutative Yang-Mills theory [57]. The singular limit in ref. [39] was first formulated precisely in the context of such non-commutative theories, with the effective torus sizel arising in a natural way through the Morita duality. Thus, ordinary gauge theories on a twisted torus are related to non-commutative theories with a rational value of the dimensionless non-commutativity parameter, the effective size corresponding to the one of the non-commutative torus. In 2+1 dimensions, several recent works [37,40] have analyzed the possibility of defining non-commutative gauge theories at irrational values of the non-commutativity parameter as the limit of a sequence of ordinary twisted gauge theories of an increasing number of colors. These works have shown that, if one wishes to avoid tachyonic instabilities [58], such a construction can only be achieved for an uncountable, zero-measure set of values of the non-commutativity parameter. We will, in light of these results, analyze the behavior of the coupling in the singular large N limit.
In this paper, we present a perturbative calculation in the continuum of the running 't Hooft coupling constant at NLO. The layout of the paper is as follows: in sec. 2, we introduced the twisted gradient flow (TGF) scheme, presenting the gradient flow observable used to define the running coupling (i.e. the energy density evaluated at a positive flow time proportional to an effective sizel), along with some specifics about the implementation of TBC in our setting. In particular, we detailed the orthogonal twist used throughout the paper. Sec. 3 then presents the perturbative expansion, along with the regularization and renormalization schemes. This is the longest, most technical section, and contains a fair bit of algebraic manipulation. The calculation is analogous to the one performed by Lüscher in infinite volume [3], though many particularities to the twisted finite volume scheme appear. We will simply mention that it contains the expansion of the observable in powers of the coupling, a reformulation of the NLO contribution as the sum of several integrals, the identification of the divergent terms entering the calculation, and a procedure to regularize them by relating them to infinite volume expressions that can be evaluated in dimensional regularization. Expressions for the observable at LO and NLO are provided within the section, but any reader interested in the final expression for the matching at one-loop order of the TGF coupling to the MS scheme may skip directly to sec. 4, which contains both the matching to the MS scheme and the ratio of Λ parameters (which need to be computed numerically). Results for the case of a 2-dimensional non trivial twist and several SU (N ) groups are presented in subsection 4.2. Sec. 5 discusses the dependence of the coupling on the number of colors, following similar arguments to those in [36,37]. A summary of results is presented in sec. 6. Many technicalities were moved for clarity to appendices A-D, including details on the algorithms used to compute the Λ parameter.
The twisted gradient flow coupling
One of the applications of the gradient flow method has been the computation of the Yang-Mills running coupling, using the energy density E(t) of the gradient flow field as the defining observable. At positive, non-zero flow time t, t 2 E(t) is a renormalized quantity and, at leading order in perturbation theory, is proportional to the MS coupling at a scale µ = 1/ √ 8t, which leads to a natural definition of a renormalized coupling constant [3]. In this work, we focus on a particular gradient flow scheme that makes use of finite size scaling on a torus with TBC. As discussed in the introduction, our set-up is based on the one introduced by A. Ramos in ref. [16], but differs slightly from it for reasons that will become clear in what follows.
The definition of the coupling
The gradient flow is based on the introduction of a parameter t, known as flow time, in such a way as to define a t-dependent gauge field B µ (x, t) matching the Yang-Mills one A µ (x) at t = 0. As flow time passes, this gauge field is smeared down towards the minimum action solutions, its evolution driven by the so-called flow equations: where D µ and G µν respectively stand for the covariant derivative and field strength tensor of the flow fields: This scheme is particularly useful, as observables built from the expectation values of products of B fields at positive flow time have been shown to be renormalized quantities [59]. The renormalized gradient flow coupling can then be defined in terms of the energy density of the flowed field: In infinite volume, this quantity can be used to define a renormalized SU (N ) 't Hooft coupling at an energy scale µ, given by [3]: , (2.5) where d stands for the number of space-time dimensions of the theory. Finite volume gradient flow schemes [12][13][14][15][16] use a formulation in which the gauge theory is defined on a finite torus instead, with each scheme differing in specific details such as, for instance, different boundary conditions. The most common choice is to use a symmetric torus, with all directions of equal length l, while setting the scale for the renormalized coupling in terms of l by fixing µ = 1/(cl), with c an arbitrary constant. Each specific choice of c, always taken to be smaller than unity, is an intrinsic part of the definition of the scheme. The SU (N ) TGF coupling used in this paper is inspired in the finite-volume schemes proposed in refs. [12,16,60]. We will leave the specifics of the scheme for the next subsection, but mention that our SU (N ) gauge theory will be defined on an 4-torus with TBC [20], and such that the torus has a period l in d t (twisted) directions, andl = lN 2/dt in the remaining 4 − d t ones, with d t being either two or four. The reasons behind our choice of an asymmetric torus will become clear in what follows. In this scheme, the twisted gradient flow 't Hooft coupling is defined by [12]: where F(c) is a constant defined in such a way as to have λ TGF (l) = λ 0 + O(λ 2 0 ), in terms of the bare 't Hooft coupling λ 0 .
The choice of boundary conditions and torus size
In this subsection, we will discuss the particular definition of the TGF scheme used in this paper. The main idea is to have a perturbative set-up that is as symmetric as possible [12,60]. To achieve this, we will look at the quantization of momenta in our particular setting, and select the torus size accordingly. We will begin with a generic discussion of the quantization of momenta in the presence of TBC, leading to the introduction ofl as the relevant length scale.
Let us start by considering a SU (N ) gauge theory defined on a d-dimensional torus of length l µ in each direction, and focus our analysis in the specific case of four dimensions for a gauge potential that satisfies 't Hooft TBC [20]. We will work with an orthogonal twist, for which the gauge potential can be fixed to be periodic in each direction up to a constant gauge transformation: where Γ ν are four SU (N ) matrices known as twist eaters which satisfy: with Z µν an element of the center of the gauge group, written in terms of an antisymmetric tensor of integers n µν as: The twist tensor n µν is preserved under gauge transformations, and uniquely characterizes the boundary conditions. It is said to be orthogonal when κ(n) = µνρσ n µν n ρσ /8 = 0 (mod N ). Among such tensors, we will focus only in the so-called irreducible twist tensors, which are the subset for which the only matrices that commute with all Γ µ are the ones proportional to the identity in SU (N ). Irreducible twist tensors have been known to be advantageous for perturbative calculations, as the class of gauge-inequivalent zero-action solutions is discrete [61,62] and zero-modes are eliminated, making computations in perturbation theory much easier. A detailed discussion of the conditions under which a twist is irreducible can be found in [63]. For the scope of this work, we will focus on two types of irreducible twist tensors (detailed below), which are non-trivial in either a single plane or in all of them. For the sake of clarity in the description, we will use gauge freedom to impose strict periodicity for the gauge potential in all directions except for a number d t of them, dubbed "twisted directions", taken to be either two or four, though the specific form of the twist matrices is irrelevant as long as eq. (2.8) is satisfied. We will write our orthogonal twist tensor in the form: where l g = N 2/dt depends on both the number of colors and the number of twisted directions, and k and l g are two coprime integers that guarantee that the irreducibility condition is satisfied. The choice to have a non-trivial twist in only the (0,1) plane is made by setting d t = 2 and 01 = − 10 = 1, and by choosing µν = 0 in any other plane, whereas to twist all planes non-trivially one must instead take d t = 4, and set µν to be antisymmetric and equal to 1 whenever µ < ν. With this choice: A non-trivial twist, such as the one above, will affect the quantization of momenta in the finite box.
The solution to the boundary conditions on such twisted tori in the continuum is well known [23], as one can see for instance in [36] for the general treatment when the torus is discretized on a lattice, or in [34] for an example in 2+1 dimensions in continuum perturbation theory. We will, in what is left of this subsection, recall some known results necessary to implement perturbation theory with TBC. We start by defining: with s µ (q) ∈ Z. Provided k and l g are coprime integers, there will be N 2 independent SU (N ) matrices of this type, of which the only non-traceless one is the one proportional to the identity matrix, i.e. the one for which s µ (q) = 0 (mod l g ) in all twisted directions.
Excluding it, the remaining N 2 − 1 matrices constitute a basis for the SU (N ) Lie algebra. IfΓ(q) satisfies: Γ νΓ (q)Γ † ν = e iqν lνΓ (q) , (2.13) with no summation over ν implied, the boundary conditions in (2.7) are trivially implemented through the Fourier expansion: where V ≡ µ l µ , and the prime in the sum denotes the exclusion of the momenta for whichΓ(q) ∝ I. In the periodic directions, for which Γ ν ∝ I, the momenta are as usual quantized in units of 2π/l ν . This is however not the case for the twisted directions, where a solution is provided by: wherek and˜ µν are given by: The momentum along the twisted µ directions is thus quantized in units of 2π/l µ , with l µ ≡ l µ l g . For this choice of s µ , the group structure constants in theΓ(q) basis become momentum dependent and are given by: and The tracelessness of theΓ(q) matrices thus forbids momenta such thatl µ q µ = 0 (mod 2πl g ) in all twisted directions, and so in particular it forbids zero momentum in the twisted box.
The previous analysis implies that momentum is quantized differently in periodic and twisted directions: it is quantized in terms of the inverse torus size for the former, and in terms of an effective size combining the torus period and the number of colors of the gauge group,l µ = l µ l g = l µ N 2/dt , for the latter. This observation has led us to a specific choice of torus size to define the TGF coupling in eq. (2.6), picked in such a way as to impose the same momentum quantization in all directions. When d t = 2, this will be achieved by considering an asymmetric torus of length l in the twisted directions andl in the periodic ones, whereas for d t = 4 we will instead pick a symmetric 4-torus of period l in all directions. This way, all momenta will always be quantized in units of 2π/l, and we will use this effective sizel as the renormalization scale for the running coupling.
Perturbative expansion
The procedure to determine the perturbative expansion of the coupling follows closely the one developed by Lüscher in infinite volume in ref. [3]. The main difference arises from the quantization of momentum on the torus, as momentum integrals become sums over an infinite set of discrete momenta, and from the change in the group structure constants due to the different choice of SU (N ) Lie algebra basis. Divergent momentum sums, however, can still be treated via dimensional regularization -see for instance [64] -in a way that will be detailed in this section.
Perturbative expansion of the energy density
As a first step towards obtaining the perturbative expansion of the observable, we will fix the gauge in such a way that the following periodicity conditions are satisfied: where Γ µ satisfies eq. (2.8), and with a twist tensor of the form shown in eq. (2.11). This restricts the set of allowed gauge transformations Ω(x) down to those preserving the form of the twist matrices, i.e. those satisfying: These boundary conditions are implemented through the Fourier expansion of the gauge field given in eq. (2.14). In the specific case of the asymmetric torus that we are considering, the torus volume is given by V = l dtl d−dt , and momenta in all directions are quantized in terms of the effective sizel. As we recall, the prime in the sum in eq. (2.14) denotes the exclusion of all momenta for whichlq µ = 0 (mod 2πl g ) in all twisted directions, which in particular excludes zero modes. With this, we may begin the perturbative expansion, which we perform around the A µ = 0, zero-action solution. We start in d = 4−2 dimensions by scaling the original gauge potential with the bare coupling, A µ (x) → g 0 A µ (x). The full Feynman rules in momentum space, given in the Feynman gauge and derived using the boundary condition-preserving Fourier representation mentioned in the previous section, can be found in appendix A.
It will be convenient to henceforth use a set of modified flow equations:
5)
ξ being a gauge parameter to be set to unity. At fixed t, the field derived from this modified flow equation can be related to the solution of the original one by a gauge transformation [3], and hence the modification does not affect gauge invariant observables such as the one we are considering in this paper. It can be shown that the corresponding (flow-time dependent) gauge transformation preserves the boundary conditions (3.1) at any given flow time [16].
These modified flow equations can be solved order by order in g 0 by expanding the flow field in powers of the coupling: The flow field satisfies the same boundary conditions as the original gauge potential and can be Fourier expanded, at any given order, in the same way: The expansion of the energy density in powers of g 0 can now be obtained by expanding the fields in E(t) directly. Dropping for clarity the arguments of the fields in position space, B µ (x, t), one gets, up to order g 4 0 : The corresponding expression in momentum space, however, is specific to the TGF set-up. In particular, the SU (N ) structure constants f abc appearing in infinite volume are replaced by the momentum dependent functions F (p, q, r) appearing in the commutation relations of theΓ(q) matrices -see eqs. (2.17), (2.18). For the sake of completeness we give below the seven different terms contributing to the expectation value of E(t) arising at order g 4 0 , with an additional 1/N normalization factor added for later convenience. Each term can be identified with one of the lines in eq. (3.8): (3.14) (3.15) The shorthand notation p i in the δ functions was used to denote the sum over all present momenta for each term. The E 0 term will turn out to be a combination of a leading O g 2 0 term and an O g 4 0 correction, whereas all other terms will turn out to be O g 4 0 . Then, the next step is to relate the flow fields to the actual gauge fields A µ (x), for which we will need to obtain an order-by-order solution to the flow equations.
Solving the flow equations in the TGF scheme
Let us consider the flow equation (3.5) with the gauge parameter ξ set to unity. This was already solved by Lüscher for the infinite volume case [3], but the results in finite volume are slightly different. Expanding the fields in perturbation theory, as in eq. (3.6), and dropping for clarity of notation the arguments of the fields in position space, the equations to solve order by order are of the form: (3.16) The first three orders will be enough to obtain the observable at order O g 4 0 : We may define a momentum space version of R (i) µ : under which the R µ terms read: In terms ofB µ (q, t), the flow equation in momentum space becomes: whose solution is immediate at first order: and which can be solved for the next two orders by directly integrating R (i) µ : Higher order terms, while increasingly tedious, can be obtained through the same iterative procedure. From these expressions, and using the Feynman rules from appendix A, we derived the expressions of the contributions from eqs. (3.9)-(3.15) in terms of sums over momenta. Introducing for the sake of readability the symbol: and after quite a bit of algebra, we ended up with: where the bare coupling and the volume have been replaced by the bare 't Hooft coupling and the effective length, and we have defined an auxiliary momentum p = q + r. The primes from the sums in the O(λ 2 0 ) terms have been discarded, as the F 2 (q, r, −q − r) factors automatically vanish for such momenta.
The energy density at LO
As we recall, our aim in this paper was to obtain a perturbative expansion of the observable E(t)/N at NLO, which in powers of the bare 't Hooft coupling can be parametrized as: We will begin by deriving the leading order term from the formulas in the previous subsection, given by: It will be convenient to introduce a few auxiliary variables and functions: in terms of which we may write: This A function can be expressed in terms of Jacobi theta functions θ 3 as: where we used Poisson resummation to rewrite the theta functions: The leading order infinite volume expression is retrieved in the c → 0,l → ∞ limit, taken in such a way as to keep cl fixed. In that limit: leading to: in agreement with the results in ref. [3].
The energy density at NLO
As for the subleading O(λ 2 0 ) term coming from eqs. (3.29) -(3.35), we found that, after a fair bit of algebra, it can be expressed in terms of a handful of integrals. By rewriting the momenta in denominators as exponents using Schwinger's parametrization, and the momenta in numerators as derivatives with respect to the flow time variables, we were able to recast the expression for the energy density at NLO as: where the I i are twelve relatively simple integrals to be detailed later on. As the computations and manipulations are rather long and tedious, we will illustrate the procedure using one of the simplest contributions, E 4 in eq. (3.33), and show the remaining E i contributions in terms of the basic integrals in appendix B. The E 4 contribution is given by: Using Schwinger parametrization to lift the momenta from the denominator, we then defined an integral: so as to rewrite: with the structure constants entering this expression through the definition of the symbol q,r given in eq. (3.27).
The presence of the structure constants in each E i will let us formulate the integrands in terms of Siegel theta functions. We may indeed rewrite N F 2 as: a substitution under which a generic integrand of the form: becomes: where we rescaled the variables s ≡ 4πl −2s , u ≡ 4πl −2ũ , v ≡ 4πl −2ṽ , and where we used the quantization of momenta in the twisted finite box to rewrite q and r in terms of integers. The connection to Siegel theta functions becomes clear by introducing the function In this expression I d denotes the d×d identity matrix and the sum over M denotes the sum over the corresponding integers m µ , n ν , regrouped into a 2d-dimensional column vector.
Recalling the definition of the Siegel theta functions: this matricial expression takes the form: (3.56) Using this notation, the integral entering E 4 reads: With this, only one last bit of manipulation is left in order to have the integrals ready for the calculation of the energy density at NLO. In terms of the variables t andĉ defined in eq. (3.38) and rescaling z appropriately we have: Introducing an auxiliary function Φ(s, u, v,θ) that incorporates the normalization factor in front of the integral: Φ(s, u, v,θ) = N G(ĉs,ĉu,ĉv,θ), (3.59) with: we rewrote the integrals in a fairly basic form allowing us to evaluate them numerically. For instance, for the integral in E 4 : A similar procedure can be followed for all the terms contributing to the energy density at NLO, leading to the result in eq. (3.46) for E (1) (t), where the twelve intervening integrals are:
Structure of UV divergences
As some of the integrals defined in the previous subsection are UV divergent in 4 dimensions, in this subsection we will discuss how to parametrize their asymptotic behavior. We will show that, in all of our cases, the divergent contributions can be expressed in terms of an infinite volume integral that can be regularized through analytic continuation in d.
The relation to the existing infinite volume calculation from ref. [3] will be presented in section 3.3.
The UV singularities are tied to the structure of the Siegel theta functions entering the definition of the Φ function: The real part of the matrix A(ĉs,ĉu,ĉv,θ), obtained by settingθ = 0 in eq. (3.54), is a positive definite symmetric matrix as long as det A(ĉs,ĉu,ĉv,θ = 0) > 0, i.e. when (su−v 2 ) > 0, which ensures that the series defining the theta function converges uniformly. It will be useful to define a new quantity: which is always positive definite in our integration ranges, and hence the determinant will be positive definite as well, except at the points for which u = 0. 2 The analysis of the asymptotic behavior of the integrals is much clearer once we apply Poisson resummation as in eq. (3.43) to each component of n in the definition of Θ: (3.76) Wheneverθ˜ m / ∈ Z d , the corresponding term will be asymptotically finite at u = 0. However, in the case in which we have a vector of integers, we will be able to remove theθ dependence by shifting n, thus leaving the asymptotic behavior to be driven by the shifted n = 0 terms. Such terms go, as we approach u → 0, as: This observation allows us to isolate the asymptotic divergence by identifying the cases for whichθ˜ m ∈ Z d . The first case in which this occurs is wheneverθ ≡k/l g = 0, for any value of m. For nonzeroθ, it will happen whenever˜ m = 0 (mod l g ). Since the vector m has nonvanishing components only along the twisted directions, this will be the case whenever m µ = 0 (mod l g ) simultaneously for all twisted directions. The terms responsible for the UV divergences at u = 0 have therefore been identified, and come in two categories: • Forθ = 0, terms with n = 0 and any value of m.
With this, we may begin the discussion on how the divergent integrals can be regularized.
Regularization
The regularization strategy will be based on splitting each integral into the sum of a finite piece that can be directly evaluated at d = 4 and integrated numerically, and an asymptotic term to be handled analytically using dimensional regularization. The way to implement such a strategy will be discussed in this subsection. We start by introducing a function H(s, u, v,θ) given by: with the usual meaning for the prime in the sum over m. The Φ function entering the integrals can then be rewritten as: which is quite advantageous, as the explicit exclusion of the momenta m proportional to l g from the sum automatically makes the term inθ = 0 finite at u = 0. All UV divergences at d = 4 thus come, in this parametrization, from the H(s, u, v, 0) term, and are of the form: with α defined as in eq. (3.75), and, as we recall, positive definite everywhere in the integrals. Hence, the sum over m is convergent, and the leading asymptotic behavior at u = 0 is controlled by the u −d/2 factor (times the additional powers of u appearing in the integrand prefactor). It will be useful to write the function Φ (0) in terms of the function A(x) from eq. (3.39): For reasons that will become clear later, we define: in terms of which we may rewrite: This formulation will be useful to analyze the asymptotic UV behavior of the integrals resulting from replacing the original function Φ in the integrand by Φ (0) . Before discussing the general treatment, we will deal with I 1 as a representative example. In this case the integral diverges at x = 0 as u = 2x. The piece containing the divergence thus reads: which, substituting in the expression for Φ ∞ , yields: The asymptotic behavior at small x can then be obtained by expanding A (2ĉt −ĉx/2) around x = 0. The integrand of the leading term goes as x 1−d/2 , whereas the next to leading term is convergent in d = 4. Hence, the integral will behave asymptotically as: Notice that in this expression the entire momentum dependence has been factorized into the normalization constant A(2ĉt ), which happens to be the same factor that appeared at leading order -see eq. (3.40). The integral I 1 (Φ ∞ , t ) can then be evaluated in dimensional regularization with d = 4 − 2 , leading to: The asymptotic expansion of all other integrals (except for I 9 , which we will address separately) is obtained in the same way: we expand the function A(ĉα) appearing in the definition of Φ (0) around u = 0, retain the leading term, and then use it to define: Remarkably, the integrals I i (Φ ∞ , t ) match the ones appearing in the infinite volume calculation (up to a factor depending on N ), which we will present in sec. 3.3.
We are now in a position to summarize, still keeping I 9 aside, the regularization strategy. The idea is to decompose the finite volume integrals into two pieces, one that is finite in four dimensions: and another one, shown in eq. (3.88) above, that requires analytic continuation to four dimensions and is proportional to each corresponding infinite volume integral. The ultraviolet divergences of the original integral are contained in this last piece, and appear as poles in 1/(d − 4), though only I 1 , I 4 , I 5 and I 7 turned out to have such 1/ poles. As for the strategy to regularize I 9 , some modifications, described in detail in appendix C, are required. The initial integral is decomposed as: with the Heaviside step function θ restricting the interval of integration over z. The first term on the right hand side is finite in four dimensions, while the other two have to be analytically continued to d = 4. Denoting I reg 9 these analytic continuations, we end up with: And therefore: and:
Infinite volume limit
The expression of the energy density in infinite volume can be easily retrieved (see [36]) by making the following substitutions in eqs. (3.28) -(3.35): The resulting expressions for the contributions to the energy density, after integrating over the d-dimensional momenta, can once again be rewritten in terms of twelve basic integrals, much like what happened in the finite volume case. We will first present the case of E 4 as an illustrative example, and then present the results for the general case. The infinite volume expression for E 4 is obtained making the substitutions from eqs. (3.95) and (3.96) in eq. (3.33). After integrating over momenta, we have: Setting t =ĉl 2 t /(4π) and recalling the definition of Φ ∞ from (3.82), one trivially derives: Comparing this with the finite volume expression: we can relate the finite and infinite volume expressions for E 4 . In fact, the infinite volume expression can be obtained from the finite volume one by taking theĉ → 0,l → ∞ limit at fixed t , as I fin 8 (t ) vanishes and A(2ĉt ) becomes N 2 −1 N 2 . A detailed discussion on that limit can be found in section 5.
Similar results hold for the other integrals, and thus the infinite volume energy density can be reproduced by performing a simple change in the finite volume formula from eq. (3.46): where I ∞ 9 (t ) = 0 (see appendix C), and I ∞ i (t ) = I i (Φ ∞ , t ) for the rest. Computing the infinite volume integrals in dimensional regularization with d = 4 − 2 , one derives the energy density: which agrees with the result obtained by Lüscher in ref. [3].
't Hooft coupling at one-loop
As we provided in the previous section a regularized expression for the expectation value of the energy density, we are now finally able to focus on several interesting results. Namely, we will in this section derive the running of the coupling, its relation to the MS coupling, obtain the Λ parameter, and present our numerical results for the case of the d t = 2 two-dimensional twist.
Perturbative matching to the MS coupling at one-loop order
Let us begin by recapitulating what has been achieved so far. As we recall, we expanded the observable E(t)/N up to NLO in powers of the 't Hooft coupling: with the leading order term being given by: The function A(x) was defined in eq. (3.41), and the variablesĉ = πc 2 /2 and t = 8t/(cl) 2 were introduced to make the expression more compact. The NLO contribution is written in terms of twelve integrals given in eqs. (3.62)-(3.73), regulated through analytic continuation in d = 4 − 2 . The leading asymptotic behavior of each integral was identified, and a subtraction procedure was implemented, allowing us to write the energy density at NLO as: All of the 1/ poles arising in dimensional regularization are contained in E (1) div (t), a quantity that can be trivially rewritten in terms of the infinite volume result E (1) ∞ (t) as: Gathering all of these pieces, our results for the expectation value of the energy density can be summarized in the following expression: where C 1 (t) is given by: The perturbative relation to the MS coupling at one-loop order is obtained by simply introducing the expression of the bare coupling in terms of the MS one: leading to: Setting the MS scale to µ = 1/ √ 8t = 1/(cl), the relation at one-loop order between the two couplings reads: with the following matching coefficient at one-loop order: and where we introduced the one-loop constant C 1 : The ratio between Λ parameters in both schemes is then determined, as usual, in terms of the finite one-loop constant c 1 : The purpose of the rest of this section will be to evaluate C 1 numerically, in the case of a single non-trivially twisted plane.
The matching coefficient for a two-dimensional twist
The ingredients required in order to compute the finite constant C 1 , entering the ratio Λ TGF /Λ MS , have been provided in sec. 3.2. In the specific case of d t = 2, the computational effort that has to be invested in order to determine C 1 is considerably smaller than for d t = 4, as the 8 × 8 matrices entering the expression for Φ are reduced to, at most, 4 × 4.
In particular, we have: × Re Θ 0|iB ĉs,ĉu,ĉv,θ − Θ 2 0|iA 0 ĉsl 2 g ,ĉu,ĉvl g , where we defined a 2 × 2 matrix A 0 : as well as a 4 × 4 matrix B containing theθ dependence given by, denoting the twodimensional Levi-Civita symbol: The starting point for the numerical calculation of C 1 will then be given by eqs. (3.89) and (3.93), defining I fin i . All these integrals have been built to be finite, so d can be set to four, and l g to N , in all intervening expressions. The calculation will come in two steps, the first of which will involve using a short Mathematica program to evaluate: for i = 1, · · · 8 and i = 10, · · · 12, (4.17) The required Jacobi theta functions are part of the standard Mathematica package, and for the integration we used the numerical integrators provided by the program by default. The derivatives appearing in some of the integrals were computed using finite differences. The second step is far more complex from a numerical viewpoint, as it encompasses the calculation of: for i = 1, · · · 8 and i = 10, · · · 12, (4.19) Two independent codes were prepared for this second step, one of them written in Mathematica 3 and the other in C++. The former, much like in the first step, made use of the standard Mathematica packages, numerical integrators, and finite differences to compute the integrals, whereas the full details of the inner workings of the latter can be found in (6) appendix D. We will simply mention here that different errors were used for each of the integrals, depending on computation time. The relative errors ranged from 10 −8 in the best cases (for the single integrals), to 10 −3 at worst for I 9 , which was by far the bottleneck.
The value of c also had significant effects, with lower values taking longer times to compute.
Two key aspects are particularly interesting in the analysis of the results: the dependence on c of the coupling at constantθ, and the general dependence inθ.
For an example of the former, we analyzed in detail the case of SU (3) withk = 1, with c ranging from 0.18 to 0.8. The results for C 1 are shown in table 1. Figure 1 displays log(Λ TGF /Λ MS ) as a function of c. In a few points we plot the results obtained with both the Mathematica and the C++ codes, which are perfectly compatible (errors in the data points are smaller than the size of the symbol). The yellow horizontal line shows the result obtained when the gradient flow coupling is evaluated at infinite volume. A detailed analysis on the approach to the infinite volume and the dependence on the number of colors is presented in sec. 5, but for now we will simply mention that at constant energy scale µ = (cl) −1 and fixed N , taking c → 0 is equivalent to taking the large volume limit, in which log(Λ TGF /Λ MS ) should approach the yellow line in the plot.
As for the study of the general dependence onθ, we considered a series of coprime values ofk and (small) N such thatθ ranged from 0.14 to 0.5. The full results for C 1 are shown in table 2 and fig. 2, in which they are plotted as a function ofθ for several values of c. We observe that the dependence onθ is rather smooth for the considered values ofk, N . A discussion about theθ-dependence for larger values of N will be presented in sec. 5.
Dependence on the number of colors and the magnetic flux
In this section, we will analyze the dependence of λ(cl) on the number of colors N and the angular variableθ =k/l g . We will consider two different limits, both of them taken at fixed value of the renormalized 't Hooft coupling. The first is a singular large N limit in the spirit of those introduced in ref. [39], in which N is sent to infinity while the torus size is sent to zero in such a way as to keepl fixed, and the second is the thermodynamic limit, achieved by simultaneously sending c to zero andl to infinity while keeping cl fixed. The idea that the infinite volume limit can be attained atl → ∞ by sending either the torus size or the number of colors to infinity is implicit in our construction.
Singular large N limit andθ-dependence
Singular large N limits such as the one described above have been employed in various contexts. In ref. [12] the non-perturbative running of the SU (∞) 't Hooft coupling was computed through a step scaling procedure implemented by changing the rank of the gauge group. The calculation was done in the extreme case of TEK reduction on a one-site lattice with an effective size given byl = a √ N , where a denotes the lattice spacing. The continuum limit at fixedl was achieved by sending N to infinity, allowing the authors to compute the evolution of the coupling constant through a wide range of scales, and matching the two-loop perturbative formula at small coupling rather well.
These type of limits have also been considered in the framework of non-commutative field theory. The gauge theory we are considering is equivalent, through the Morita duality, to a non-comutative gauge theory whose rational adimensional non-commutativity parameter is given precisely byθ, a mapping through which the effective torus sizel corresponds directly to the size of the non-commutative torus in the dual theory. One of the proposals raised in ref. [39] was to define non-commutative gauge theories at irrational values ofθ through a sequence of ordinary SU (N i ) twisted Yang-Mills theories with increasing number of colors andθ i =k i /N i →θ. In 2+1 dimensions, ref. [37] has shown that this is only possible, avoiding tachyonic instabilities, for an uncountable zero-measure set of values of θ, such as for instance a sequence of values ofk and N defined throughk i /N i = F i−2 /F i , where F i denotes the ith term in the Fibonacci sequence. In that case, instabilities in the large N limit are avoided and the limiting sequence tends toθ = (3 − √ 5)/2. In 2+1 dimensions, the condition required to avoid instabilities has been shown to be given in terms of a quantity dubbed Z min : where the symbol ||x|| is used to denote the distance from x to the nearest integer [37,40]. Tachyonic instabilities and symmetry breaking transitions can be avoided as long as Z min > 0.1. Remarkably, this parameter also controls, in 4-dimensional perturbation theory, the size of the contribution of non-planar diagrams to the expectation value of Wilson loops [36]. The limiting procedure to define non-commutative gauge theories at irrational values of the non-commutativity parameter relies on the asumption of continuity inθ. The one-loop matching constant C 1 depends on the choice of the parameter c defining the renormalization scheme, the rank of the group, and the magnetic flux k, and, in particular, given a fixed value of c, one should analyze under which conditions the k and N dependence is fully encoded in the dimensionless ratiok/N definingθ. While a detailed analysis of theθ dependence is beyond the scope of this paper, we did look at the integrals I 1 and I 2 entering the definition of C 1 as representative examples of integrals that are respectively UV divergent and finite after dimensional regularization. Figures 3 and 4 show how the I 1 and I 2 contributions to C 1 depend onθ for c = 0.15 and c = 0.30 respectively. We have explored many values of N ranging from N = 2 to N = 75025, the latter as part of the aforementioned Fibonacci sequence. For c = 0.3, we noticed that the dependence onθ of both integrals is continuous, with the exception of the point N = 2 in the case of I 1 . As c decreases, however, several other points corresponding to small values of N deviate from the general curve, and, in the case of I 1 , we observe a steep dependence onθ for sequences approaching rational values, in particular fork/N = 0, 1/4, 1/3 and 1/2. A similar dependence onθ has been observed in lattice perturbation theory when considering the contribution at second order of non-planar diagrams to the expectation values of Wilson loops [36], which can be understood in terms of the parameter Z min introduced earlier. Let us take a look at how the dependence in this Z min quantity enters in the I 1 contribution to C 1 . Theθ-dependent term comes from the function H(s, u, v,θ) defined in eq. (3.78). This contribution is finite in the UV and given by: As all terms included in the sum have a non-zero value ofθ˜ m, UV-finiteness is guaranteed. However, in the limit in which this quantity tends to zero, one would retrieve the divergence present in theθ = 0 term. We will in what follows show that such a limit is approached logarithmically in Z min . Let us begin by considering the leading asymptotic behavior for wheren denotes the integer closest toθ˜ m. Integrating over x, we get: where Z 2 (m) = m 2 ||θm|| 2 . If the argument of the incomplete Γ function is small, this goes as:ĉ 2 3A(2ĉ) θ 2 3 (0, 2iĉ) m∈Z 2 e −2πĉm 2 +iπ mn γ E + log πZ 2 (m) 2ĉm 2 + · · · (5.5) The logarithmic dependence in Z is tamed by the exponential damping inĉm 2 , but at small enoughĉ this suppression disappears, giving rise to the behavior presented in fig. 4a. This is more clearly seen in fig. 5 where we show the contribution of I 1 to C 1 as a function of log Z min (N, k). The left plot shows the points for which the minimal value is attained at m = (1, 0), and the right one those with the minimum at m = (2, 0), with the red vertical line in the plots corresponding to Z min = 0.1. Sequences approachingθ = 0 in the left plot andθ = 1/2 in the right one are deep in the region with small Z min , where a tiny change in the value ofθ translates into a large change in the integral. As a final remark, we will point out that the value of Z min stays almost constant along the Fibonacci sequence mentioned earlier, meaning that the results of the integrals will depend almost exclusively on the value of c. Therefore, as expected, the singular large N limit can be taken safely along such a sequence, making it optimal, for instance, for the determination of the SU (∞) running coupling using the reduction techniques employed in ref. [12].
Large volume limit
So far, we have been discussing the dependence of the matching constant C 1 on the number of colors and the flux-dependent parameterθ for a fixed value of c, the parameter defining the TGF scheme. In contrast, in this subsection we will be looking at a different type of limit, namely the one in which c tends to zero while the effective size is sent to infinity in such a way as to keep flow time fixed (thus fixing the energy scale of the coupling as well). This limit can be taken in two different ways, either by sending the smallest torus period l to infinity while keeping the rank of the group N fixed, or by sending N to infinity at fixed l. If volume independence holds true, in both cases the infinite volume expression should be recovered, and correspondingly C 1 should vanish. As we recall, at fixed value of t, C 1 is a function of three parameters: c, N and the magnetic flux k. In particular, all of the dependence on the boundary conditions (i.e. the dependence on k) is contained in C 1 , and will vanish in the thermodynamic limit provided C 1 does as well. We will therefore analyze in what follows the behavior of the matching constant in the approach to the thermodynamical limit, along with the size of the finite volume (or finite N ) corrections.
To prepare for such a discussion, we will first take a look at the LO term in the expansion of the energy density, eq. (3.40), with t set to (cl) 2 /8. As we recall, the dependence on c and N came from: where: In the infinite volume limit, understood in the sense of c → 0 at fixed l g , one has F 0 (0, d) = 1 and therefore: leading to a LO term in agreement with the results found in ref. [2]. The leading correction is exponentially suppressed with the square of the volume as: If the large N limit (i.e. large l g ) at fixed l and constant cl g is taken instead, one gets A(πc 2 ) = 1 + O(1/N 2 ), which does indeed correspond to the infinite volume large N limit. The approach to the limit is in that case powerlike, with 1/N 2 corrections. The discussion of the NLO term, on the other hand, is more involved and requires some previous steps to be properly considered. As we recall, the different contributions to C 1 can be written in a compact way as 4 : 11) and it is given by: (5.12) with: In order to analyze the approach to the infinite volume limit, it is more convenient to look at the expression resulting after Poisson resummation in m. We will, for simplicity's sake, focus on the case of the two-dimensional twist, d t = 2, and will move the full detail of the computations to appendix E for clarity. We will separate each of the contributions to C 1 intoθ−independent andθ−dependent terms, given by: , (5.14) where the functionĤ is obtained by subtracting the zero modes fromĤ after Poisson resummation (see appendix E for the details), and: T D , and depends on two quantities:ĉα and cαu/s.
The simplest case corresponds to integralsĪ 1 ,Ī 2 andĪ 4 , for which bothĉα andĉαu/s tend to zero in theĉ → 0 limit in all of the integration range. The leading contribution, derived in appendix E, is given by: Integrals for which the infinite volume contribution I ∞ i is UV-divergent at d = 4, such asĪ 1 andĪ 4 , have a leading correction that goes as ∼ log(c 2 N 2 ) exp(−1/(cN ) 2 ). I ∞ 2 is UV-finite and the leading correction has a purely exponential decay in the thermodynamic limit, given by exp (−1/(cN ) 2 ). We show in fig. 6 the dependence of these integrals on cN for several values ofk and N , plotting their value multiplied by the factor (N 2 − 1) exp(1/(cN ) 2 ). The continuous lines in the plot are given by the formulas presented above and describe very accurately the data for small cN . In the limit obtained by sending N to infinity and c to zero at small, fixed cN , the three integrals also go to zero with corrections of order 1/N 2 .
The general dependence of C 1 on cN as cN → 0 is in fact well described by a formula analogous to eq. (5.20); an example of this for the case of SU(3) is shown in fig. 7, where C 1 is displayed as a function of (cN ) 2 . The continuous line in that plot is the result of a fit to the functional form f (cN ) = exp(−1/(cN ) 2 )(α + β log(cN ) + γcN + δc 2 N 2 ). In order to push the calculation of C 1 to smaller values of c, we split it into two pieces, represented by the open blue circles and the yellow squares in the plot. The most relevant part comes from the contributions ofĪ 3 ,Ī 7 ,Ī 10 ,Ī 11 andĪ 12 , which we were able to compute down to values of (cN ) 2 ∼ 0.1. Asymptotically, this piece is described quite well by the function f (cN ), with a leading dependence on c of the form log(cN ) exp(−1/(cN ) 2 ).
In the rest of this section, we will explore how the infinite volume limit is approached for the remaining integrals (excludingĪ 1 ,Ī 2 , andĪ 4 ). The discussion is a bit more complex in their case, as the leading correction goes as c 2 for each of the integrals, but the corrections cancel out when all contributions to C 1 are considered. We will first analyze the case ofĪ 3 in detail to see how the cancellation takes place, and then generalize it to all other cases. For this integral, in theĉ → 0 limit,ĉαu/s goes to zero in the full integration range, and the leading dependence is given by: in the cN → 0 limit is: with a 1 = −1.76508480122121275 and, for instance, a 2 (N = 3) = 3.59085631503990722. The quantity a 2 (N )/N 2 grows logarithmically with N 2 , as shown in fig. 8. One can show that, in the infinite volume limit, all remaining integralsĪ i converge in the same manner, being proportional to I 0 with a proportionality coefficient of +1 for i = 5, 6, 7, of -1 for i = 10, 11, 12 and of 4 and -2 in the cases ofĪ 8 andĪ 9 respectively. Combining eq. (3.46) with these coefficients, it is easy to show that the total contribution of the leading (cN ) 2 term to C 1 vanishes. We did not analyze in detail how the different integrals approach zero after subtracting the quadratic piece in c, but, based on the results presented in fig. 7, we expect other possible power like corrections to cancel out as well when combined to form C 1 , the final result exponentially decaying towards zero with a leading dependence on c of the form ∼ log(cN ) exp −(cN ) −2 /(N 2 − 1). A preliminary analysis was performed for the case of I 3 , with the quantityĪ 3 − I 0 times the factor (N 2 − 1) exp((cN ) −2 ) being shown in fig. 9 as a function of cN for several values of N . Each point in that plot was obtained from the exact expression forĪ 3 , and the continuous lines correspond to the approximate expression obtained combining eqs. (5.16) and (5.17). This decomposition is quite useful towards analyzing the N dependence of the integral, and so we displayed each of the two pieces in figs. 10a and 10b as a function of the appropriate scaling variable cN . Theθ-independent term is presented in fig. 10a, multiplied by the factor (N 2 − 1) exp((cN ) −2 ) scaling away most of its N dependence. For cN → 0, the integral decays exponentially as ∼ exp(−(cN ) −2 ), whereas in the large N limit at fixed value of cN it goes to zero with quadratic corrections in 1/N 2 . The analysis of theθ-dependent part is more complicated, as one needs to take into account the dependence on the magnetic flux k. The decay ofĪ 3 towards zero is in this case faster than exponential, going as ∼ (cN ) 2 exp(−(cN ) −2 ). This is shown in fig. 10b, where we plottedĪ (0) 3T D multiplied by the inverse of this factor times (N 2 − 1) as a function of cN for various values of N and the magnetic flux. In the large N limit taken at fixed cN , this term also scales to zero as 1/N 2 . It would be interesting to study theθ-dependence for large values of N in more detail for both this integral and the others, but such an analysis goes beyond the scope of this paper.
Summary and conclusions
We computed the perturbative expansion at one-loop order of the SU (N ) twisted gradient flow coupling, including the matching to the MS infinite volume scheme at a renormalization scale µ = 1/(cl) given by a combination of the size of the torus and the rank of the gauge group. The corresponding one-loop finite piece was determined numerically in the case of a two-dimensional non-trivial twist for whichl = lN . The computation was done for a range of values of c (the number relating the energy scale to the size of the torus), of the magnetic flux, and for several values of the rank N of the gauge group, allowing us to obtain the ratio of Λ parameters between the TGF scheme and the MS one.
Moreover, we deemed it interesting to explore the dependence of the coupling on the number of colors and the magnetic flux in a bit more depth, and so we analyzed the dependence of λ T GF in two different limits. First, we studied the limit in which N and the torus size are sent to infinity and zero respectively in such a way as to keepl, and hence the renormalized 't Hooft coupling at scale µ = 1/(cl), fixed. This is a singular large N limit in the spirit of those introduced in [39], albeit a rather non-standard one since non-planar, θ-dependent diagrams survive the limit as long asl is finite. The connection of this case to non-commutative Yang-Mills theory is straightforward through the use of the Morita duality: the non-commutative dual torus is of lengthl and has a dimensionless noncommutativity parameter given byθ =k/N . Our analysis also supports the observation, first presented in [37,40], that the avoidance of tachyonic instablities when taking the singular limit is only possible for a zero-measure, though uncountable, set of values ofθ. Curiously, one of the successful cases, of limiting parameterθ = (3 − √ 5)/2, relies on a sequence of Fibonacci numbers with k = F i−2 and N = F i with F i denoting the i-th element of the Fibonacci series [40].
The second limit at which we looked was the thermodynamic limit, in which c is sent to zero andl is sent to infinity while keeping the energy scale µ constant. This leads to the one-loop expression of the 't Hooft gradient flow coupling at infinite volume [3]. Our results give support to the reduction idea, in the sense that the SU (∞) coupling in the thermodynamic limit can also be recovered at fixed torus size by sending N , and hencel, to infinity, in which case the limit is approached with 1/N 2 corrections.
A The Feynman rules with twisted boundary conditions
The Feynman rules for the set of irreducible twist tensors used in this work have been derived in various contexts both in the continuum (see for instance [65] and references therein for a review) and in the lattice regularized version of the theory [23,26,33,36]. In this appendix, we will summarize the ones relevant to our work, derived in the continuum.
The set of allowed gauge transformations in our theory will be restricted to those preserving the form of the boundary conditions in eqs. (3.1), (3.2), using the irreducible twist given in eq. (2.11), and the remaining gauge degrees of freedom will be fixed using a generalized covariant gauge of parameter ξ consistent with the boundary conditions. After scaling the gauge potential with the bare coupling g 0 , the Lagrangian density, including the gauge fixing terms, reads: where D µ ≡ ∂ µ + ig 0 A µ is the covariant derivative and c,c denote the ghost fields. One may then obtain the propagators of the gauge and ghost fields using the Fourier expansion of the gauge potential given in eq. (2.14), along with an analogous one for the ghost fields: where the momenta appearing in these expressions are quantized in units of the effective sizel. The Feynman rules for the vertices are then obtained from the commutation relations in eq. (2.17), and are expressed in terms of the momentum-dependent structure constants F (p, q, −q − r). The terms contributing to minus the gauge fixed action are the following: • 3-gluon term: with: • 4-gluon term: with: • Ghost-gluon term: These rules can be easily used to derive different quantities, such as the one-loop correction to the propagator. At order O(g 2 0 ) and in the Feynman gauge (ξ = 1), the vacuum polarization tensor can be obtained as shown in ref. [65]: B Integral form of the energy density at NLO As we recall, the energy density at NLO in the twisted gradient flow scheme can be expressed in terms of several integrals. In section 3.1.3, we chose for both clarity and concision to show a single example of the derivation of these integrals, and left the expression of the full seven O λ 2 0 contributions to the observable E/N in terms of the integrals for this appendix. The contributing E i terms are the following: When summing all of the terms contributing to E (1) (t), the I i terms cancel out, and thus: which is the NLO result given in eq. C Regularization of I 9 We will present here the full details of the procedure to regularize the integral I 9 , defined in eq. (3.70), which differs slightly from the general treatment described in sec. 3.2.1. As we recall, the initial integral is split into three terms: with the Heaviside function θ restricting the integration intervals in z. The first term on the r.h.s. of this expression is already finite in four dimensions, whereas the other two will be shown to be so as well after analytical continuation to d = 4. Let us start by discussing the treatment of I 9 (Φ (0) , t ). The original integral can be rewritten as: which, after some algebraic manipulation, becomes: This integral I is in d = 4 − 2 dimensions of order times zero. The asymptotic behavior at z = 0 is then obtained by expanding A(ĉ(2t + z)) around z = 0, leading to: The integral over z presents a pole in 1/ , but it is cancelled when multiplied by I, leading to a final result that is identically zero for d = 4. This I ∞ 9 is precisely the integral appearing in the infinite volume calculation, and vanishes, as we have just seen, for d = 4 in dimensional regularization.
The remaining I 9 (θ(z − 1)Φ (0) , t ) term can be treated in a similar way. One takes the initial expression: and rewrites it in a form identical to eq. (C.2), only with the integral over z restricted to the interval [1, ∞]. After some manipulation, the regularized result becomes: which is finite in d = 4 dimensions, and which we were able to evaluate numerically.
D Numerical implementation of the integration algorithm
In order to perform the numerical computation required to obtain the results presented in sec. 4.2, we prepared a code in C++ to compute the values of the Φ(s, u, v,θ) functions and their derivatives at any point, and integrate them along the corresponding ranges using the trapezoidal rule up to a target precision. We will in this section begin by explaining how the computation of each Φ(s, u, v) is performed, and then detail how the integration algorithm works.
D.1 Momentum Sums
As we recall, we had to compute the following quantity: where we introduced a generic matrix X to denote either A 0 or B from eqs. (4.15) and (4.16). We used Poisson resummation to write the sums in terms of both X and its inverse, allowing us to simultaneously compute several equivalent versions of the three terms of Φ fin , which let us exploit the fact that convergence speed depends on the (s, u, v) point being considered to speed up the program. We defined eight quantities to be computed: where we used the shorthands B θ ≡ B(ĉs,ĉu,ĉv,θ), B 0 ≡ B(ĉs,ĉu,ĉv, 0), andB ≡ B(ĉsN 2 ,ĉu,ĉvN, 0) for clarity. Several of these expressions are redundant: allowing us to rewrite the observable Φ fin (s, u, v) in four equivalent forms: We derivated these four equivalent expressions in the integrals in which it was required, simply using the chain rule and computing the derivatives of each E i function when needed. We will skim over the details of the algorithm used to generate the momenta in the sums, simply mentioning that we defined a four-dimensional integer vector M t = (m 1 , n 1 , m 2 , n 2 ) and generated the corresponding combinations of integers m i , n i , using the M → −M symmetry in the integrand to shorten the computation time. The momentum tetrads were generated in an orderly manner, starting with all contributions of the tetrads with |m i |, |n i | = 0, 1, then adding the ones with some |m i |, |n i | = 2, then |m i |, |n i | = 3 and so on and so forth, adding terms with momenta of increasing order until the sum converges (in the sense that we will detail below).
Thus, the code simply runs through momentum tetrads of increasing order, and passes them through a filter that checks whether or not m is proportional to N and whether or not n = 0, computing and adding the relevant exponential terms to each of the eight E i terms. Once every tetrad of a particular order has been processed, the program computes the value of Φ fin up to that order in the four equivalent ways shown earlier, and checks whether the variation of each term between the previous order and the new one is smaller than a set quantity times the value of the function. If that turns out to be the case for any of the four expressions, the sum is considered to have converged and that particular Φ fin is returned as the result. To avoid early spurious convergences, we set a minimum order of four for the sum. The same relative error was also used as the convergence criterion for the integration algorithm (see next subsection), and ranged between 10 −3 and 10 −8 depending on the integral, due to differences in runtime between them.
D.2 Integration Algorithm
Now that we explained how the integrand is computed at each point, we may focus on the integration algorithm, for which we chose to use a fairly standard trapezoidal rule for multiple integrals in which the integral along each coordinate is approximated using an increasing number of trapezoids until a target precision is reached. We will begin by quickly illustrating how a generic single-dimensional integral works in our code, generalize it to the multiple ones, and then mention a few specific choices of strategy.
Consider thus a single integral over a finite interval, say for instance the interval z ∈ [0, 1]. The code begins by computing the value of the integrand, which in this case would be the Φ function, at the beginning and end of the interval, and approximates the integral with a trapezoid. The integrand is then determined at the middle point z = 0.5, and the integral is approximated with the two z ∈ This subdivision generated by computing the integrand at the midpoints goes on, until the variation in the approximated integral between one order and the next is smaller than a set target (the same that we used for the Φ functions above) times the value of the integral at that order, at which point we consider that convergence has been reached and the integral is finished. As we mentioned earlier, in our runs ranged between 10 −3 and 10 −8 .
Multiple integrals are trivial in such a setting: one simply starts with the integral over the outermost coordinate, z, but at every point in which the integrand needs to be determined instead of computing the Φ function, one recursively calls the integration routine to obtain the integral over the next coordinate.
To allow for easier parallelization, and since the integrand tends to have more structure near z = 0, we chose to split the integral in z into a set of pre-chosen subintervals, with a shorter step size at smaller values of z, and treated the integration along each of these subintervals separately. To avoid spurious convergences, we imposed a minimum of eight points in each integration subinterval. Moreover, in the cases in which the integrals went up to infinity in the z coordinate, we ran the integration code up to z max = 10 4 and extrapolated the result by fitting the results of the last ten subintervals to a simple shifted exponential of the form I f it = a 0 − a 1 e −a 2 (z−zmax) , using the fitted a 0 as the final result of the integral. A simple least squares method algorithm was used to perform the fits.
There were a couple of peculiarities worth mentioning regarding integrals I 8 and I 9 . For the former, and after performing a change of variables so that the second integral runs up to x = 1, we noticed that the contribution to the integral is concentrated around z = 0, with the profiles of the integrand over x peaking at small values of z and vanishing after a range ∼ z −1 . This means that the strategy to keep dividing the integration interval into halves in the x coordinate is quite inefficient, as the contribution is concentrated in a small region and one is throwing many points into areas that are effectively zero. To avoid this issue, we chose to subdivide the inner integral into 1,5,50,500 and 1000 equal subintervals as z runs up to 1,10,1000 and 10000 respectively. As soon as the integral over two consecutive subintervals in the x axis vanishes for z > 1, the subintervals that follow are ignored entirely, greatly speeding up the computation without affecting the result.
The case of I 9 is a bit special in that the regularization was different from the other integrals, with a Heaviside θ(1 − z) function being introduced in the integrand (see the end of sec. 3.2.1 and app. C for the specifics) and separating the bits before and after z = 1. For the numerical computation, we performed the same change of integral as in I 8 to make the second integral run up to y = 1, but then the Heaviside function became a θ(1 − y z ) function, with the integrands being different before and after this point. As convergence turned out to be painfully slow when both integrands were considered jointly, we simply forced the integrals in y to be split from z = 1 onwards into two subintervals [0, 1/z ] and [1/z , 1], with the convergence of each side being considered separately.
Due to the procedure we used to determine the convergence of the integrals, for a given integral I, and dubbing the number of integrals to perform n i (single, double or triple), the final error of the integral is: This comes from the fact that both the error of the Φ functions and the convergence criterion for the integrals is given by the same , so for a single integral: Additional integrals simply add extra 1+ factors, which end up generating the (1+n i ) term. In the cases where the integrals ran up to infinity in z and had to be fitted, we presented as the final error either ∆I or the error from the fit itself, whichever was larger.
Moreover, some issues were caused by some computed quantities hitting machine precision, slowing down the computation while leaving the results effectively unaffected. To deal with them, we introduced several hard cuts in the integrals, integrands and determinants. In particular, we made it so that any Φ function returning a value under 10 −12 , any inner integral returning any value under 5 × 10 −12 (or 10 −10 in the cases of a few intervals in which using 5 × 10 −12 led to severe slowdowns), and any exponential returning a result over 10 −13 is automatically set to be exactly zero. The cut in the integrals is also used in the convergence checks we mentioned earlier: whenever the value of the integral times becomes smaller than the precision cut, the precision cut is used as the convergence criterion instead.
Lastly, we need to mention that, despite the integrals computed being finite, convergence near the point (s, u, v) = (2, 0, 0) can become quite slow, as the integrand approaches machine precision. To address this issue, a cut in u was introduced, setting the integrand to zero when u < 0.01 in the integrals in which such point is part of the integration region Figure 11. We display several examples of the profile of the integrand as a function of u near (s, u, v) = (2, 0, 0) for several integrals for c = 0.7, to illustrate that the cut introduced in u (displayed as a vertical line near the origin) has no effect on the resuts.
(namely, in I i for i = 1, 4,5,7,9,11,12). This cut does not appreciably change the results, as the contribution of the excluded area is well below the uncertainty of the total result. To illustrate this, we show in fig. 11 some examples of the profile of the integrand near the aformentioned (s, u, v) = (2, 0, 0) point, in which one can both see that the integrand is indeed finite and that the area excluded by the cut is negligible compared to the rest of the integrand.
E The infinite volume and large N limits
In this appendix, we will derive the formulas mentioned in sec. 5, which were used to analyze the N andθ dependence of C 1 at NLO in the coupling for the case of a two dimensional twist (d t = 2). As we recall, the contributions to C 1 , barring the one from I 9 which is slightly different, can be written in the form: The functionĤ can be expressed as: It is convenient, in order to analyze the infinite volume limit, to look at the expressions resulting after Poisson resummation in m for both theθ-dependent andθ-independent parts. For the latter, Poisson resummation yields: In theθ-dependent case, on the other hand, we begin by rewriting m =ml g + m c , with the components of m c µ taking values in the intervals [−l g /2, l g /2) or [−(l g − 1)/2, (l g − 1)/2] when l g is respectively even or odd. Poisson resummation is then performed with respect tom only, leading to: where we introduced a d t -vector χ whose components are given by χ µ = ||θ˜ m c µ ||, the symbol ||x|| denoting the distance from x to the nearest integer. Introducing χ µ = n c µ /l g and inverting the relation between m c and n c to write m c = k n c (mod l g ), we obtain: We will now split the original integral into two pieces, settingθ = 0 in one part to confine all of theθ dependence to the other one. As we want both of them to be well behaved both in the IR and in the UV, it will be convenient to first isolate the terms corresponding to zero-modes at each step of the calculation, both before and after Poisson resummation. In the original definition ofĤ, given by eq. (3.78), the terms with m = 0 were already subtracted, so we simply need to take away the terms corresponding to n = 0. Doing so leads to: whereĤ denotes the resulting function after subtracting those zero modes. The same can be done for the term containing A(ĉs), whose zero mode contribution is given by The termĤ(s, u, v,θ) containing theθ dependence requires a bit more work, but the idea is the same. We begin by rewriting the components of the 4-vector n along the twisted directions as n µ =ñ µ N + n c , with n c a 2-dimensional vector of integers taking values for N even or odd in the respective intervals [−N/2, N/2) or [−(N − 1)/2, (N − 1)/2], and then subtract the terms corresponding to n µ = 0 along periodic directions andñ µ = 0 along the twisted ones. Subtracting the m = 0 terms as well, and adding back once more the doubly subtracted ones, we end up with: where z µ = µν n cν k/N + in cµ v/(ĉN 2 αu). We may then rewrite each of the integrals contributing to C 1 as the the sum of two components I = I T I + I T D , the latter containing all of theθ dependence: and with n c and z µ as defined above. From this expression, one can analyze theĉ → 0 limit, whose approach is driven by two variables:ĉα andĉαu/s. In all contributing integrals but I 8 and I 9 , one of the two variables vanishes for all of the integration range when taking such a limit. The first thing worth noting is the fact that zero modes have already been subtracted from all terms not included in I T D , and hence the leading order in theĉ → 0 limit for them will be proportional to: (uα) −2 exp − π cN 2 α − πŝ cN 2 αu + · · · , (E. 18) which approaches zero at least exponentially in theĉN 2 → 0 limit, and goes, in the large N limit taken keepingĉN 2 constant, as 1/N 2 . In most cases, the leading contribution in theĉ → 0 limit is hence given by I T D . The simplest cases are those ofĪ 1 ,Ī 2 andĪ 4 , for which bothĉα andĉαu/s tend to zero in the whole integral range. Starting from the expressions of I T I and I T D , it is easy to derive the leading correction to the large volume limit. In the three cases it is given by: All three integrals can be analytically approximated with this, leading to: e −(cN ) −2 1 + 3γ E − 3 log 3c 2 N 2 − 3c 2 N 2 , (E.20) e −(cN ) −2 1 − 6c 2 N 2 , (E.21) e −(cN ) −2 1 − γ E + log 9c 2 N 2 − 3.544907702 cN + c 2 N 2 . (E.22) We will now consider the remaining integrals, looking first at theĉ dependence of I To leading order all these integrals go to zero as ∼ c 2 , with a coefficient depending on N that is identical in absolute value for all of them.
We will take a look atĪ 3 as an illustrative example. The leading contribution in thê c → 0 limit for this integral is given by: which allows us to separate A into two parts, one that depends on N and another that does not: Rescaling z to z =ĉz in the first expression and to z =ĉN 2 z in the second, we can decompose the integral into the difference of two piecesĪ The leading order result in the cN → 0 limit is thus given by: T D . The cases ofĪ 8 andĪ 9 are shown in the plot as well, which also turn out to be proportional to I 0 with respective coefficients 4 and -2. | 19,335.4 | 2019-03-01T00:00:00.000 | [
"Mathematics"
] |
Electron tomography of ( In , Ga ) N insertions in GaN nanocolumns grown on semi-polar ( 11 2 ̄ 2 ) GaN templates
We present results of scanning transmission electron tomography on GaN/(In,Ga)N/GaN nanocolumns (NCs) that grew uniformly inclined towards the patterned, semi-polar GaN(1122) substrate surface by molecular beam epitaxy. For the practical realization of the tomographic experiment, the nanocolumn axis has been aligned parallel to the rotation axis of the electron microscope goniometer. The tomographic reconstruction allows for the determination of the three-dimensional indium distribution inside the nanocolumns. This distribution is strongly interrelated with the nanocolumn morphology and faceting. The (In,Ga)N layer thickness and the indium concentration differ between crystallographically equivalent and non-equivalent facets. The largest thickness and the highest indium concentration are found at the nanocolumn apex parallel to the basal planes.
We present results of scanning transmission electron tomography on GaN/(In,Ga)N/ GaN nanocolumns (NCs) that grew uniformly inclined towards the patterned, semipolar GaN (11 22) substrate surface by molecular beam epitaxy.For the practical realization of the tomographic experiment, the nanocolumn axis has been aligned parallel to the rotation axis of the electron microscope goniometer.The tomographic reconstruction allows for the determination of the three-dimensional indium distribution inside the nanocolumns.This distribution is strongly interrelated with the nanocolumn morphology and faceting.The (In,Ga)N layer thickness and the indium concentration differ between crystallographically equivalent and non-equivalent facets.The largest thickness and the highest indium concentration are found at the nanocolumn apex parallel to the basal planes.C 2015 Author(s).All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.[http://dx.doi.org/10.1063/1.4914102]Semiconductor nanowires and nanocolumns (NCs) are discussed in the literature as potential building blocks for the fabrication of future nano-scaled optoelectronic or photovoltaic devices.A crucial prerequisite is, however, a precise control on their three-dimensional geometry, shape, and composition.Selective area growth (SAG) of nanowires and -columns has been proven as a promising technique to achieve these requirements. 1Accordingly, the SAG by molecular beam epitaxy on patterned substrates has demonstrated the successful realization of well-ordered arrays of GaN NCs including homogeneous axial (In,Ga)N/GaN heterostructures. 2 Furthermore, the selection of the interface orientation is crucial.The growth of (In,Ga)N on non-or semi-polar facets is pursued to circumvent the consequences of strong internal electric fields along the polar ⟨0001⟩ direction of the GaN wurtzite structure. 3The major approaches suggest using either a corresponding substrate orientation 4,5 or radial NC heterostructures with preferential incorporation of indium on the side-facet of the NCs grown on polar substrates. 6,7The microstructure and local chemical composition of those perfectly aligned NCs can be analyzed by high-resolution and analytical transmission electron microscopy (TEM).As long as the one-dimensional (1D) nanostructures reflect a high symmetry with respect to the substrate and along the growth direction, standard plan-view and cross-sectional TEM investigations are sufficiently applicable. 8On the other hand, a higher complexity in the NC geometry as well as in the internal structure demands a more complete three-dimensional structural and chemical information that cannot be extracted from various TEM projections only.Here, electron tomography has been recently applied to measure for instance the variations in surface morphology 9,10 or the three-dimensional distribution of the electrostatic potential in III-V nanowires. 11In this report, the complex morphology of ordered (In,Ga)N/GaN NCs inclined to the substrate and the spatial elemental distribution of indium is disclosed by scanning transmission electron tomography which tackles the challenge of the three-dimensional problem on the nanometer scale.The tomography conclusively reveals a core-shell like GaN/(In,Ga)N/GaN heterostructure of the columns with a (In,Ga)N insertion along the c-plane close to the apex.
The investigated NCs have been grown on a semi-polar (11 22) GaN template (fabricated on a (1 100) Al 2 O 3 substrate) by molecular beam epitaxy (MBE).The growth started with a GaN deposition for 3 h followed by the deposition of (In,Ga)N for 160 s.A final step of GaN growth for 5 min has been applied to cap the (In,Ga)N.The SAG process proceeded on a Ti nanomask deposited onto the substrate. 4The preferential growth direction is found to be perpendicular to the c-plane, what forces an inclined growth of the NCs in respect to the substrate (see Fig. 1). 5 Growth details are described in the publication of Bengoechea-Encabo et al. 5 The scanning electron microscopy (SEM) images in Fig. 1 The investigation of the (In,Ga)N insertions by electron tomography necessitates to fit the specimen geometry to the NCs and to select an adequate imaging mode.The first prerequisite is realized with a focused ion beam (FIB) integrated in a dual-beam microscope (JEOL IB4501) equipped with a versatile sample stage and a micro-manipulator (Kleindiek).The sample is mounted on a tomography holder (Fischione model 2050) with its [0001] direction approximately aligned to the TEM goniometer tilt axis (cf.Fig. 2).This geometric condition allows to minimize the thickness that has to be transmitted by electrons and enables imaging along the low indexed 1 100 and 11 20 directions in contrast to conventionally prepared cross-section samples which only allow the imaging along the low indexed [1 100] or [ 1100] direction as shown in Fig. 1(d).The challenge to machine GaN with a gallium ion beam and isolating Al 2 O 3 with a charged particle beam in general has to be noted at this point.
The second requirement is met by the selection of scanning transmission electron microscopy (STEM) imaging using the high-angle annular dark-field (HAADF) signal.HAADF STEM contrast is dominantly sensitive to the sample thickness and its chemical composition (Fig. 2(c)).In contrast, bright-field images in conventional transmission electron microscopy (CTEM) and STEM mode shown in Figs.1(d) and 2(a), respectively, are dominated by diffraction contrast.These examples highlight defects parallel to the basal planes and provide complementary information.On the other hand, contrast features perpendicular to the basal planes in Fig. 1(d) are thickness fringes.Furthermore, the monotonic variation of HAADF intensity with thickness is required for the tomography reconstruction which is not fulfilled for diffraction contrast. 12Therefore, HAADF is the preferred signal for electron tomography in materials science 13 although BF STEM images provide a better signal to noise ratio.STEM and CTEM measurements have been carried out using a JEOL 2100F microscope.
A HAADF tilt-series in steps of 3 • from −81 • to +84 • has been acquired.Figure 2(c) shows one image from this series taken along the [11 20] direction.The 270 nm thick lamella comprises an ensemble of NCs.The original template surface is inclined to the image plane.Particles at the surface are related to parasitic growth.The NC of interest (marked by an arrow) has little overlap with the template during tilting to reduce the thickness of GaN that has to be penetrated by the electron beam.This is important because the intensity I -as a prerequisite for the tomographic reconstruction -has to linearly increase with thickness t which does not hold for the lamella at the position of the template.Furthermore, the HAADF signal is sensitive to the chemical composition, i.e., I ∝ Z 1.7 , where Z is the atomic number. 14Fig. 2(c) shows, consequently, an (In,Ga)N inclusion or a higher indium concentration, respectively, at the apex of the NC (marked by the dashed ellipse).
The analysis of the spatial distribution of indium is based on the three-dimensionally reconstructed volume from the tilt series and the adequate representation of the volume both realized with the IMOD software package. 15The visualization of only one intensity value of the reconstructed data set is called an isosurface.The isosurface presentations in Fig. 3 (left column) allow the description of the NC morphology and the facet determination.A rough surface parallel to the (0001) plane is clearly resolved (Figs.3(b)-3(d)).In contrast, there are smooth and extended {10 10} side facets and several pyramidal {10 11} facets that appear near the top and at the bottom of the NC and occasionally as steps interrupting the {10 10} facets (Fig. 3(d)).Besides, the influence of the inclined growth of the NC and, consequently, the shadowing of one side from the molecular beam is reflected by the non-uniformly developed pyramidal facets.We observe distinct facets of ( 1101 The presentation of two isosurface values allows to describe the shape of the (In,Ga)N inclusion with respect to the morphology of the NC.For this purpose, the isosurface of the GaN is presented semitransparent, and the intensity value corresponding to the (In,Ga)N part is added as a red opaque isosurface. 16,17The different views of this semitransparent isosurface representation in the right column of Fig. 3 reveal the morphology of the (In,Ga)N inclusion and thus verify that the (In,Ga)N layers replicate the NC morphology, i.e., they appear as pyramidal and prism planes as well as on the basal plane.Furthermore, it is clearly recognizable that the (In,Ga)N is completely capped by a thin GaN layer.As a consequence, the GaN/(In,Ga)N/GaN NC is arranged as a core-shell like heterostructure with the limitation of being not completely closed at the part inclined to the substrate.
The detailed investigation of the spatial indium distribution is qualitatively analyzed by slices through the reconstructed volume and is summarized in considered to be a consequence of the shadowed facets due to their inclination towards the substrate and an insufficient diffusion of indium on the surface during growth.
Our electron tomography results demonstrate the spatial distribution of (In,Ga)N inclusions in GaN NCs resulting in a core-shell like structure.These findings allow to interpret the spatially resolved cathodoluminescence measurements of the NCs 5 and contribute to a more fundamental understanding of the growth mechanisms.
In summary, the qualitative indium distribution in GaN/(In,Ga)N/GaN NC heterostructures is directly observed in the tomography data and, hence, reveals the formerly expected (In,Ga)N insertions. 5The thickness of the (In,Ga)N insertions and the spatial indium distribution is strongly related to the NCs morphology.The incorporation of indium depends on the crystallographically different lattice planes.Furthermore, the particular geometry of the NCs with respect to the substrate leads to unequal thicknesses of the (In,Ga)N layers parallel to crystallographically equivalent lattice planes.Eventually, the (In,Ga)N does not form a closed shell around the NC.All these results on complex 3D structures and their spatial chemical element distribution are uniquely accessible by electron tomography in combination with an advanced sample preparation method based on FIB.
We thank Stefan Fölsch for critical reading of the manuscript.
1
Paul-Drude-Institut für Festkörperelektronik, Hausvogteiplatz 5-7, 10117 Berlin, Germany 2 ISOM and Departamento de Ingeniería Electrónica, ETSI Telecomunicación, Universidad Politécnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid, Spain (Received 22 January 2015; accepted 24 February 2015; published online 3 March 2015) (a) along the [000 1] direction and (b) in a bird's eye view along with the scheme in Fig. 1(c) illustrate the geometric situation.The NCs are homogeneously inclined with their axis toward the sample surface due to the preferred growth along the [0001] direction.The SEM image in Fig. 1(a) depicts the hexagonal symmetry of the bordering faces around the [0001] axis.The orientation relationship between the (11 22) surface and the [0001] direction is illustrated in Fig. 1(c) along with the morphological notion from the SEM results.The angle between the basal and the surface plane amounts to 58.4 • .
FIG. 1 .
FIG. 1. SEM images (a) and (b) show the hexagonal shape and the homogeneous inclination of GaN/(In,Ga)N NCs, respectively.The schematic (c) illustrates the orientation relation of NCs towards the substrate surface.(d) The CTEM bright-field image of a cross-section sample viewed along the [1 100] zone axis is sensitive to defects on basal planes.
FIG. 2 .
FIG. 2. (a) The bright-field STEM image along the [11 20] zone axis provides information about defects on basal planes based on diffraction contrast which is suppressed in (c) the HAADF STEM image in the same direction.The latter imaging mode is exploited for its chemical sensitivity to reveal the distribution of (In,Ga)N in the NC marked by an arrow.(b) The scheme depicts the used allocation of crystallographic directions and lattice planes to the analyzed NC.
FIG. 3 .
FIG. 3. Isosurface visualization: the opaque representation on the left column illustrates the morphology of the nanoobject.The right column reflects the red opaque (In,Ga)N shell that resembles the outer morphology which is presented semitransparent in this montage of two isosurfaces.
Fig. 4 .
FIG. 4. Slices through the reconstructed volume: the position of the slices 1-6 is marked in the isosurface representations on the left. | 2,840.6 | 2015-03-03T00:00:00.000 | [
"Materials Science"
] |
High-fat diet induces cardiac remodelling and dysfunction: assessment of the role played by SIRT3 loss
Mitochondrial dysfunction plays an important role in obesity-induced cardiac impairment. SIRT3 is a mitochondrial protein associated with increased human life span and metabolism. This study investigated the functional role of SIRT3 in obesity-induced cardiac dysfunction. Wild-type (WT) and SIRT3 knockout (KO) mice were fed a normal diet (ND) or high-fat diet (HFD) for 16 weeks. Body weight, fasting glucose levels, reactive oxygen species (ROS) levels, myocardial capillary density, cardiac function and expression of hypoxia-inducible factor (HIF)-1α/-2α were assessed. HFD resulted in a significant reduction in SIRT3 expression in the heart. Both HFD and SIRT3 KO mice showed increased ROS formation, impaired HIF signalling and reduced capillary density in the heart. HFD induced cardiac hypertrophy and impaired cardiac function. SIRT3 KO mice fed HFD showed greater ROS production and a further reduction in cardiac function compared to SIRT3 KO mice on ND. Thus, the adverse effects of HFD on cardiac function were not attributable to SIRT3 loss alone. However, HFD did not further reduce capillary density in SIRT3 KO hearts, implicating SIRT3 loss in HFD-induced capillary rarefaction. Our study demonstrates the importance of SIRT3 in preserving heart function and capillary density in the setting of obesity. Thus, SIRT3 may be a potential therapeutic target for obesity-induced heart failure.
Introduction
Obesity is prevalent in the Western and developing worlds. According to The International Union of Nutritional Sciences, obesity rates could be as high as 45-50% in United States by the year 2025 [1]. Obesity is considered an independent risk factor for heart failure [2]. Mortality because of cardiovascular diseases such as stroke, coronary heart disease, congestive heart failure and cardiomyopathy are strongly associated with obesity [3]. Studies have shown the important role of obesity in cardiac dysfunction, LV hypertrophy and dilatation [4,5]; however, the mechanisms by which obesity causes adverse remodelling and impairs performance of the heart are poorly understood.
Obesity has been linked to a Western diet high in fat. The heart exhibits a high rate of fatty acid oxidation to meet the tremendous need for adenosine triphosphate (ATP) for contraction. In fact, mitochondria comprise %30% of cardiac myocytes by volume [6].
Consequently, mitochondrial dysfunction negatively impacts on contractile performance of the heart. Recently, we reported that cardiomyopathy observed with another metabolic disorder, obese diabetes was associated with increased reactive oxygen species (ROS) formation in the heart because of a reduction in SIRT3 expression [7]. The NAD + -dependent protein deacetylase SIRT3 is important for mitochondrial function, in part by regulating redox balance and anti-oxidant defenses [8]. More recently, SIRT3 has gained attention in its particular roles in metabolic syndrome. SIRT3 is highly expressed in metabolic tissues such as liver and skeletal muscle. We also, observed that loss of SIRT3 in diabetic hearts was associated with capillary rarefaction [7]. Here, we tested the hypothesis that high-fat diet (HFD) impairs cardiac performance by a mechanism involving impairment of SIRT3 signalling in the heart. Wild-type (WT) and SIRT3 knockout (SIRT3 KO) mice were fed with HFD for 16 weeks to develop a diet-induce obesity (DIO) model. Using this DIO mouse model, we have examined the effects of SIRT3 deficiency on the HFD-induced cardiac dysfunction. Moreover, we have explored the potential mechanisms by which SIRT3 deficiency regulates HFD-induced loss of capillaries.
Animals
Male SIRT3 KO (#012755) and WT (#002448) mice of the same strain (129S1/SvImJ) were obtained from The Jackson Laboratory (Bar Harbor, ME, USA). Mice were housed in the institutional laboratory animal facility (LAF) with free access to food and water. Mice were maintained on a 12 hrs light-12 hrs dark cycle. All procedures conformed to the Institute for Laboratory Animal Research Guide for the Care and Use of Laboratory Animals and were approved by the University of Mississippi Medical Center Animal Care and Use Committee (Protocol ID: 1280). The investigation conforms with the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85-23, revised 1996).
High fat DIO model
Male SIRT3 KO or WT mice (8 weeks of age) were fed with normal chow diet (8640 Teklad 22/5 Rodent Diet; Harlan Laboratories (Indianapolis, IN, USA)) or a high-fat (60% kcal) diet (D12492; Research Diets (New Brunswick, NJ, USA)) for 16 weeks to produce a diet-induced obesity model. Mice were housed in the LAF and were given free access to water throughout the study.
Fasting glucose levels
Blood was obtained from mice by tail snip and blood glucose levels measured using the One Touch SureStep meter. Glucose levels were expressed as mg/dl. Mice were fasted overnight before taking blood.
Blood glucose was measured at the start of the study [all mice on normal diet (ND)] and then again after 16 weeks of either ND or HFD.
Echocardiography
Mice were anaesthetized using a mixture of isoflurane (1.5%) and oxygen (0.5 l/min.). Transthoracic two-dimensional M-mode echocardiography was performed using a Visual Sonics Vevo 770 Imaging System (Toronto, ON, Canada) equipped with a 707B high frequency linear transducer. Short-axis imaging was taken as M-mode acquisition for 30 sec. End-systolic and end-diastolic dimensions, end-systolic and end-diastolic volumes (ESV and EDV), stroke volume, were recorded to calculate the per cent fractional shortening (FS%) and ejection fraction (EF%). Data were analysed using Vevo 770 Analytic Software (FujiFilm VisualSonics Inc., Toronto, Ontario, Canada).
Immunostaining
Ventricular sections were prepared using a cryostat on microscope slides. Formaldehyde fixed sections were rinsed in PBS and incubated with methanol for 10 min. at À20°C. Slides were washed and then incubated with blocking buffer (5% goat serum/0.3% Triton X-100 in 19 PBS) for 1 hr. Sections were incubated with primary antibody anti-NG2 (1:200) and IB4 (1:50) overnight at 4°C in PBS. Slides were washed and incubated with secondary AlexaFluor488-conjugated anti-rabbit IgG antibody for the NG2 antibody for 1 hr at room temperature in the dark. Sections were incubated with DAPI, washed and cover-slipped with Prolong Gold Antifade Reagent. Images were captured at 209 using an EVOS fl digital inverted fluorescence microscope. Fluorescence was quantified using image-analysis software (Image J, NIH Bethesda, Maryland, USA). Ten randomly chosen images from three sections per heart were analysed.
Cardiac lipid staining
Cut frozen sections were fixed in formalin, rinsed with 60% isopropanol and stained with freshly prepared Oil Red O working solution for 15 min. After rinsing with isopropanol, nuclei were stained with haematoxylin and sections were cover-slipped using Lerner Aqua-Mount.
Cell proliferation assay
Microvascular endothelial cells were isolated by cloning from lungs of WT and SIRT3 KO mice and confirmed using VWF and IB4 antibodies. For the proliferation measurement, endothelial cells were plated at the same density (3000 cells per 0.32 cm 2 ) and cultured in complete endothelial cell growth media with 10% FBS for 72 hrs. The proliferative capacity of cultured endothelial cells was assayed using a cell proliferation (MTT) kit (11465007001) according to the manufacturer's instructions (Roche Diagnostic). for 15 min. at room temperature in the dark. Slides were washed and cover-slipped with prolong gold antifade reagent. Relative density of DHE (red) fluorescence was quantified by measuring six random microscopic fields per mouse heart using image-analysis software (Image J, NIH).
Western blot analysis
Ventricles were collected and sonicated in a RIPA lysis buffer supplemented with protease inhibitor cocktail. Homogenates were centrifuged at 16,200 9 g at 4°C for 15 min. Equal amounts of proteins in the supernatants were separated by SDS-PAGE and transferred to polyvinylidene difluoride (PVDF) membranes. Membranes were blocked with 5% w/v powder milk in PBS with 0.1% v/v Tween 20 (PBST) for 1 hr and then incubated with primary antibodies for SIRT3 (1:1000), HIF-1a (1:1000), HIF-2a (1:1000) in 1% w/v BSA in PBST overnight at 4°C. After washing, blots were incubated with horseradish peroxidase conjugated secondary antibody (1:1000) and protein bands visualized using the enhanced chemiluminescence substrate. Blots were stripped using Restore Western Blot Stripping Buffer following the manufacturer's protocol and reprobed (1:1000) for GAPDH antibody as a loading control. Protein bands were quantified by densitometry with image acquisition and analysis software (TINA 2.0, University of Manchester, United Kingdom).
Statistical analysis
Unless otherwise noted, data are expressed as mean AE SEM. Statistical significance of differences between two values was determined by a Student's t-test. Two-way ANOVA followed by Tukey's multiple comparisons test was used for multiple comparisons and to assess interaction between SIRT3 KO and HFD, the significance of which is denoted as P I . The significance of the effect of diet or SIRT3 loss is denoted P D and P S , respectively. P ≤ 0.05 was considered statistically significant. Nonsignificance (P > 0.05) is denoted as ns.
Results
High-fat diet reduces SIRT3 expression in mouse heart and leads to cardiac hypertrophy We first examined SIRT3 expression in the hearts of HFD mice. Feeding mice with HFD for 16 weeks led to a significant reduction in SIRT3 expression in the heart. SIRT3 levels were decreased by 32% in the HFD group compared to the ND group (Fig. 1A). We therefore proceeded to compare the consequences of HFD on the heart of WT to SIRT3 KO mice. As shown in Figure 1B, we confirmed that SIRT3 KO mice do not express this protein in heart whole tissue or mitochondria extracts.
High-fat diet caused comparable increases in heart weight and cardiac hypertrophy, as indexed by the heart weight to tibia length ratio, in both WT and SIRT3 KO mice (Table 1). For WT mice, there was an increase in both left ventricular end-diastolic and end-systolic dimensions (LVDD and LVSD), and the calculated EDV and ESV. This finding along with the increase in heart weight/tibia length ratio is consistent with an eccentric pattern of LV hypertrophy. Although hearts of SIRT3 KO mice on ND were not hypertrophied, LVDD, LVSD, EDV and ESV, were greater in the SIRT3 KO mice fed ND than the WT mice on the same diet, suggesting fundamental differences in heart structure between WT and SIRT3 KO mice. SIRT3 influenced LVDD and LVSD, as well as EDV and ESV; HFD and absence of SIRT3 each increased these parameters and there was no further increase in mice when these factors were combined (P I ≤ 0.05); in fact, there appears to be a more complex LVDD and EDV response when SIRT3 KO mice are fed a HFD. This interpretation is based on a comparison of the LVDD and EDV values for WT and SIRT3 KO mice fed HFD. Relative to the values seen in WT mice on ND, there was a tendency for the increase in these values to be less in SIRT3 KO mice fed a HFD than WT mice fed a HFD (Table 1). There were no differences in heart rate because of loss of SIRT3 or diet.
Loss of SIRT3 exacerbates HFD-induced ROS formation
Feeding mice a HFD for 16 weeks lead to an accumulation of lipids in the hearts of both WT and SIRT3 KO mice (Fig. 2). In addition, hearts from mice fed with HFD for 16 weeks exhibited a significant increase in DHE staining, indicating increased ROS levels, compared to mice fed with ND ( Fig. 3A and B). There was a trend A B Fig. 1 HFD decreases SIRT3 expression in the heart. (A) WT mice were fed ND or HFD for 16 weeks. Hearts were extracted and Western blot analysis was performed on ventricular lysates. The blot was probed for SIRT3 expression. The blot was stripped and reprobed for GAPDH as a loading control. Bar graph shows quantification of expression. Values are mean AE SEM, n = 6 mice per group. **P < 0.01. (B) Confirmation that KO mice do not express SIRT3. Protein extracts from the ventricles and isolated mitochondria of WT and SIRT3 KO mouse hearts were probed by Western analysis for SIRT3 (28 kD). For heart extracts, GAP-DH (37 kD) was probed for to demonstrate equal protein loading; for mitochondria extracts, Complex I subunit NDUFS1 was used as the loading control (~75 kD) for mitochondrial samples. Results are from 3 WT and 3 KO mice. towards increased ROS levels in hearts of SIRT3 KO mice on ND compared to WT mice on ND, but this did not reach significance. However, SIRT3 KO mice fed HFD exhibited a significant increase in DHE staining in the heart (Fig. 3A and B). No interaction was observed between HFD and SIRT3 loss on ROS formation in the heart; however, the consequences of HFD on oxidative stress in the heart were enhanced by SIRT3 loss.
High-fat diet-induced obesity but not a diabetic state
A diet of 60% fat caused a notable increase in bw in WT mice over the course of 16 weeks (Fig. 4A). Body weight increased as well in SIRT3 KO mice on HFD, although to a lesser degree (P I ≤ 0.01). In contrast, others reported that feeding SIRT3 KO mice a so-called (10) Western diet of high fat (42% kcal) and high carbohydrate (42.7% kcal), more than 29 the carbohydrate content of our diet, accelerated obesity and led to development of the metabolic syndrome [9]. Mitochondrial dysfunction at basal conditions in SIRT3 KO may explain why SIRT3 KO mice gained less bw with HFD (mitochondrial stress) than WT mice as fatty acid oxidation would be more prominent in our study. No difference was seen over the 16 weeks in bw between WT and SIRT3 KO mice fed ND. Neither HFD nor SIRT3 KO induced diabetes. Fasting glucose levels were not increased in either WT or SIRT3 KO mice on HFD compared to ND (Fig. 4B). Others have noted that the ability of HFD to induce diabetes in mice is strain-dependent [10]. Of note, SIRT3 KO mice exhibited significantly lower fasting blood glucose levels than WT mice on the same diet (P ≤ 0.01); lower blood glucose levels may explain as well why SIRT3 KO mice gained less weight with HFD than WT mice.
SIRT3 deficiency promotes high-fat diet-induced cardiac dysfunction
Mice fed a HFD exhibited a modest decline in cardiac function. As seen in Figure 5, EF and FS were significantly decreased in WT mice fed HFD compared to mice fed ND for 16 weeks. Knocking out SIRT3 under ND also decreased cardiac function compared to WT mice fed ND. High-fat diet treatment further reduced cardiac performance in SIRT3 KO mice (Fig. 5) to an extent that on average was greater than for WT mice; however, no interaction between HFD and SIRT3 loss was observed.
High-fat diet and SIRT3 loss reduce HIF-1a and HIF-2a expression in the heart
We previously reported that impaired HIF signalling is associated with reduced myocardial capillary density in obese diabetic mouse model [11]. Hence, we examined HIF levels in diet-induced obesity model. As seen from Fig. 6A and B, cardiac HIF-1a and HIF-2a levels were reduced in mice fed HFD compared to mice fed ND for 16 weeks. Hypoxia-inducible factor-1a and -2a levels were also significantly reduced in hearts of SIRT3 KO mice compared to WT mice ( Fig. 6A and B). Hypoxia-inducible factor-1a levels were further decreased in SIRT3 KO-DIO mice compared to SIRT3 KO mice ( Fig. 6A and B). Levels of HIF-2a tended to be further reduced by HFD in SIRT3 KO hearts, but this did not reach significance. No interaction between HFD and SIRT3 loss on HIF-1a and -2a levels was demonstrated.
High-fat diet and SIRT3 KO cause capillary rarefaction and loss of pericytes
We next explored whether impairment of HIF-a signalling affected the vasculature of the heart as cardiac dysfunction is associated with decreases in capillary density and coronary blood flow. Wild-type mice fed with HFD showed a marked decrease in capillary density compared to WT mice fed ND ( Fig. 7A and B). Intriguingly, greater loss of capillary density was observed in SIRT3 KO mice, which was not further enhanced by HFD (P I ≤ 0.01). This finding indicates that SIRT3 loss contributes to the decrease in capillaries seen with HFD. As Figure 7C shows, we observed that deletion of SIRT3 markedly reduced proliferation of endothelial cells, which likely contributed to the reduction in capillaries with HFD.
Loss of pericytes has been shown to promote capillary rarefaction in pancreatic islets and skeletal muscle, thus leading to or exacerbating glucose intolerance [12,13]. As Figure 8A and B show, HFD also resulted in a dramatic loss of pericytes in WT mice. The loss of pericytes was even greater with SIRT3 KO mice and no further loss of pericytes was observed in SIRT3 KO-DIO mice (P I ≤ 0.01), suggesting a critical contribution of reduced SIRT3 in the actions of HFD on pericytes.
Discussion
In our study, we report the following novel observations: (i) HFD reduced SIRT3 levels in the heart and caused cardiac systolic dysfunc- tion; (ii) either HFD or SIRT3 KO increased ROS levels, which was associated with impaired HIF signalling and reduced myocardial capillary density and (iii) combined HFD and SIRT3 KO synergistically elevated ROS levels and caused greater cardiac dysfunction. However, no interaction (i.e. antagonism or synergism) was observed between HFD and SIRT3 loss on cardiac ROS production, systolic function or hypertrophy. This observation indicates that HFD acted also by means other than SIRT3 repression to adversely affect heart function and structure. Notably, both HFD and SIRT3 loss produced a marked reduction in capillary and pericyte density that was not further increased by their combination, suggesting a common underlying mechanism.
Most studies have reported that HFD induces cardiac dysfunction, but the mechanism by which this occurs is unclear. In this study, we hypothesized that HFD reduces SIRT3 expression, which would lead to increased ROS production and cardiac dysfunction. Loss of SIRT3 was correlated with increased ROS levels and mitochondrial dysfunction in the heart [14,15], while overexpression of SIRT3 was shown to protect cardiac myocytes from oxidative stress and apoptosis [16][17][18]. We tested our hypothesis by using SIRT3 KO mice and found that loss of SIRT3 showed similar cardiac phenotype as HFD with increased ROS production and modest cardiac dysfunction. We therefore further tested our notion that impaired mitochondrial function exacerbates diet-induced heart failure by feeding SIRT3 KO mice a HFD, creating a model of metabolic/mitochondrial stress. This resulted in even higher ROS levels and a greater decline in cardiac function. Our data indicate that HFD only induces modest cardiac dysfunction and additional factors such as mitochondrial dysfunction are required to promote heart failure in obesity.
While we were doing our study, another group reported that HFD switches cardiac metabolism from glucose to fatty acid oxidation and showed reduced SIRT3 expression as the underlying cause [19]. The basis for the decrease in SIRT3 levels with HFD is not defined; however, a recent study showed that ROS-mediated NF-jB activation down-regulates SIRT3 levels in cardiomyoblasts [20]. Thus, decreased SIRT3 could promote further loss of SIRT3 via a positive feedback mechanism involving ROS.
SIRT3, a NAD + dependent deacetylase, belongs to class III histone deacetylases. SIRT3 is a mitochondrial protein whose increased expression has been shown to be associated with longevity of humans [21,22]. Older individuals have about a 40% reduction in SIRT3, and the health benefits of older patients were accompanied by elevated levels of SIRT3 [23]. Loss of SIRT3 has been related to cardiac hypertrophy in ageing [14,24]. Thus, diet-induced obesity SIRT3 KO (SIRT3 KO-DIO) mice may be useful as a novel model to study HFD-induced heart failure in ageing.
In our study, HFD and SIRT3 KO mice showed increased levels of ROS in the heart (Fig. 3). In addition to a direct damaging effect on the heart, increased ROS may also impair HIF signalling in the heart. HIFs are transcription factors that are activated under hypoxic condition. Two isoforms, HIF-1a and HIF-2a, have similar structure and function (bind to the same hypoxia responsive element). Although they vary in their tissue specific expression pattern, both are expressed in the heart. We observed that HFD or SIRT3 KO reduced HIF-1a and -2a signalling in the heart (Fig. 6). Increased ROS levels and a-ketoglutarate because of reduced activity of Krebs cycle, both associated with SIRT3 deficiency, have been linked to reduced HIF signalling activation [25]. However, our evidence indicates that HFDreduced HIF signalling in the heart is largely independent of SIRT3 loss (Fig. 6).
High-fat diet or SIRT3 KO reduced density of endothelial cells and pericytes in the heart (Figs 7 and 8). Reduction in HIF-1a and -2a levels may explain in part the myocardial capillary loss in mice fed with HFD, as HIF signalling is important for angiogenesis and endothelial cell survival [26]. Impaired HIF signalling would down-regulate angiogenesis, ultimately resulting in reduced capillary density in the heart. Recently, histological analysis of non-ischaemic myocardium from patients undergoing coronary artery bypass surgery showed that obesity was associated with lower coronary microvascular density [27]. A previous study from our lab showed that overexpression of angiopoietin 1 improved capillary density in ischaemia-induced obese db/db mouse hearts via up-regulating HIF-1a expression [11]. In subsequent studies, we demonstrated that overexpression of apelin improved myocardial capillary density and alleviated diabetic cardiomyopathy via SIRT3 up-regulation in db/db mice [7].
Overall, our results indicate that HFD induces an inappropriate response in the heart as there is a reduction in pro-angiogenic HIF signalling in the face of a loss in capillary density. But ROS and loss of HIF signalling alone cannot explain impaired angiogenesis seen with HFD. Our evidence indicates that HFD impairs angiogenesis via repression of SIRT3 ( Fig. 7A and B). Impaired microvascular endothelial cell proliferation (Fig. 7C) and loss of pericytes (Fig. 8) were likely contributing factors.
A correlation between reduced vascular density and cardiac dysfunction has been previously reported. For instance, a recent study showed reduced capillary density and cardiac dysfunction in mice subjected to transverse aortic constriction [28]. However, while our study results suggest that loss of capillary density contributed to cardiac dysfunction, other factors such as ROS production likely contributed as well. This is because HFD in SIRT3 KO mice did not further reduce capillary density, but cardiac function was made worse. Our data from the SIRT3 KO mouse indicate the relationship between (total knockout of) SIRT3 and cardiac dysfunction is complex. For example, evidence for the ROS component of the pathway suggests there is a subset of SIRT3 KO mice that developed exacerbated increases in ROS levels, and further reductions in HIF protein levels. Increased ROS and impaired HIF signalling resulting from SIRT3 loss may have contributed to impaired vascularization; however, cell-type specific responses are likely involved as well, including impaired endothelial cell proliferation and loss of pericytes. Nor does there seem to be a straightforward link between impaired contractile function with SIRT3 KO and reduced vascularization. Here also celltype specific factors in cardiac myocytes, such as the status of energy stores and contractile proteins, must be considered. Nevertheless, our data highlight the role of a reduction in SIRT3 as a contributing factor to the pathological effects of HFD on the heart.
A limitation of our study is that we did not assess the metabolic components that may have contributed to the decline in contractile function along with increased oxidative stress. In this regard, SIRT3 KO mice did gain slightly less weight on HFD than WT mice (Fig. 4A). Those experiments exceed the focus of this study and are currently being undertaken. It seems unlikely though that HFD by itself would compromise the ability of the heart to generate sufficient ATP to sustain contractile function. An additional consideration is that the SIRT3 KO is a whole body knockout. The finding that fasting glucose is lower in the KO versus WT mice suggests cells outside the myocardium are likely playing a role in the KO phenotype and possibly the response to HFD. Another limitation is that the data also do not eliminate the possibility for a role of cytoplasmic or nuclear SIRT3.
In conclusion, our findings suggest that HFD and SIRT3 loss compromise heart function and increase production of ROS levels in the heart. However, although HFD reduced SIRT3 levels in the heart, the adverse effects of HFD on cardiac remodelling and function cannot be attributed solely to SIRT3 loss. Our results also show that applying metabolic stress by feeding HFD to SIRT3 KO mice further impairs cardiac function. This is likely because of enhanced ROS generation resulting from increased mitochondrial oxidation with HFD, exacerbated by the absence of SIRT3. Our study thus identifies SIRT3 as a potential therapeutic target for preventing obesity-induced cardiac dysfunction. | 5,678.4 | 2015-03-17T00:00:00.000 | [
"Biology",
"Medicine"
] |
p -TsOH Catalyzed synthesis of 2-arylsubstituted benzimidazoles
p -TsOH (20 mol%) was used to be a catalyst for the synthesis of
Introduction
Benzimidazole derivatives exhibit significant activity against several viruses such as HIV, 1 herpes (HSV-1), 2 RNA, 3 influenza, 4a and human cytomegalovirus (HCMV) 1 .The widespread interest in benzimidazole-containing structures has promoted extensive studies for their synthesis.While many strategies are available for benzimidazole synthesis, [7][8][9][10][11][12][13][14][15][16] there are two general methods for the synthesis of 2-substituted benzimidazoles.One is the coupling of phenylenediamines and carboxylic acids 4b or their derivatives (nitriles, imidates, or orthoesters), 5 which often requires strong acidic conditions, and sometimes combines with very high temperatures or microwave irradiation. 6The other way involves a two-step procedure that includes the oxidative cyclo-dehydrogenation of Schiff bases, which are often generated from the condensation of phenylenediamines and aldehydes.Various oxidative and catalytic reagents such as sulfamic acid, 7 I 2 , 8 DDQ, 9 Air, 10 Oxone, 11 FeCl 3 •6H 2 O, 12 In(OTf) 3 , 13 Yb(OTf) 3 , 14 Sc(OTf) 3 , 15 KHSO 4 , 16 IL, 17 have been employed.Because of the availability of a vast number of aldehydes, the condensation of phenylenediamines and aldehydes has been extensively used.While many published methods are effective, some of these methods suffer from one or more disadvantages such as high reaction temperature, prolonged reaction time, and toxic solvents etc.Therefore, the discovery of mild and practicable routes for synthesis of 2-substituted benzimidazoles continues to attract the attention of researchers.
In recent times, we have shown KHSO 4 16 and IL 17 can be used as promoters and catalysts for the synthesis of benzimidazoles.So we tried to synthesize benzimidazoles using an organocatalyst.In this paper, p-TsOH 18 was used for the synthesis of 2-arylsubstituted benzimidazoles by the condensation of aryl aldehyde with o-phenylenediamine (Scheme 1).
Scheme 1
In order to establish the optimum condition for this reaction, various ratios of p-TsOH were examined.Using o-phenylenediamine and p-chlorobenzaldehyde as a model, p-TsOH was added in various ratios in DMF at 80°C.As shown in Table 1, very little of the desired products was obtained in the absence of p-TsOH and the best yields were obtained with 20% p-TsOH.Next the effect of solvent was examined.As shown in Table 2, different solvents can result in different yields.Clearly DMF stands out as the solvent of choice, with its fast conversion, high yield and low toxicity.
To test the general scope and versatility of this procedure in the synthesis of a variety of 2substituted benzimidazole, we examined a number of differently substituted arylaldehydes.We were pleased to find that moderate to high yields were obtained in the condensation of ophenylenediamine with aldehydes.As Table 3 shows that aryl aldehydes bearing both electrondonating and electron-withdrawing substituents gave the desired benzimidazoles in good yields.Heteroaryl aldehyde (Entry 8) and α,β-unsaturated cinnamicaldehyde (Entry 9) also gave acceptable yields.Whereas little product was obtained when aliphatic aldehydes were used.In conclusion, we have developed a simple, one-pot synthesis of 2-arylsubstituted benzimidazoles by the condensation of o-phenylenediamine with arylaldehyde catalyzed by p-TsOH.Simple and convenient procedure, easy purification and shorter reaction time are the advantageous features of this method
Table 2 .
Effect of solvent in condensation of o-phenylenediamine with 4-chlorobenzaldehyde a All yields refer to isolated product.b Reaction time: 10 min.
Table 3 .
Synthesis of benzimidazoles catalyzed by p-TsOH All yields refer to isolated product, characterized by melting points, 1 H NMR. a | 789.6 | 2007-06-28T00:00:00.000 | [
"Chemistry"
] |
Positional cues and cell division dynamics drive meristem development and archegonium formation in Ceratopteris gametophytes
Fern gametophytes are autotrophic and independent of sporophytes, and they develop pluripotent meristems that drive prothallus development and sexual reproduction. To reveal cellular dynamics during meristem development in fern gametophytes, we performed long-term time-lapse imaging and determined the real-time lineage, identity and division activity of each single cell from meristem initiation to establishment in gametophytes of the fern Ceratopteris richardii. Our results demonstrate that in Ceratopteris gametophytes, only a few cell lineages originated from the marginal layer contribute to meristem initiation and proliferation, and the meristem lacks a distinguishable central zone or apical cell with low division activity. Within the meristem, cell division is independent of cell lineages and cells at the marginal layer are more actively dividing than inner cells. Furthermore, the meristem triggers differentiation of adjacent cells into egg-producing archegonia in a position-dependent manner. These findings advance the understanding of diversified meristem and gametophyte development in land plants.
T he lifecycle of land plants alternates between two generations: the asexual sporophyte and the sexual gametophyte 1,2 . In seed plants, the sporophytes, representing the dominant generation, develop pluripotent apical meristems (including shoot apical meristems and root apical meristems), which sustain the growth and development of plant body during their life span 3,4 . The gametophytes of seed plants are greatly reduced in size, devoid of a meristem, and dependent on their sporophytes 1,5,6 . By contrast, in seed-free vascular plants, including ferns, the gametophyte and sporophyte are mutually independent generations [7][8][9] . Fern gametophytes develop meristems that renew themselves through continuous cell division and produce new cells that differentiate into a photosynthetic prothallus or into cells that form the gametangia (egg-bearing archegonia and sperm-bearing antheridia) [10][11][12] . The timing of meristem initiation and maintenance plays a key role in shaping gametophyte morphology 10,13 . Compared to the wellcharacterized cell behaviors and regulatory circuits identified in the meristems of sporophytes in seed plants, especially in Arabidopsis 3,14-18 , the mechanisms underlying meristem development in fern gametophytes are just beginning to be understood in only a few fern species [9][10][11]13,[19][20][21][22][23][24][25] .
The homosporous fern Ceratopteris richardii (hereafter 'Ceratopteris') has been developed and widely used as a model system for studying many aspects of evolutionary and developmental questions in ferns 11,[20][21][22][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40] . Like many other homosporous ferns, the sex of Ceratopteris gametophytes is determined by a pheromone called antheridiogen 11,37,41 . A spore germinates and then develops into a hermaphroditic or male gametophyte, depending on the absence or presence of antheridiogen 11,37 . Male gametophytes are ameristic, differentiating multiple antheridia that produce sperm 11,19,22,39 . The hermaphroditic gametophyte develops one multicellular meristem, which was also called the lateral meristem, marginal meristem, or notch meristem 11,13,19,22,39 . Once the multicellular meristem is established, the egg-bearing organ archegonia initiate next to the meristem notch until fertilization 11,19,22 . To date, the dynamic cell behaviors responsible for meristem initiation and maintenance in Ceratopteris gametophytes, and the cellular mechanism by which the meristem promotes organogenesis (e.g., archegonium formation) have yet to be identified. Cell lineages and spatiotemporal patterns of cell divisions during gametophyte development are also completely lacking. For these reasons, we have generated Ceratopteris stable transgenic plants that allow the labeling of each individual cell (nucleus) and performed long-term time-lapse confocal imaging during meristem initiation and proliferation in haploid gametophytes. We then established a computational pipeline to quantitatively determine the lineage, identity, and division activity of each cell throughout the growth of gametophytes. Through mechanical perturbations, we also revealed cell fate re-specification and cell-cell communications during the de novo formation of meristems and archegonia. Our work reveals the cellular basis of a multicellular meristem in gametophytes and help in understanding the diversified meristem development and organ formation in land plants.
Results
Stably transformed Ceratopteris lines expressing a fluorescent nuclear marker. To determine cellular dynamics and cell cycle progression during meristem initiation and proliferation in gametophytes, we generated a fluorescent reporter that marks each nucleus in Ceratopteris gametophytes except for gametes ( Fig. 1; Supplementary Fig. 1). Specifically, we stably transformed and identified Ceratopteris transgenic plants with a Histone 2B-GFP (H2B-GFP) reporter 42 under the control of endogenous 5' promoter and 3' terminator of the Ceratopteris HAIRY MERISTEM (CrHAM) gene 43,44 (See Methods for details). Through laser scanning confocal imaging, we found that this H2B-GFP reporter was uniformly expressed in the nuclei of both male and hermaphroditic prothalli in the transgenic lines, throughout their developmental stages ( Fig. 1; Supplementary Fig. 1; Supplementary Movies 1 and 2). Starting from 6 days after inoculation (DAI), the meristem in hermaphroditic gametophytes actively and continuously proliferated, resulting in a notch at one side of the prothallus (Fig. 1a-d). The notch separated two wings of the prothallus, with one developed first, evident in Fig. 1b and c. At these stages, the H2B-GFP reporter was highly expressed in every single nucleus of the multicellular meristems ( Fig. 1a-d). The H2B-GFP reporter also clearly labeled all the nuclei from the differentiated cells that compose archegonia ( Fig. 1e-g; Supplementary Movie 3), antheridia ( Fig. 1h-j), and rhizoids (Fig. 1b, c). In addition, the morphology of gametophytes from these transgenic lines was comparable to that of wild-type hermaphroditic gametophytes ( Supplementary Fig. 2), suggesting that the transgene did not interfere with normal growth and development of gametophytes. At least three independent transgenic lines showed comparable expression levels and patterns in the Ceratopteris gametophytes, and one line was included in the following experiments ( Fig. 1; Supplementary Fig. 3). Together these results demonstrated that the reporter is suitable for studying cell fates and lineages in fern gametophytes.
Long term time-lapse imaging reveals dynamic cell behaviors in Ceratopteris gametophytes from formation to maintenance of a multicellular meristem. With the established transgenic reporter line, a non-invasive time-lapse imaging experiment was performed to reveal dynamic cell behaviors during the formation and proliferation of the multicellular meristem. Spores of the H2B-GFP transgenic reporter line were inoculated and germinated on solidified growth medium (FM plates, see Methods for details). At 5 days after inoculation (DAI) that corresponded to an intermediate stage between the G3 and G4h defined by Conway and Di Stilio 19 , hermaphroditic gametophytes were imaged as the first time point (0 h) using laser scanning confocal microscopy (Fig. 2a). The full stack of optical sections from top to bottom of the living gametophytes were taken and the Z-projection of the sections was generated to visualize all GFPlabeled nuclei from each prothallus (Fig. 2a). After the first time point was taken, the FM plates of gametophytes were returned to the growth chamber, cultured under the same conditions, and the same samples were imaged again six hours later as the second time point (6 h) (Fig. 2b). This live-imaging process was continuously repeated until a prothallus fully developed (Fig. 2c-v), with a total of 22 time points acquired for each gametophyte, all at six-hour intervals (Fig. 2a-v). Three independent hermaphroditic gametophytes were live-imaged at the same time with the same interval and duration ( Fig. 2
; Supplementary Figs. 4 and 5).
A cell lineage map of the gametophyte and meristem. To determine the fate of each cell of the gametophyte (starting at 5 DAI) as the prothallus developed, we performed two-dimensional (2D) image analysis to segment and detect each nucleus from the images taken at different time points and automatically label each segmented nucleus with a unique ID. Following that, we traced the fate and descendants of each labelled nucleus of all the three gametophytes from the first time point (0 h) to the 19th time point (108 h), when the meristem had been fully established and two archegonia were evident in each gametophyte (Fig. 3a- As shown in the color-coded lineage map ( Fig. 3a-s), it appears that only a few progenitor cells (three in Fig. 3a, pointed by solid arrowheads) at the marginal/outermost layer of the young gametophyte (at 5 DAI, 0 h in the time-lapse) contributed to the vast majority of meristem cells of the fully developed prothallus (108 h in the time-lapse). Their lineage progression was indicated by the continuously expanding sectors over time ( Fig. 3a-s), resulting in the dominant yellow, red, and blue sectors at 108 h (Fig. 3s). In addition, a few adjacent cells (pointed by open arrowheads in Fig. 3a) also divided and contributed to prothallus proliferation (Fig. 3a-s) as their divisions were outpaced by the meristem progenitor cells (pointed by solid arrowheads) and their descendants. In contrast, cells from all the other lineages in each gametophyte did not or rarely divide over the 108h-time frame, representing the meiotically inactive region in the developing gametophyte ( Fig. 3a-s). Three independent gametophyte samples were analyzed ( Cell division activity within meristems is regulated in a lineage-independent and position-dominated manner. Our results demonstrate that the meristem is the active division site and localized source of new cells in developing hermaphroditic gametophytes. However, within the meristem, it is not clear whether the cells with high division activity originate from one or a few progenitor cells or whether cell division activities are independent of their origins. To address this question, we further analyzed the three progenitor cells that eventually contributed to the majority of meristem cells (or called meristem progenitor cells) (Figs. 3a-s and Many siblings derived from the same parental cells still showed distinct division activities (Fig. 6b). These results suggest that within the meristem, cell lineages do not directly control division activity of each individual cell. Furthermore, we analyzed the relationships between the cell position and division activity in the same population. Within each sector, marginal cells generally showed higher division activity than inner cells during the 30-108 h time period (Fig. 6b). For example, after one round of division, a new cell located at the marginal layer (solid circles) had higher division activity than its sibling located at the inner layer (circles filled with diagonal stripes) during the following time period (Fig. 6b). These results also aligned with the quantified division events within 54-108 h time frame, the maturation phase (Supplementary Fig. 10; Supplementary Data 5), suggesting that position plays a role in determining division activity of meristem cells during meristem proliferation. Such patterns were consistent among all the analyzed meristem progenitor cells from three independent gametophyte samples (Fig. 6a, b; Supplementary Figs. 11a, b, 12a, b), suggesting that cell division within the meristems is regulated by positional cues, likely initiated from the marginal layer of the meristem.
Origin and fate of cells forming differentiated archegonia. In Ceratopteris gametophytes, once a multicellular meristem is established, egg-forming archegonia always initiate near the meristem notch ( Supplementary Figs. 4p-v, 5o-v) initiated and then followed similar cell division patterns. We then examined the origins of these archegonia. To our surprise, although they both were specified next to the center of the meristem, these two sequentially formed archegonia had different origins (Fig. 3k, p; Supplementary Figs. 5k, q, 6l, p). The first archegonium of each gametophyte belonged to one cell lineage that was adjacent to the meristem but did not mainly contribute to meristem proliferation ( To further test the dynamics of archegonium initiation and maturation, we quantified the distance between the meristem notch and the archegonia during gametophyte development ( Fig. 7a-g; Supplementary Data 4). We found that the first archegonium was specified at the place close to the center of the meristem notch, with an average distance of 35.8 ± 3.5 μm (mean ± standard error, n = 3) away from the initiation site of the notch (indicated by 'a' in Fig. 7a). After that, the distance between the meristem notch and the center of the first archegonium gradually increased (Fig. 7b, d, f). Interestingly, the second archegonium also initiated adjacent to the center of the meristem notch, with an average distance of 36.7 ± 2.1 μm (mean ± standard error, n = 3) away from the notch, and then gradually moved away from the meristem (Fig. 7c, e, g). These quantitative results demonstrate that archegonia, regardless of their lineages, are initiated in close proximity to the meristem notch.
Cell fates and cell division dynamics during ablation-induced de novo formation of a new meristem. In addition to normal growth, we further dissected how perturbing the gametophyte Supplementary Fig. 19). b labels the clonally related cells with the same color in each family tree (as yellow, red and blue shown in (a) and highlights the positional information of each cell (as solid circles and circles filled with diagonal stripes shown in (a). During the 30-108 h time frame, cells from the same progenitor cell display variable division activities, and cells located at the marginal layer (solid circles) of the meristem show more division events than the cells located at the inner (or submarginal) layer (circles filled with diagonal stripes). All the meristem progenitor cells in the three independent gametophyte samples were analyzed (in Fig. 6 Earlier studies showed that, once their multicellular meristems were removed, prothalli in several fern species (such as Anemia phyllitidis, Pteris longigolia and Gymnogramme chrysophyll) were able to regenerate new meristems on the remaining meristem-less prothalli [45][46][47] . We performed microsurgical experiments to ablate only a few cells in the meristems or at the initiation sites of meristems in wild-type hermaphroditic gametophytes in Ceratopteris (n = 5). Once the meristem was ablated, at least one new meristem regenerated from each prothallus ( Supplementary Fig. 13). These results not only showed that Ceratopteris prothalli possess the regeneration capacity similar to other ferns, but, more importantly, demonstrated a robust approach for us to determine cell behaviors during de novo formation of new meristems in gametophytes.
We then examined real-time cell dynamics during the ablationinduced new meristem formation by live-imaging Ceratopteris gametophytes expressing the H2B-GFP reporter (Fig. 8a-p). At 8 Fig. 7 The dynamic distance between archegonia and the meristem notch during gametophyte development. a The calculation of distance between a meristem notch (or the location where the meristem notch will be formed) (point a) and the center of an archegonium (point b) in one representative image. The image in (a) is the part of one optical section of the sample shown in Fig. 2m. Scale bar: 50 μm. b-g Dynamic distances between archegonia and meristem notches. Y axis: Distance between a meristem notch and an archegonium. X axis: six consecutive time points starting from initiation (shown as 0 h in the graphs) of the archegonium. The distance in b and c was calculated based on the confocal images of the gametophyte shown in Fig. 2, and the first and second archegonia started to initiate at 54 h and 84 h, respectively. The distance in d and e was calculated based on the confocal images of the gametophyte shown in Supplementary Fig. 4, and the first and second archegonia started to initiate at 54 h and 90 h, respectively. The distance in f and g was calculated based on the confocal images of the gametophyte shown in Supplementary Fig. 5, and the first and second archegonia started to initiate at 60 h and 84 h, respectively. The source data for (b-g) are included in Supplementary Data 4.
DAI when the meristem was distinguishable, the hermaphroditic gametophyte (n = 3) was imaged before and immediately after ablating a few cells within the meristem (as 0 h). Time-lapse imaging was then performed at six-hour intervals, with full stacks of optical sections acquired (with one representative sample shown in Fig. 8). The Z-projection view showed that disruption of the established meristem led to de novo formation of a new meristem (indicated by arrows in Fig. 8g-p). As the control, a similar microsurgical perturbation was performed to ablate a few cells at a non-meristematic region of the hermaphroditic gametophyte (n = 3). After wounding, the time-lapse imaging was performed with the same interval and duration ( Supplementary Fig. 14a-p). The mechanical ablation of non-meristematic regions did not cause any noticeable change of meristem proliferation and notch formation in these gametophytes ( Supplementary Fig. 14a-p), demonstrating that the switch of growth patterns is not a response to wounding in general but is specific to ablation of the meristem.
We also performed nucleus segmentation, lineage analysis ( Supplementary Fig. 15) and quantification of division activities (Fig. 9a, b; Supplementary Fig. 16) on the representative Fig. 8. These results showed that the division activity of cells surrounding the ablated meristem dropped rapidly (Fig. 9a, b). As mentioned above, during normal developmental process of hermaphroditic gametophytes, cells located outside of the meristem gradually lost division activity once the meristem was well established (Figs. 4c-i, 5c, d). However, after ablating the meristem, a few marginal cells located on the non-ablated region of the prothallus re-gained division activity, forming a few actively proliferating lineages ( Fig. 9; Supplementary Figs. 15 and 16). Their descendants formed a new meristem, with the center of active dividing zones having shifted from the original meristem to the newly initiated meristematic region (Fig. 9a, b). Taken together, these results show that ablation of the meristem promotes non-meristematic cells to regain division activity and eventually form a new meristem.
Meristem provides a positional cue that determines initiation of archegonia. The live-imaging results ( Fig. 2; Supplementary Figs. 4 and 5) led to a hypothesis that the meristem promotes archegonium initiation in its surrounding cells, likely via a position dependent way. To test this hypothesis, we examined the patterns of archegonium initiation and maturation in the eight gametophyte samples after ablating their meristems, which included three transgenic gametophytes expressing the H2B-GFP reporter (with one representative gametophyte shown in Fig. 8) and five wild-type gametophytes (with one representative gametophyte shown in Supplementary Fig. 13). In line with the hypothesis, live-imaging results showed that once the original meristem was ablated, the initiation of new archegonia around the ablated meristem was rapidly abolished (Fig. 7; Supplementary Fig. 13); after the new meristem formed de novo at a different location, new archegonia initiated at the position adjacent to the new meristem and continued to develop through sustained cell division over time ( Fig.7; Supplementary Fig. 13). Interestingly, if there were already specified archegonia existing in the prothalli at the time of the ablation, these determined archegonium cells were able to continue the cell division and complete the maturation process after ablating the meristem (n = 3, with one representative gametophyte shown in Supplementary Fig. 17). However, the cells surrounding the ablated meristem no longer differentiated new archegonia regardless of whether any mature archegonia had existed nearby (n = 3, with one representative gametophyte shown in Supplementary Fig. 17), suggesting that archegonia themselves were not sufficient to promote the initiation of new archegonia. Collectively, these results indicate that the meristem provides a positional cue that is required for the initiation of archegonia from the cells next to the meristem notch.
Discussion
The life cycle of land plants alternates between the sexual gametophyte and asexual sporophyte phases 1,2 . Indeterminate meristems have evolved in gametophytes from bryophytes and ferns, and sporophytes from vascular plants 8 . In this study, through quantitative live imaging, we reconstruct cell lineage and real-time dynamics of cell division during initiation and proliferation of meristems in gametophytes of the model fern Ceratopteris richardii, suggesting both conserved and unique features and regulations in the meristem of Ceratopteris gametophytes, in comparison to other types of indeterminate meristems in land plants.
Our long-term time-lapse imaging of developing gametophytes and quantitative analysis demonstrate that the meristem of Ceratopteris gametophytes is devoid of a distinguishable central zone or apical cell with low cell division activity compared to the surrounding cells (Figs. 4,5;Supplementary Figs. 8,9; illustrated in Fig. 10). Within the meristem, marginal cells show significantly higher division activity than inner cells, regardless of their lineages (Supplementary Fig. 10; illustrated in Fig. 10). In addition, division of meristem progenitor cells in Ceratopteris gametophytes does not follow a predictable orientation or invariable pattern ( Fig. 6a; Supplementary Figs. 11a and 12a), which is similar to the post-embryonic cell division instead of embryonic cell division in flowering plants 48 . All these findings are different from the previous interpretation of Ceratopteris meristem development based on the observation of different fixed gametophyte samples harvested at different growth stages 22 .
Meristem organization in Ceratopteris gametophytes is different from that in the gametophytes of bryophytes and several other fern species and is also different from that in the sporophytes of lycophytes and ferns, in which wedge-shaped apical cells (ACs) play a key role in sustaining cell proliferation 9,10,23,25,[49][50][51][52][53][54][55] . Even though the meristems from several different species are morphologically comparable, the underlying cellular bases seem to be different. For example, ACs drive proliferation of notch meristems in thalli of the liverwort Marchantia polymorpha (hereafter 'Marchantia') and in young prothalli of the fern Colysis decurrens 25,56,57 , whereas the formation of meristem notch in Ceratopteris gametophytes is independent of an AC. In addition, meristem organization in Ceratopteris gametophytes is also different from that in the sporophytes of flowering plants (such as Arabidopsis). The sporophytes of flowering plants develop shoot apical meristems and root apical meristems, which do not contain a distinguishable AC. However, shoot apical meristems maintain a group of slowly dividing undifferentiated stem cells as the conserved central zone 3,15,17 , and root apical meristems also contain rarely dividing stem cells that compose the quiescent center 3,58 . Thus, the distinct organization and cell behavior identified in the meristem of Ceratopteris gametophytes advance our understanding of diversified meristem development in land plants.
In land plants, continuous organ initiation is largely dependent on meristem activity, despite that the process is achieved via different ways. In this study, quantitative live imaging during both normal and disturbed growth suggests that the multicellular meristem in Ceratopteris gametophytes promotes organogenesisarchegonium formation-in its surrounding cells, via a positiondependent manner (Figs. 7 and 8; Supplementary Figs. 13 and 15). The initiation of archegonia likely relies on positional cues triggered by the multicellular meristem instead of the lineage of initial cells, which is different from the AC-dominant organ formation reported in a few seed-free plants. For example, in gametophytes of the moss Physcomitrium (Physcomitrella) patens and in sporophytes of the lycophyte Selaginella kraussiana and the fern Nephrolepis exaltata, histological and clonal analyses show that a single or few ACs cleave in different facets to produce initial cells, and each of them follows predictable cell fates and finally develops into a whole organ (e.g., leaf-like organs or fronds) 9,53-55 . Interestingly, in gametophytes of the liverwort Marchantia, although the flattened thallus also grows from ACs in an apical notch, the apical notch specifies growth rates in surrounding cells, likely through a diffusive morphogen 57 . In addition, the formation of air chambers and gemma cups in Marchantia thalli likely relies on positional information rather than cell lineages 59 . Furthermore, the positiondependent organ differentiation is prevalent in sporophytes of flowering plants [60][61][62] . For example, the localized concentration of the phytohormone auxin dictates primordium initiation in shoot apical meristems with conserved phyllotactic patterns 63,64 . Thus, it seems that the position-dependent and lineage-independent organ formation mechanism is shared among several different systems, including Marchantia gametophytes, Ceratopteris gametophytes, and angiosperm sporophytes. Future studies are needed to better interpret the positional cue that determines archegonium initiation in Ceratopteris gametophytes.
This study also uncovers the spatial-temporal dynamics of archegonium development, from initiation to maturation, which provides insights into key strategies that ferns have developed to facilitate fertilization. Different from seed plants, the efficiency and success of fertilization in ferns are limited by many factors. For example, fern gametophytes grow independently of their sporophytes 9,11 , with limited protection against mechanical damage. In addition, in the absence of water or other media to swim through, the motile sperm is not able to fertilize the egg 11,12 . Moreover, in Ceratopteris, one archegonium produces one egg and each egg is only viable for less than 48 hours if it is not fertilized 11 . The time-lapse imaging and quantitative results in this study demonstrate that once a hermaphroditic gametophyte develops the multicellular meristem, it continuously and sequentially initiates multiple archegonia surrounding the meristem ( Fig. 2; Supplementary Figs. 4 and 5). The time interval between the initiation of first two archegonia is less than 36 h (Fig. 2j, o; Supplementary Figs. 4j, p, 5k, o). As the prothallus further proliferates, the interval between the initiation of the following two archegonia is even shorter. Thus, the meristem drives the indeterminate growth of prothalli and the constant formation of new archegonia, which ensures that there is always an egg available for fertilization throughout gametophyte development. In addition, once the meristem is damaged, the hermaphroditic gametophyte quickly reactivates cell proliferation in a non-wounded region and regenerates at least a new meristem there (Fig. 8). The newly initiated meristem then induces continuous initiation of archegonia (Fig. 8). This strategy also sustains egg formation until fertilization and helps to overcome hurdles of fertilization in response to potential damage to the hermaphroditic gametophyte.
The quantitative pipeline established in this work integrates long-term time-lapse imaging, reconstruction and visualization of lineage dynamics, and quantification of division activities at high spatiotemporal resolution, in both undifferentiated meristems and differentiated archegonia in Ceratopteris gametophytes. This pipeline can be broadly used and adapted for determining lineage and fate alterations in fern gametophytes in response to various developmental cues and environmental signals, or in mutants and transgenic lines with the genetic perturbation 20,21,28,29,31 . In addition, future studies incorporating cell growth dynamics into the cell division and lineage datasets will reveal the cellular basis of shape generation (e.g., notch formation) in Ceratopteris gametophytes and provide a comprehensive cell atlas of gametophyte development in ferns.
Methods
Plant materials and growth conditions. Ceratopteris richardii strain Hn-n 40 was used in this study to generate the transgenic plants. Gametophytes were grown on FM plates (pH 6.0) containing 0.5 × MS salts (PhytoTechnology Laboratories) and 0.7% (w/v) agar (Sigma-Aldrich). Sporophytes were formed on fertilized gametophytes and were transferred to soil, typically after 3-4 weeks of fertilization. Both The red and pink area represents the zone of cell division where cells undergo active mitotic division to either renew themselves or move away from the meristem notch (as pointed with the black arrowhead). The red area represents the zone of marginal cells within the meristem, maintaining higher division activity than inner cells. The dotted pink area represents the zone of division and archegonium initiation where the archegonium progenitor cells are specified adjacent to the meristem notch. Each progenitor cell undergoes multiple rounds of cell division to finally develop into a multicellular mature archegonium. The blue area represents the zone of cell expansion where most cells stop dividing (but increasing in size). In general, the multicellular meristem promotes quick proliferation of the second, less developed wing, eventually resulting in a heart-shaped prothallus.
gametophytes and young sporophytes were grown under continuous light at 28°C. Adult sporophytes were grown in the LILY greenhouse facility at Purdue for harvesting spores.
To generate transgenic lines, the pMOA34 pCrHAM::H2B-GFP::3'CrHAM vector was transformed into Ceratopteris calli through the microparticle bombardment following the detailed procedure described previously 65,66 . Bombardment was performed using the Bio-Rad Biolistic PDS-1000/He particle delivery system. Plasmid-coated tungsten microparticles were delivered at 1100 psi. The regenerated T 0 sporophytes from calli were selected based on their hygromycin resistance. The spores from each individual T 0 sporophyte were harvested and the stable transformation of the construct in these lines was confirmed through testing the hygromycin resistance in their T 1 gametophytes. The expression of H2B-GFP was also determined in the T 1 gametophytes using a Zeiss LSM880 upright confocal microscope. At least three independent transgenic lines (including the line 12, line 24 and line 45) showed comparable expression levels and patterns in the Ceratopteris gametophytes. As shown in Fig. 1 (for the line 24) and Supplementary Fig. 3 (for the line 12 and line 45), pCrHAM::H2B-GFP::3'CrHAM was highly and ubiquitously expressed in the transgenic gametophytes (except in gametes) at the indicated days after inoculation in this study, which was consistent with the previous report that the AtHAM2 (Arabidopsis HAM2) transcriptional reporter is constitutively expressed in Arabidopsis shoot apical meristems and primordia 42 . The reporter line 24 of pCrHAM::H2B-GFP::3'CrHAM (shown in Fig. 1) was used for all the cell division analyses and mechanical perturbation experiments.
Sample preparation and live imaging. Spores of Ceratopteris transgenic plants were surface sterilized and sown on FM plates to produce gametophytes. The FM plates were sealed in Ziploc bags to maintain humidity and grown in the Percival growth chamber with the settings of continuous light, 28°C and 80% humidity. Gametophytes from 5-16 DAI were imaged in this study (described specifically in figure legends). To visualize cell morphology, gametophytes were stained with propidium iodide (PI) for 1 min. Then, they were rinsed with sterilized water two or three times and transferred to new FM plates for imaging. To perform the mechanical ablation, a few cells in the meristem (as shown in Fig. 7; Supplementary Figs. 13 and 15) or at a non-meristematic region (as shown in Supplementary Fig. 14) from a hermaphroditic gametophyte were pierced under a Nikon SMZ1000 stereoscope, using a sterilized micro-needle (Electron Microscopy Sciences). The confocal images of the same sample immediately before and after the ablation were taken for the comparison. For the time-lapse imaging, gametophytes were transferred to new FM plates and imaged every 6 h, which was sufficient to capture each cell division event. After imaging, these FM plates were sealed in Ziploc bags again and moved back to the growth chamber (Percival) that was located next to the confocal microscope, and the samples were grown under the same condition until the next time point.
Gametophytes shown in Supplementary Figs. 2 and 13 were imaged on FM plates using a stereoscope with a digital camera MU1803. All the other gametophytes were imaged using a Zeiss LSM880 upright confocal microscope. The settings of confocal imaging in Zen black software (Zeiss) were described previously in detail 67 with a few modifications in this study. Specifically, all the gametophytes were live-imaged on FM plates, using a Plan-Apochromat 10×/0.45 objective lens. Scanning interval of confocal optical sections was set to 1.0 μm for all the samples except 0.45 μm for imaging Archegonia with high resolution (Fig. 1e-g). For the confocal snapshots, GFP was excited using a 488-nm laser line and the emission was collected from the 491-562 nm. The detector gain for the GFP signal was set within the range of 769-782 and the detector digital gain was 1.0. PI was excited using a 514-nm laser line and the emission was collected from 587-669 nm. The detector gain for the PI signal was set within a range of 569-620 and the detector digital gain was 1.0. For time-lapse imaging, GFP was excited using a 488-nm laser line with the detection wavelength from 491-562 nm. The detector gain was set within a range of 769-780 and the detector digital gain was 1.0. The confocal images were processed using the Fiji/ Image J software to generate maximum intensity projection (z-projection) views with slight adjustment of brightness and contrast.
DAPI stain and confocal imaging. To confirm nuclear localization of H2B-GFP protein, Ceratopteris gametophytes expressing the pCrHAM::H2B-GFP::3'CrHAM reporter were stained with 4′,6-diamidine-2′-phenylindole dihydrochloride (DAPI) and imaged through the ZEISS 880 confocal microscope (shown in Supplementary Fig. 1). Specifically, the gametophytes were briefly treated with ethanol for~3 min and then stained with DAPI (Sigma-Aldrich) for 3 min. After that, the stained gametophytes were rinsed with sterilized water and imaged on FM plates. DAPI was excited using a 405-nm laser line with the detection wavelength from 436-475 nm. GFP was excited using a 488-nm laser line with the detection wavelength from 490-553 nm. The DIC channel was also collected for the visualization of the gametophyte cell outline. Scanning interval of confocal optical sections was set to 0.8 μm. The merge of GFP and DAPI channels was generated in Fiji / Image J.
Statistics and reproducibility. The statistical significance between two groups (shown in Supplementary Fig. 10d) was evaluated by Student's two-tailed t-test. The sample sizes for each experiment are indicated in the figure legends. Source data files for each graph are included in Supplementary Data.
Nucleus segmentation and detection, cell lineage and division analysis. The pipeline using Matlab software for the nucleus segmentation and detection and for the quantitative analyses of cell lineage and division consists of three parts. First, Ceratopteris prothalli develop as a flat sheet of cells (Supplementary Movies 1 and 2), which are suitable for the 2D imaging analysis. Nucleus segmentation was carried out on confocal images with the maximum intensity projection (z-projection) of the H2B-GFP reporter signal, using an established watershed method with distance transform 68 . The watershed was performed using the built-in implementation of Matlab following the manual (MATLAB, MathWorks) and the code is available upon request. An immature (developing) archegonium consisted of only a few nuclei, which can be visualized from the z-projection view of the confocal stacks (as examples shown in Fig. 1e-g and indicated in Fig. 2j-k) and then segmented and labeled (as indicated in Fig. 3j-k). In contrast, as characterized previously 39 , a mature archegonium formed the complex 3D structure (after 108 h of the live imaging in this study, as shown in Fig. 2v-w), which was not analyzed in this study. One example of complete nucleus segmentation and identification from 0 h to 108 h was shown in Supplementary Fig. 19. Then, the unique label was automatically assigned to each segmented nucleus, and one small circle was placed at the center of each segmented nucleus to define and mark the nucleus location within the gametophyte. The errors in nucleus segmentation were corrected through the deletion, merging, or separation of nuclei using Matlab software. Second, for the lineage analysis, cell lineage files were manually generated for each of two consecutive time points over the whole-time frame (e.g., the first lineage file from 0 h to 6 h and the second lineage file from 6 h to 12 h over the 108-hour frame for the segmented sample shown in Supplementary Fig. 19). Based on the cell lineage files, all descendants from each progenitor cell were tracked and recorded at all the subsequent time points within the analyzed time frame. Different colors were randomly assigned to different cell lineages to generate the lineage maps, representing the progression of individual progenitor cells over time (as shown in Fig. 3). In the third part of the pipeline, the numbers of cell division events for the cells originated from the same progenitor cell were quantified as shown in Supplementary Data 1, based on the cell lineages at six-hour intervals. The total number of cell division events for each cell lineage was quantitatively indicated by color, with the range from blue (zero division event) to red (highest number of division events) (as shown in Fig. 5). The scales of each color bar and the time frames for each division map were specified in the figures and figure legends.
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
The data that support the results and conclusions of this study are available within the paper, Supplementary Information and Supplementary Data 1-5. DNA sequence of the expression cassette for the pCrHAM::H2B-GFP::3'CrHAM reporter was deposited in NCBI with the accession number ON787967 and is also shown in Supplementary Fig. 18. Any other supporting information is available from the corresponding author upon request.
Code availability
The code is available from the corresponding author upon request. | 8,293.6 | 2022-07-01T00:00:00.000 | [
"Biology"
] |
Identification of Immune Infiltration and the Potential Biomarkers in Diabetic Peripheral Neuropathy through Bioinformatics and Machine Learning Methods
Diabetic peripheral neuropathy (DPN) is one of the most common chronic complications in diabetes. Previous studies have shown that chronic neuroinflammation was associated with DPN. However, further research is needed to investigate the exact immune molecular mechanism underlying the pathogenesis of DPN. Expression profiles were downloaded from the Gene Expression Omnibus (GEO) database. Differentially expressed genes (DEGs) were screened by R software. After functional enrichment analysis of DEGs, a protein–protein interaction (PPI) network analysis was performed. The CIBERSORT algorithm was used to evaluate the infiltration of immune cells in DPN. Next, the least absolute shrinkage and selection operator (LASSO) logistic regression and support vector machine-recursive feature elimination (SVM-RFE) algorithms were applied to identify potential DPN diagnostic markers. Finally, the results were further validated by qRT-PCR. A total of 1308 DEGs were screened in this study. Enrichment analysis identified that DEGs were significantly enriched in immune-related biological functions and pathways. Immune cell infiltration analysis found that M1 and M2 macrophages, monocytes, resting mast cells, resting CD4 memory T cells and follicular helper T cells were involved in the development of DPN. LTBP2 and GPNMB were identified as diagnostic markers of DPN. qRT-PCR results showed that 15 mRNAs, including LTBP2 and GPNMB, were differentially expressed, consistent with the microarray results. In conclusion, LTBP2 and GPNMB can be used as novel candidate molecular diagnostic markers for DPN. Furthermore, the infiltration of immune cells plays an important role in the progression of DPN.
Introduction
According to the tenth edition of IDF Diabetes Atlas 2021, 537 million people are suffering from diabetes, and this number is projected to be 783 million by 2045 [1]. DPN is one of the most prevalent chronic complications and the cause of limb amputations in diabetes mellitus (DM) [2]. Pain and numbness are typical and serious symptoms of patients with diabetic peripheral neuropathy (DPN). However, it shows no obvious clinical symptoms or manifestations in the inchoate stages. At present, the gold standard methods for diagnosing DPN are usually based on the electroneuromyography examination [3]. In practice, these diagnostic methods are difficult and impractical to implement as they are time-consuming and labor-intensive. Thus, there is still a lack of precise early diagnostic indicators of DPN. To improve the quality of life of patients with DPN, prevention by tight glucose control and lifestyle intervention is the best current treatment for DPN. Therefore, for further analysis. GSE70852 contains microarray measurements of dorsal root ganglia (DRG) and sciatic nerve (SCN) tissue from 26-week-old ob/+ and ob/ob mice (n = 5 in each group). The GSE27382 dataset contains 6 samples from 24-week-old BKS db/db mouse sciatic nerve and 7 samples from db/+ mouse sciatic nerves. We chose SCN samples in two datasets (n = 23) for further analysis. Then, GSE70852 and GSE27382 gene expression matrices were merged, and the batch effect was removed using the "sva" package of R software [22] (version 4.1.2, http://r-project.org/, accessed on 10 March 2022). The effect of removing batch effects was demonstrated using a box plot and a two-dimensional PCA cluster plot was used to evaluate the effect of inter-sample correction.
Differential Expressed Genes Screening and Analysis
Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG) functional enrichment analyses and gene set enrichment analysis (GSEA) were conducted based on the DEGs using the R package "clusterProfiler" [25]. The R package "GOplot" and "enrichplot" were used to visualize the results of enrichment analysis [26]. Disease Ontology (DO) enrichment analysis was performed on DEGs through the DisGeNET database [27]. A false discovery rate (FDR) < 0.05 and a p < 0.05 were considered significant enrichments.
Construction of the PPI Network of Differential Expressed Genes and Hub Genes Analysis
To explore the relationship of DEGs, a PPI network was constructed using the STRING online database (version 11.5, https://www.string-db.org, accessed on 10 March 2022) [28], with interactions with a combined score > 0.9 being used for network construction. Cytoscape v3.8.1 was used to visualize the PPI network. The cytoHubba plugin in Cytoscape was employed to identify hub genes based upon eight algorithms, including stress, radiality, MNC (maximum neighborhood component), MCC (maximal clique centrality), EPC (edge percolated component), EcCentricity, DMNC (density of maximum neighborhood component), degree, closeness, BottleNeck and betweenness [29]. We selected the top 70 node genes scored by each algorithm to screen hub genes in DPN. An UpSet plot was generated using the R package "UpSetR" [30].
Evaluation of Immune Cell Subtype Infiltration
The abundance of 22 types of infiltrating immune cells of each sample with DPN or normal was estimated by translating the gene expression matrix data into the relative proportion of immune cells [16]. The 22 types of immune cells include naive B cells, memory B cells, plasma cells, CD4+ T cells, CD8+ T cells, resting and activated NK cells, monocytes, M0/M1/M2 macrophages, dendritic cells, mast cells, eosinophils, etc. This was achieved with the CIBERSORT algorithm based on deconvolution using the R package "CIBERSORT" (http://cibersort.stanford.edu/, accessed on 20 March 2022). Analysis was performed by using the default signature matrix at 1000 permutations [31]. Then, PCA clustering analysis was performed and a correlation heatmap was drawn by the OmicStudio tool. R packages "ggplot2" and "ggpubr" were applied to visualize the results from CIBERSORT [32]. Here, only the data with p < 0.05 were used for subsequent analysis. The least absolute shrinkage and selection operator (LASSO) logistic regression model was conducted to analyze the different infiltrates of immune cells in DPN and normal samples with the R package "glmnet" [33].
Identification and Verification of Biomarkers
LASSO logistic regression and the support vector machine-recursive feature elimination (SVM-RFE) machine learning method were applied to identify the potential biomarkers associated with DPN [34,35]. RFE-SVM was implemented with the R package "e1071" (https://cran.r-project.org/web/packages/e1071/index.html, accessed on 11 April 2022). The RNA-Seq dataset GSE159059, which contains 10 non-diabetic db/+ mice and 10 DPN mice, was used as the validation dataset [36]. The gene expression matrix of the validation RNA-Seq dataset was downloaded from the GEO database. After combining the DEGs selected by the LASSO and SVM-RFE algorithms, potential biomarkers were identified by the two algorithms simultaneously. The receiver operating characteristic (ROC) curve was applied to evaluate the diagnostic value of biomarkers.
Correlation Analysis between Diagnostic Markers and Immune Cells
Spearman correlation analyses were performed to assess the correlation between the diagnostic markers and infiltrating immune cells [37]. The results were visualized by R package "ggplot2".
Animals
Male, 12-week-old, nondiabetic C57BL/ksJ-leprdb/lepr+ mice (db/+) and diabetic C57BL/ksJ-leprdb (db/db) mice (n = 5 per group) were purchased from Huafukang Company (Beijing, China). The animals were housed under standard conditions of a 12 h light/dark cycle and given unrestricted access to water and food. Mice were handled in accordance with the National Institutes of Health Guidelines and Regulations, and all experiments were approved by the Animal Ethics Committee of Huazhong University of Science and Technology.
Tissue Harvest and Quantitative Real-Time PCR
At 26 weeks of age, all the animals were killed by sodium pentobarbital overdose after random blood glucose level monitoring and behavioral tests were conducted [38]. Then, sciatic nerves from two groups were dissected and used for RNA extraction. Total RNAs were extracted using a QIAGEN RNeasy Mini Kit. qRT-PCR was performed to validate the expression level of DEGs based on the instruction of ChamQ SYBR qPCRMaster Mix (Vazyme, Nanjing, China). The RNA data were normalized to β-actin as the endogenous reference. Gene expression levels were calculated with relative expression levels by using delta-delta Ct method (2 − Ct ). The primer sequences were listed in Table S1.
Statistical Analysis
Data were expressed as the mean ± SEM. p < 0.05 was considered statistically significant. The unpaired Student's t-test and one-way analysis of variance with Bonferroni post hoc test were performed for comparisons between two groups. Statistical analysis was calculated with the GraphPad Prism v 9.3.1 software.
Data Preprocessing and DEGs Identification
The microarray data of the GSE70852 and GSE27382 datasets were merged, containing 12 DPN sciatic nerve samples and 11 normal sciatic nerve samples. The box plot shows that the batch effects between two gene expression profile datasets were removed (Figure 1a,b). After normalization and batch effect removal, principal component analysis was used to characterize the merged dataset (Figure 1c,d). |Foldchange| > 1.5 and p < 0.05 were used as the thresholds to screen differentially expressed genes in DPN after data preprocessing. A total of 1308 DEGs were obtained from the gene expression matrix using the R package "limma", including 628 upregulated and 680 downregulated genes (Table S2). A heatmap and volcano map are shown in Figure 2. 1a,b). After normalization and batch effect removal, principal component analysis was used to characterize the merged dataset (Figure 1c,d). |Foldchange| > 1.5 and p < 0.05 were used as the thresholds to screen differentially expressed genes in DPN after data preprocessing. A total of 1308 DEGs were obtained from the gene expression matrix using the R package "limma", including 628 upregulated and 680 downregulated genes (Table S2). A heatmap and volcano map are shown in Figure 2.
Functional Enrichment and Pathway Analyses
To determine functions associated with DEGs in DPN, GO analysis was performed based on the 628 upregulated DEGs and 680 downregulated DEGs. A GO circle plot highlights the top 10 GO biological process (BP) terms that are strong candidates for DPN (Figure 3a,b). The inner ring of the circle plot represents a bar plot, where the bar height indicates the negative log p value of the BP term described. The outer ring shows a scatter plot of the expression levels of DPN associated DEGs in each enriched GO term [39]. GO analysis results showed that upregulated DEGs were mainly related to the biological activity of inflammatory cells, such as leukocyte migration and neutrophil migration (Figure 3a). Downregulated DEGs were mainly related to neural functions ( Figure 3b). The above results suggested that the immune response plays an important role in DPN. KEGG pathway analysis and GSEA analysis were also used to reveal the changed biological pathways in DPN (Figure 3c,d,f). Overlapping the KEGG pathways analysis with the GSEA results produced three pathways, one of them was the IL-17 signaling pathway, which is also a key pathway in regulating immunity [40]. DisGeNET is a discovery platform integrating information on gene-disease associations from public data sources and the literature [27]. Furthermore, the diseases associated with DEGs were mapped to the DisGeNET database and the results are shown in Figure 3e.
Functional Enrichment and Pathway Analyses
To determine functions associated with DEGs in DPN, GO analysis was per based on the 628 upregulated DEGs and 680 downregulated DEGs. A GO circle plo lights the top 10 GO biological process (BP) terms that are strong candidates fo (Figure 3a,b). The inner ring of the circle plot represents a bar plot, where the bar indicates the negative log p value of the BP term described. The outer ring shows a plot of the expression levels of DPN associated DEGs in each enriched GO term [3 analysis results showed that upregulated DEGs were mainly related to the biolog tivity of inflammatory cells, such as leukocyte migration and neutrophil migration 3a). Downregulated DEGs were mainly related to neural functions ( Figure 3b). Th results suggested that the immune response plays an important role in DPN. KEG way analysis and GSEA analysis were also used to reveal the changed biological pa
PPI Network Analysis of DEGs
In order to identify potential links between DEGs, a PPI network with 381 nodes and 776 edges was generated using the STRING database. The confidence interaction score was set to 0.9 in Figure S2 for the network construction. Then, eight algorithms in the cytoHubba plugin were used to calculate the score of DEGs. As a result, seven hub genes were screened by the R package "UpSetR" (Figure 4a). The expression levels of the seven hub genes in the merged dataset are shown by a heatmap (Figure 4b). These results suggested that these hub genes may play important roles in the development of DPN. Furthermore, the diseases associated with DEGs were mapped to the DisGeNET database and the results are shown in Figure 3e.
PPI Network Analysis of DEGs
In order to identify potential links between DEGs, a PPI network with 381 nodes and 776 edges was generated using the STRING database. The confidence interaction score was set to 0.9 in Figure S2 for the network construction. Then, eight algorithms in the cytoHubba plugin were used to calculate the score of DEGs. As a result, seven hub genes were screened by the R package "UpSetR" (Figure 4a). The expression levels of the seven
Immune Cell Infiltration in DPN and Normal Tissues
PCA cluster analysis results of immune cell infiltration showed that there was a significant difference in immune cell infiltration between the DPN samples and the control samples ( Figure 5a). The correlation analysis results between different types of immune cells are represented by a correlation heatmap. It showed that monocytes had a significant positive correlation with M2 macrophages. Resting dendritic cells and M1 macrophages also had a positive correlation. On the other hand, resting dendritic cells had a significant negative correlation with monocytes and M2 macrophages (
Immune Cell Infiltration in DPN and Normal Tissues
PCA cluster analysis results of immune cell infiltration showed that there was a significant difference in immune cell infiltration between the DPN samples and the control samples ( Figure 5a). The correlation analysis results between different types of immune cells are represented by a correlation heatmap. It showed that monocytes had a significant positive correlation with M2 macrophages. Resting dendritic cells and M1 macrophages also had a positive correlation. On the other hand, resting dendritic cells had a significant negative correlation with monocytes and M2 macrophages (Figure 5b). The infiltrating levels of 22 immune cell types in each sample are presented in a histogram using the function of R package "RColorBrewer" (Figure 5c). The box plot of the immune cell infiltration difference shows eight types of immune cells with p < 0.05 (Figure 6a). Seven types of immune cells were selected by LASSO and the results are displayed in Figure 6b,c. Taking the intersection of the two methods, six types of immune cells were considered to be differentially expressed. Specifically, compared with the normal control sample, M1 macrophages and resting CD4 memory T cells infiltrated more in DPN samples, while M2 macrophages, resting mast cells, monocytes and follicular helper T cells infiltrated less.
Screening and Verification of Biomarkers Markers
The LASSO logistic regression algorithm and SVM-RFE algorithm were used to screen out novel diagnostic biomarkers most associated with DPN from DEGs. Ten and eight candidate genes were identified by LASSO and SVM-RFE algorithms, respectively (Figure 7a,b). Finally, two diagnostic related genes (LTBP2 and GPNMB) were obtained by taking the intersection of the two algorithms (Figure 7c). To make the validation more credible, further validation of the diagnostic efficacy of LTBP2 and GPNMB was performed in the validation dataset GSE159059. The expression levels of LTBP2 and GPNMB in the dataset GSE159059 are shown in Figure 7d,e. The ROC curve showed the diagnostic performance of LTBP2 and GPNMB in the verification dataset and the area under the ROC curve (AUC), which can summarize the overall diagnostic accuracy of the potential biomarkers, was 0.896 (Figure 7f), indicating that LTBP2 and GPNMB had high diagnostic value.
Screening and Verification of Biomarkers Markers
The LASSO logistic regression algorithm and SVM-RFE algorithm were used to screen out novel diagnostic biomarkers most associated with DPN from DEGs. Ten and eight candidate genes were identified by LASSO and SVM-RFE algorithms, respectively (Figure 7a,b). Finally, two diagnostic related genes (LTBP2 and GPNMB) were obtained by taking the intersection of the two algorithms (Figure 7c). To make the validation more credible, further validation of the diagnostic efficacy of LTBP2 and GPNMB was performed in the validation dataset GSE159059. The expression levels of LTBP2 and GPNMB in the dataset GSE159059 are shown in Figure 7d,e. The ROC curve showed the diagnostic performance of LTBP2 and GPNMB in the verification dataset and the area under the ROC curve (AUC), which can summarize the overall diagnostic accuracy of the potential biomarkers, was 0.896 (Figure 7f), indicating that LTBP2 and GPNMB had high diagnostic value.
Correlation Analysis between LTBP2 and GPNMB, and Infiltrating Immune Cells
The correlation analysis showed that LTBP2 was positively correlated with M1 Macrophages and resting CD4 memory T cells, and it was negatively correlated with M2 macrophages, monocytes and follicular helper T cells (Figure 7g). GPNMB was positively correlated with resting CD4 memory T cells, and it was negatively correlated with M2 macrophages, monocytes, follicular helper T cells and resting mast cells (Figure 7h).
Validation of DEGs by qRT-PCR
To further validate the above outcomes obtained from microarray analysis, qRT-PCR was carried out. We selected 20 DEGs with high fold changes or high weight in the network to validate the analysis results. The total RNA from individual mouse sciatic nerves (n = 5 per group) was extracted and then evaluated for these genes. As a result, 15 DEGs
Correlation Analysis between LTBP2 and GPNMB, and Infiltrating Immune Cells
The correlation analysis showed that LTBP2 was positively correlated with M1 Macrophages and resting CD4 memory T cells, and it was negatively correlated with M2 macrophages, monocytes and follicular helper T cells (Figure 7g). GPNMB was positively correlated with resting CD4 memory T cells, and it was negatively correlated with M2 macrophages, monocytes, follicular helper T cells and resting mast cells (Figure 7h).
Validation of DEGs by qRT-PCR
To further validate the above outcomes obtained from microarray analysis, qRT-PCR was carried out. We selected 20 DEGs with high fold changes or high weight in the network to validate the analysis results. The total RNA from individual mouse sciatic nerves (n = 5 per group) was extracted and then evaluated for these genes. As a result, 15 DEGs have been verified. The gene expression levels of UBD, UCP1, LTBP2, CCL2, S100A8, HSPB7 and GPNMB were increased in the DPN groups compared with the control groups. Furthermore, MMP9, PON1, CYP2F2, CDH1, TUBB3, MYT1L, CACNB4 and MGL2 were significantly downregulated in the DPN groups (Figure 8). Among these DEGs is the inflammatory response-related gene CCL2, which is reported to be associated with diabetic neuropathic pain [41]. A study has found that S100A8 expression levels increased in neurodegenerative disorders and inflammatory and autoimmune diseases [42]. Uncoupling protein 1 (UCP1) is a 32-kDa protein located in the inner membrane of mitochondria. It regulates the dissipation of excess energy via uncoupling oxidative phosphorylation from ATP synthesis [43]. Recent investigations have suggested that differential regulation of UCPs may be associated with diabetes and DPN [44]. Paraoxonase 1 (PON1) has been extensively evaluated as a genetic candidate for diabetic microvascular complications [45]. TUBB3 is primarily expressed in neurons and may be involved in neurogenesis and axon guidance and maintenance. More importantly, there was a significant difference in the expression levels of two biomarkers, LTBP2 and GPNMB, between two groups. have been verified. The gene expression levels of UBD, UCP1, LTBP2, CCL2, S100A8, HSPB7 and GPNMB were increased in the DPN groups compared with the control groups. Furthermore, MMP9, PON1, CYP2F2, CDH1, TUBB3, MYT1L, CACNB4 and MGL2 were significantly downregulated in the DPN groups ( Figure 8). Among these DEGs is the inflammatory response-related gene CCL2, which is reported to be associated with diabetic neuropathic pain [41]. A study has found that S100A8 expression levels increased in neurodegenerative disorders and inflammatory and autoimmune diseases [42]. Uncoupling protein 1 (UCP1) is a 32-kDa protein located in the inner membrane of mitochondria. It regulates the dissipation of excess energy via uncoupling oxidative phosphorylation from ATP synthesis [43]. Recent investigations have suggested that differential regulation of UCPs may be associated with diabetes and DPN [44]. Paraoxonase 1 (PON1) has been extensively evaluated as a genetic candidate for diabetic microvascular complications [45]. TUBB3 is primarily expressed in neurons and may be involved in neurogenesis and axon guidance and maintenance. More importantly, there was a significant difference in the expression levels of two biomarkers, LTBP2 and GPNMB, between two groups. Figure 8. Confirmation of microarray data in sciatic nerves from db/db diabetic mice and db/+ nondiabetic mice by qRT-PCR. The microarray data and qRT-PCR results are consistent. Results were presented as means ± SEM of five independent experiments (* p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001).
Discussion
DPN is a common, serious and troublesome chronic complication of DM [46]. Chronic hyperglycemia and oxidative stress lead to major structural and functional abnormalities of the peripheral nerves. In addition, neuroinflammation plays an important role in the development of DPN. Currently, large numbers of DPN patients complain of pain, fatigue, reduced quality of life and disability. Unfortunately, early diagnosis is difficult due to the lack of specific diagnostic indicators. Therefore, finding novel diagnostic biomarkers and analyzing the pattern of DPN immune cell infiltration is useful for improving the outcomes of patients with DPN. Previously, multiple studies have found that the signaling pathways comprised of some genes may play an important role in the development of DPN. However, few systematic analyses and comparisons of the transcriptome data have been made. More importantly, the exact mechanism underlying the progression of DPN driven by key genes remains to be fully elucidated.
In this study, bioinformatics techniques were used to analyze microarray data acquired from the GEO database isolated from sciatic nerves of T2DM mouse models to identify potential biomarkers. A total of 628 upregulated and 680 downregulated DPNrelated DEGs were identified in the GSE70852 dataset and GSE27382 dataset. GO Figure 8. Confirmation of microarray data in sciatic nerves from db/db diabetic mice and db/+ nondiabetic mice by qRT-PCR. The microarray data and qRT-PCR results are consistent. Results were presented as means ± SEM of five independent experiments (* p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001).
Discussion
DPN is a common, serious and troublesome chronic complication of DM [46]. Chronic hyperglycemia and oxidative stress lead to major structural and functional abnormalities of the peripheral nerves. In addition, neuroinflammation plays an important role in the development of DPN. Currently, large numbers of DPN patients complain of pain, fatigue, reduced quality of life and disability. Unfortunately, early diagnosis is difficult due to the lack of specific diagnostic indicators. Therefore, finding novel diagnostic biomarkers and analyzing the pattern of DPN immune cell infiltration is useful for improving the outcomes of patients with DPN. Previously, multiple studies have found that the signaling pathways comprised of some genes may play an important role in the development of DPN. However, few systematic analyses and comparisons of the transcriptome data have been made. More importantly, the exact mechanism underlying the progression of DPN driven by key genes remains to be fully elucidated.
In this study, bioinformatics techniques were used to analyze microarray data acquired from the GEO database isolated from sciatic nerves of T2DM mouse models to identify potential biomarkers. A total of 628 upregulated and 680 downregulated DPN-related DEGs were identified in the GSE70852 dataset and GSE27382 dataset. GO enrichment analysis showed that upregulated DEGs were mainly enriched in leukocyte migration (GO:0050900), leukocyte chemotaxis (GO:0030595), cell chemotaxis (GO:0060326), neutrophil migration (GO:1990266) and granulocyte migration (GO:0097530). Additionally, downregulated DEGs were mainly enriched in neurotransmitter transport (GO:0006836), regulation of membrane potential (GO:0042391) and axonogenesis (GO:0007409). From the above results, it was revealed that a DPN upregulated immune response and was significantly associated with the impairment of neurological function. Furthermore, DO enrichment analysis showed that immune-mediated diseases such as inflammation, fibrosis and arthritis were enriched. The IL-17 signaling pathway, p53 signaling pathway and Toll-like receptor signaling pathway were identified to be associated with DEGs by pathway enrichment analysis. Several investigators have suggested that upregulated IL-17 possesses a crucial role in the inflammatory process and the development of DM [47]. Ben Y et al. found that astragaloside IV could reduce the occurrence of mitochondrial-dependent apoptosis by regulating the SIRT1/p53 pathway in DPN rats [48]. Other studies have found that Toll-like receptor4 could be a potentially sensitive diagnostic biomarker for DPN in type 2 diabetic patients [49]. Our analysis data were also consistent with the findings above.
Through PPI network construction, genes that have high scores in eight algorithms were considered as key hub genes, such as CCL2, TGFB1, MMP9 and CD68. It is of note that abnormal expression of some genes has been reported to be related to DM or DPN in the past few years. As an example, C-C chemokine ligand 2 (CCL2) and its receptor are key players in the attraction of monocytes to sites of injury and inflammation and it was proposed to be a major cause of diabetic neuropathic pain [41,50]. Previous studies have demonstrated that Triphala churna acted as a neuroprotective agent in DPN via the downregulation of inflammatory cytokines such as TGFB1 [51]. Moreover, downregulation of MMP9 could improve peripheral nerve function via promoting Schwann cell autophagy in DPN [52]. A previous study found that CD68, a macrophage marker, was higher in the DRGs of patients with DPN, demonstrating that the upregulated inflammatory markers may contribute to the inflammatory response, potentially stemming from diabetes related neuronal pathology [53]. Overall, inflammation can be an important factor following peripheral nerve injury, as activated macrophages are needed to engulf myelin debris and apoptotic cells. However, sustained and low-grade inflammation is generally known to be linked to diabetes [54]. This impairs the cell viability in the peripheral nerve.
In order to explore the role of immune cell infiltration in DPN, CIBERSORT analysis was applied to estimate the fractions of immune cells in sciatic nerves. We found that an increased infiltration of M1 macrophages and resting CD4 memory T cells, and a decreased infiltration of M2 macrophages, resting mast cells, monocytes and follicular helper T cells may be related to the development of DPN. Macrophages are professional phagocytes belonging to the innate immune system that can be activated by a variety of external stimuli. Based on their function, macrophages can be differentiated into two phenotypes: M1 (proinflammatory) and M2 (anti-inflammatory) macrophages [55]. M1 macrophages are able to secrete a broad range of inflammatory factors, such as IL-6, IL-1β and TNF-α. Previous studies have shown that M1 macrophages increased significantly in DPN patients [56]. This means that M1 macrophages might play a pivotal role in the onset and development of DPN [57]. Moreover, it was found that increased expression of TLR4 in monocytes could be related to systemic inflammation in peripheral neuropathy in T2DM [58]. These findings further support the important role of immune cell infiltration and inflammation in the development of DPN.
LASSO logistic regression is a reliable method for selecting diagnostic features of DPN based on regression trees. It provides a statistically rigorous method to identify the variable λ when the predicted outcomes are best. Furthermore, SVM-RFE is a classic machine learning method based on a recursive feature elimination strategy to select important genes by training a support vector machine model. To further select feature variables and build an accurate classification model, we applied these two algorithms in this study. The overlap of the LASSO logistic regression model and the SVM-RFE algorithm was obtained. Consequently, LTBP2 and GPNMB were recognized as potential diagnostic markers for DPN. Latent transforming growth factor beta binding protein 2 (LTBP2) is a member of the fibrillin/LTBP extracellular matrix glycoprotein family [59]. It plays a critical role in regulating the extracellular matrix glycoprotein. A growing number of studies have found that LTBP2 was associated with cardiac fibrosis, acute heart failure, glomerular filtration rate and pre-eclampsia [59]. Recent investigations have suggested that overexpression of LTBP2 facilitated inflammation in endometriosis [60]. It is regretful that the role of LTBP2 in DPN development has not been studied. Therefore, this needs further experimental verification. GPNMB is an endogenous type 1 transmembrane glycoprotein. A study has shown that GPNMB is closely related to neuroinflammation [61]. Interestingly, neuroinflammation happens to be one of the most important mechanisms in the development of DPN. It would be reasonable to speculate that GPNMB may play an important role in the disease progression of DPN. In conclusion, evidence from previous studies indicates that LTBP2 and GPNMB may play an important role in the development and progression of DPN. However, validated experiments and clinical studies are still needed to assess the diagnostic value of LTBP2 and GPNMB. A comprehensive analysis was performed including LTBP2, GPNMB and immune cells. LTBP2 was significantly positively correlated with M1 macrophages and GPNMB was significantly negatively correlated with M2 macrophages. We speculate that LTBP2 and GPNMB affect immune cells to participate in the occurrence and progression of DPN. Further experimentation is needed to validate these hypotheses, including experiments regarding the interactions between genes and immune cells.
Conclusions
In this study, DEGs associated with DPN were identified by analyzing previously published datasets containing DPN and normal sciatic nerve samples. Then, functional enrichment and PPI network analyses were conducted for DEGs, elucidating the detailed mechanisms and the pathogenesis of DPN. What is more, by using novel bioinformatics methods such as LASSO logistic regression algorithms and the SVM-RFE algorithm, we have identified potential DPN diagnostic markers, LTBP2 and GPNMB. This is the first time that CIBERSORT was used to analyze immune cell infiltration in peripheral nerve tissues. Nevertheless, we recognize that there were important limitations in our study which cannot be ignored. First, the current study is limited by a small sample size due to the small number of gene microarrays in DPN. Furthermore, CIBERSORT analysis is based on limited genetic data that may deviate from heterotypic interactions of cells, diseaseinduced disorders or phenotypic plasticity. In addition, our research needs to be further experimentally validated. In conclusion, our results present the promising potential for several diagnostic biomarkers of DPN and provide a novel strategy for DPN diagnosis and treatment.
Institutional Review Board Statement:
The study was conducted in accordance with the Declaration of Helsinki, and approved the Laboratory Animal Centre of Tongji Medical College, Huazhong University of Science and Technology (2020-S2665).
Informed Consent Statement: Not applicable.
Data Availability Statement: The datasets generated during and/or analyzed during the current study are available in the Gene Expression Omnibus (GEO) datasets (http://www.ncbi.nlm.nih.gov/geo/, accessed on 10 March 2022). | 7,063 | 2022-12-26T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Non-invasive and noise-robust light focusing using confocal wavefront shaping
Wavefront-shaping is a promising approach for imaging fluorescent targets deep inside scattering tissue despite strong aberrations. It enables focusing an incoming illumination into a single spot inside tissue, as well as correcting the outgoing light scattered from the tissue. Previously, wavefront shaping modulations have been successively estimated using feedback from strong fluorescent beads, which have been manually added to a sample. However, such algorithms do not generalize to neurons whose emission is orders of magnitude weaker. We suggest a wavefront shaping approach that works with a confocal modulation of both the illumination and imaging arms. Since the aberrations are corrected in the optics before the detector, the low photon budget is directed into a single sensor spot and detected with high signal-noise ratio. We derive a score function for modulation evaluation from mathematical principles, and successfully use it to image fluorescence neurons, despite scattering through thick tissue.
Introduction
Scattering forms one of the hardest barriers on light-based approaches for tissue imaging.A promising approach for overcoming scattering is a wavefront-shaping correction.By using a spatial light modulator (SLM) device, one can reshape the coherent wavefront illuminating the sample, such that its aberration is conjugate to the aberration that will happen inside the tissue.When such a wavefront propagates through the sample, all incoming light can be focused into a small spot.In the same way, one can modulate the outgoing wavefront so that light photons emerging from a single target point are brought into a single sensor point, despite the tissue aberration.The main advantage of this approach is that all light photons are used, unlike in ballistic-filtering approaches where scattered light is rejected.
Despite the large potential of the idea, finding the desired shape of the modulation correction is rather chal-lenging.The desired modulation varies between different tissue samples and even varies spatially between different positions of the same tissue.For thick tissue, the modulation is a complex pattern containing a large number of free modes.
Earlier proof-of-concept demonstrations have used a validation camera behind the tissue to provide feedback to the algorithm [7][8][9][11][12][13], and other approaches have relied on the existence of a guiding star [4,10,[14][15][16][17][18][19][20][21][22].In the absence of such a guiding star, and when only non-invasive feedback is available, determining whether a wavefront has focused inside the tissue is not straightforward.The difficulty results from the fact that even if we can find an illumination wavefront that actually focuses into a small spot inside the tissue, the light backscattering from this spot is aberrated again on its way to the camera, forming yet another scattered pattern.
Earlier approaches evaluate whether a wavefront modulation has focused using multi-photon fluorescence feedback.In this way, the light emitted from a fluorescence spot is a non-linear function of the excitation intensity arriving to it, so when all light is focused into a single spot the total emission energy is maximized [2,15].How-Fig.1: System schematic: An incoming laser illumination propagates through a layer of scattering tissue to excite a fluorescent neuron behind it.The emitted light is scattered again through the tissue toward a detector (main camera).In a conventional imaging system, the image of the neuron in the main camera is highly aberrated.Moreover, due to the weak emission, the image is very noisy.By adding two SLMs modulating the excitation and emission wavefronts, we can undo the tissue aberration.This allows us to reshape the incoming illumination into a spot on the neuron, and also to correct the emitted light and focus it into a spot in the main camera.For reference, we also place a validation camera behind the tissue, which can image the neuron directly and validate that the excitation light has focused into a spot.This camera does not provide any input to our algorithm.We visualize example images from these two cameras with and without the modulation correction.ever, obtaining feedback using single-photon fluorescence is highly desired as the process is significantly simpler and cheaper than the multi-photon one.The single-photon case cannot be evaluated using the simple score function applied in the multi-photon case, since the emission energy is a linear function of the excitation energy and thus the amount of emission energy does not increase when all excitation is focused into a spot.Recently, progress has been made on non-invasive wavefront shaping using single-photon feedback [23,24].First, Boniface et al. [23] have suggested that one can evaluate whether an incoming wavefront modulation has focused by computing the variance of the emitted speckle pattern.More recently, Aizik et al. [24] have suggested a rapid approach that can find a wavefront shaping modulation using iterative phase conjugation.Both approaches were only demonstrated when the fluorescent feedback was provided by synthetic fluorescent beads, which emit a relatively strong signal.However, it is simpler if we could estimate wavefront shaping modulations using feedback from biological samples, such as neurons.These biological fluorescent samples impose two main differences.First, the targets are not sparse, but have wide continuous volumes.Thus, an initial excitation pattern is likely to produce a smooth image of emitted light rather than a speckle pattern, as il-lustrated in Fig. 1.Without speckle variation, the phaseretrieval process of [24] cannot be carried out.A second, more significant difference is the fact that the signal emitted from such samples is orders of magnitude weaker than the one provided by fluorescent beads, and bleaching is reached much earlier.Both algorithms [23,24] inherently assume that the speckle pattern emitted from a single fluorescent spot can be measured.However, if the number of fluorescent photons emitted from a neuron spot is low, and these photons are aberrated and spread over multiple sensor pixels, no speckle pattern can be observed and one can mostly measure noise, see visualization in Fig. 2.An attempt to measure the variance of this image, as required by [23], will result in the noise variance rather than the speckle variance.While it is possible to improve the image in Fig. 2(c) by using a longer exposure or by increasing laser power, the wavefront shaping optimization process of [23] would require capturing thousands of images of the target, and bleaching happens well before the optimization converges.This work proposes a confocal wavefront shaping framework, which can be applied in low-light scenarios and uses feedback from biological sources.To this end, we propose to use a simultaneous wavefront shaping modulation both on the incoming excitation wavefront and A single bead is excited and the emitted light scatters through the tissue to generate a wide speckle pattern in (a).In (b) we use an aberration correction in the imaging arm so that the sensor measures a sharp spot.With such synthetic sources we can image a speckle pattern at high SNR, but this is not always the case with real biological samples.For example, (c,d) demonstrate fluorescent emission from EGFP neurons (excitation/emission at 488/508nm), which is orders of magnitude weaker.In (c) a single fluorescent spot is excited, and the limited number of photons it emits are spread over multiple pixels.
Noise is dominant, and any attempt to measure the variance of this image will evaluate the noise variance rather than the speckle variance.In (d) aberration correction is applied in the optics.As all photons are collected by a single pixel, SNR is drastically improved.Note that images (c,d) are taken under equal exposure and equal excitation power.
on the outgoing emitted light [25,26], as illustrated in Fig. 1.The advantage is that since scattered photons are corrected in the optical path and we attempt to bring all photons emitted from a single spot into a single detector, we can measure them with a much higher signal-to-noise ratio (SNR).
To quantify the quality of a candidate modulation correction, we do not attempt to maximize the total energy emitted from the target.Rather, we seek to maximize the energy of the corrected wavefront in a single spot.We show that despite the fact that we use linear single-photon fluorescence, due to the double correction on both the illumination and imaging arms, our score function scales non-linearly with the intensity arriving at the fluorescence target.Thus, the returning energy at a single pixel is maximized by a focusing modulation that manages to bring all light into a single spot.We show that effectively, this score function is equivalent to the one used by previous two-photon fluorescence wavefront-shaping work [15].
We successfully use our algorithm to recover wavefront shaping modulations using fluorescent emission from EGFP neurons.By exploiting memory effect correlations, we use the modulation to locally image the shape of these neurons and their thin axons through scattering tissue.
Definitions
Imaging setup: Consider the schematic of Fig. 1.A laser beam illuminates a tissue sample, and an SLM can modulate its shape.The illumination wavefront propagates through the scattering tissue and excites the fluorescent target behind it.We wish to image that target, but the emitted light is scattered again through the tis-sue on its way to the camera.A second phase SLM at the imaging arm modulates the emitted light.Lastly, the modulated light is measured by the front main camera.In practice our SLMs are placed at the Fourier planes of the system, see supplementary for a complete description.
The setup includes a second validation camera behind the tissue sample to assess focusing quality and capture a clean reference image of the same target.We emphasize that we only use non-invasive feedback by the main (front) camera, and the validation camera does not provide any input to the algorithm.Image formation model: Consider a set of K fluorescent particles inside a sample, and denote their positions by o 1 , . . ., o K .The illumination SLM displays a complex 2D electric field u.We use ν to denote a K × 1 vector of the field propagating through the sample at each of the K fluorescent sources.We can express ν = T i u, where T i is the incoming transmission matrix, describing the forward coherent light propagation in the tissue.Likewise, the coherent propagation of light returning from the target to the SLM of the imaging arm can be described by a back-propagation transmission matrix T o .
Fluorescent energy emitted from a particle is proportional to |ν k | 2α , where α denotes the type of fluorescent excitation.The simplest case α = 1 is known as single-photon fluorescence, where the emission is linear in the excitation energy |ν k | 2 .In two-photon fluorescence, α = 2, and the emission is proportional to the squared excitation.
Since the laser energy is fixed, for any modulation u the energy arriving the fluorescent target is bounded and w.l.o.g.we assume (1)
Scoring modulations
To estimate a wavefront shaping modulation, we first need a score function that can evaluate the focusing quality facilitated by a candidate modulation mask, using a noninvasive feedback alone.We start by reviewing scores that were previously introduced in the literature and then propose our new, noise-robust confocal score.
Image quality scores: Modulation evaluation is a simpler task when the same modulation can correct a sufficiently large isoplanatic image region.This assumption was made by adaptive optics research [1][2][3]27,28] and also by wavefront shaping approaches [29][30][31][32], who evaluate the quality of the resulting image, in terms of contrast [2], sharpness, variance [29], or with a neural network regularization [32,33].However, for thick tissue, wavefront shaping correction can vary quickly between nearby pixels, and a modulation may only explain a very local region.This case makes the above image quality scores less applicable, as inherently they evaluate the quality of an image region rather than a pixel.For spatially varying modulations, ideally, we need to evaluate the success of the modulation based on a per-pixel criterion.
The total intensity score: Consider a configuration where we only try to correct the illumination arm, and the SLM in the imaging arm of Fig. 1 is not used.The simplest score that was considered in the literature [2,15] is just the total intensity measured over the entire sensor plane, which is also proportional to the total fluorescent power, reducing to Since the energy at the target is bounded (see Eq. ( 1)), for the case α > 1 this score is maximized when ν is a one-hot vector, which equals 1 at a single entry and zero at all the others.Two-photon fluorescence is expensive and hard to implement, and solutions that can use a single-photon excitation feedback are highly desirable.However, in the single-photon case where α = 1, Eq. ( 10) reduces to the total power in ν, S TI (u) = k |ν k | 2 , and since this power is fixed, the same amount of energy returns whether we spread the excitation power over multiple fluorescence sources or bring all of it into one spot.The variance maximization score: Boniface et al. [23] propose an alternative approach to evaluate focusing with linear, single-photon feedback.Their approach also modulates only the illumination wavefront attempting to focus light at a single spot, while the emitted light is scattered to the sensor.To score modulations, they measure the variance of the resulting speckle image and attempt to maximize it.The idea is that if we manage to focus all the excitation light at a single spot, the emitted light scattered through the tissue will generate a highly varying speckle pattern on the sensor plane.If the excitation is not focused, multiple sources emit simultaneously.The light emitted by these sources sums incoherently, and hence the variance of the speckle pattern on the sensor decays.They show that as in the two-photon case, speckle variance is proportional to This score is an important advance; however, it is hard to evaluate for low SNR biological sources.When a low number of photons is spread over multiple sensor pixels, the captured image is very noisy and an attempt to evaluate its variance will result in the noise variance rather than the speckle variance, see Fig. 2.
While one can reduce imaging noise by using a longer exposure or by increasing the power of the excitation laser, the optimization of [23] requires capturing many images of the same target, and usually the neuron bleaches well before convergence.Confocal energy score: In this research we suggest a new score for evaluating a wavefront shaping modulation.While the previous scores corrected only the illumination arm, inspired by [25,26,34], we suggest to correct both arms, using two modulations denoted by u i , u o .As emission is very weak, the fact that the SLM correction is applied before imaging helps collect all photons at one sensor pixel and improve SNR.To score the focusing quality of each modulation we will use the intensity at the central pixel, rather than the total intensity throughout the sensor.In supplementary we prove that we can express the energy of the central pixel as: As mentioned in Eq. ( 1), the energy of ν i is bounded, and due to reciprocity the same applies for ν o .It is easy to see that this score is maximized when ν i , ν o are both one-hot vectors, which bring all energy to a single joint entry k and have zero energy at all other entries.That is, the score of Eq. ( 3) is maximized when the excitation modulation u i brings all light to one of the particles o k , and the modulation at the imaging arm u o corrects the wavefront emitted from the same particle o k and brings all of it into the central pixel.
In particular, if the excitation and emission wavelengths are sufficiently similar, we explain in supplementary that it is best to use the same modulation at both the illumination and imaging arms, and the score of Eq. (3) reduces to k |ν k | 4 as in the variance-maximization and two-photon cases.
While the confocal score is equivalent to the variance maximization score above, it is significantly less susceptible to noise.This is due to the fact that the small number of photons we have at hand are collected at a single spot, rather than being spread over multiple pixels.Fig. 2(cd) shows the images emitted from a single neural spot with and without modulation in the imaging arm, and the significant noise reduction.
In this work we have explicitly optimized the confocal score (Eq.( 3)) using standard Hadamard basis optimization [12], detailed in Supplementary Sec 2. Overall this optimization approach is significantly slower than the iterative phase conjugation of [24], but can handle dense targets.In our case the excitation and emission wavelengths are similar, yet not identical.While using two different modulations at the excitation and emission arms results in a modestly improved correction, it also doubles the required number of measurements.Alternatively, we can neglect the wavelength difference and use the same correction in both arms.Despite the approximation, the faster optimization is advantageous in the presence of photobleaching.We compare a single correction to two corrections in supplementary section 5.
Experimental results
We image slices of mice brain with EGFP neurons, excited at 488nm and imaged at 508nm.We use two types of aberrations.In the first case we use thin brain slices of thickness 50µm, which are almost aberration-free.We generate scattering by placing these slices behind a layer of chicken breast tissue (200 − 300µm thick) or parafilm whose optical properties were measured in [23].The advantage is that since the fluorescence is present only in a thin 2D layer we can obtain a clean reference from a validation camera.In a second experiment we image through 400µm-thick brain slices.Since the target is 3D it is not always possible to capture clear aberration-free references.All experiments were approved by Institutional Animal Care and Use Committee (IACUC) at the Technion (IL-149-10-2021), as well as the Hebrew University of Jerusalem (MD-20-16065-4).More details about the mices are included in supplementary file.
In Fig. 3 we visualize some results of our algorithm.Fig. 3(a) shows an image of the initial excitation pattern from the validation camera behind the tissue.As can be observed, the tissue exhibits significant scattering.In Fig. 3(b), we visualize the excitation light after optimizing the wavefront shaping modulation, which is nicely focused into a sharp spot.In Fig. 3(c-d), we also show the emitted light.Before optimization a wide area is excited and we can see the neuron shape.At the end of the optimization a single point is excited.In Fig. 3(e-f), we visualize the views of the front main camera, providing the actual input to our algorithm.Before optimization the emitted light is scattered over a wide sensor area.As a low number of photons is spread over multiple sensor pixels, the captured imaged is noisy.At the end of the optimization the aberration is corrected and all the photons are brought into a single sensor pixel.In Fig. 3(g), we demonstrate the actual point spread function of the tissue aberration.For that we have used the correction only at the illumination arm and focused the illumination to excite a single spot.We used a blank SLM at the imaging arm so the emitted light is not corrected.One can see that the aberration of a single fluorescent spot is rather wide.Each of the images in Fig. 3 is normalized so that its maximum is 1, but as indicated by the colorbar, the spot at the focused images received a much higher number of photons than the wide scattering images of unfocused light, despite the fact that all images were captured under equal exposure and equal excitation power.
To better appreciate the noise handled by our algorithm, in Fig. 4 we visualize a 41 × 41-pixel window captured by the main camera, at a few iterations of our algorithm.In the beginning this image is very noisy, because a small number of emitted photons are spread over multiple sensor pixels.However, when the optimization proceeds, it finds a better modulation correction.As a result, all the laser power is brought to excite one spot and all the emitted photons are collected to one sensor spot and measured with better SNR.
We use the recovered wavefront shaping modulation to image a wide area rather than a single spot.In Fig. 5 we demonstrate results for a thin brain slice beyond chicken breast and parafilm.For that, we excite a wide area and use a correction only at the imaging arm.Due to the memory effect [35,36], the same modulation can allow us to image a small local patch rather than a single spot.With the correction, the neuron is observed with a much higher contrast and even the axons (thin lines around the neuron) emerging from it, whose emission is much weaker, can be partially observed.Additional results are provided in supplementary file.We note that noise is less visible in the aberrated images of Fig. 5(a), because these images are captured with a much longer exposure compared to the optimization images in Fig. 4.While it is possible to capture a few noise-free images of such targets, it is not possible to do that for all the optimization iterations without bleaching.In Fig. 5 we mark with an arrow some points at which the algorithm has converged.One could see a darker spot as such points have bleached during optimization.
In Fig. 6 we show imaging through a 400µm brain slice.Due to the 3D structure of the target, to isolate a neuron at a single-depth plane we have to use a slow confocal scanning, where the modulation is placed on both arms and is tilt-shifted to excite and image different spots of the target.We compare this with an uncorrected confocal scan that is significantly aberrated.In some cases there is a small shift between the corrected and uncorrected confocal images, because the recovered modulation has also shifted the focal spot inside the target.Note that our confocal scanning is currently implemented by tilting and shifting the modulation pattern on the SLM and not with a proper galvo mirror.Since this approach is very slow we could only scan small windows.We also include a full frame image from the validation camera behind the tissue, but due to the 3D fluorescence structure, in some cases this does not provide a clear ground truth.Additional results are provided in the supplementary file.
In Sec. 6 of the supplementary file we compare our confocal score with the variance maximization approach of [23], showing that our approach can converge using a significantly smaller number of photons.We also compare against one of the non-local approaches by [29].This approach assumes that a single modulation can correct a wide image region rather than a single spot.Our evaluation shows that when memory-effect (ME) exists over a wide extent, this algorithm can indeed recover good modulations, but the quality of the results degrades for a small ME, where the size of the iso-planatic patches that can be corrected with a single modulation is small.
Discussion
In this research, we have analyzed score functions for wavefront shaping correction, using non-invasive feedback at the absence of a guiding star.To assess focusing quality, we seek a score function that can measure a non-linear function of the light emitted by different sources.This is naturally achieved when using two-photon fluorescent feedback, but is harder to achieve with linear fluorescence.We show that by using a confocal correction at both the illumination and imaging arms we can measure such a non-linear feedback, which is maximized when all excitation light is brought into one spot.Moreover, the fact that our system uses a correction of the emitted light as part of the optical path allows us to bring the limited number of emitted photons into a single sensor spot, facilitating a high SNR measurement.
It is worth noting that while our approach can recover focusing modulations, it can converge to different spots on the fluorescent target, depending on initialization.While we cannot know where it has focused, we can use memory effect correlations to image the surrounding window.Another drawback of the approach is that in our current implementation it takes about 30 min to optimize for one modulation pattern.Some of this can be largely optimized with better hardware, such as a faster SLM.However, the iterative optimization is inherently slower than the power iterations of [24].Beyond better hardware, we are exploring algorithmic alternatives which can accelerate optimization.
Supplementary Material 1 Mathematical derivation
We provide a derivation of an image formation model using a transmission matrix formulation, and use it to prove the non-linear properties of our confocal score.This derivation explains why the confocal score can favor modulations that focus at a single spot despite the fact that it collects linear fluorescence feedback.
Image formation model
Consider a set of K fluorescent particles inside a sample, and denote their positions by o 1 , . . ., o K .We assume the SLM in the illumination arm is illuminated with a spatially uniform plane wave and use the SLM to display a complex 2D electric field that we denote by u.Although u is a 2D field, we reshape it as a 1D vector.We also use ν to denote a K × 1 vector of the field propagating through the sample at each of the K fluorescent sources.
The relation between u and ν is linear and can be described as a multiplication by a (very large) matrix where T i is the incoming transmission matrix, describing the forward coherent light propagation in the tissue.We note that T i is specific to the tissue sample being tested, and different tissue samples are described by very different transmission matrices.For thick tissue, T i can be an arbitrarily complex matrix incorporating multiple scattering events in the tissue.Likewise, if the light returning from the target is coherent, its propagation to the SLM of the imaging arm can be described as T o ν, where T o is the coherent back-propagation transmission matrix.We denote by ζ the wavefront placed on the SLM of the imaging arm, and by D(ζ) a diagonal matrix with ζ on its diagonal.Our SLMs are placed at the Fourier planes of the imaging system, and we denote by F the Fourier transform of the wavefront from the SLM to the camera sensor.With this notation, coherent light propagating from the target particles to the sensor, through the SLM, can be expressed as In the fluorescent case, the emissions from different points are incoherent, hence the recorded intensity is the sum of emitted intensity from each fluorescent bead: where |ν k | 2 is the energy of the excitation light arriving at particle o k , and ↓,k is the wavefront arriving to the sensor from o k .In Eq. ( 6) α denotes the type of fluorescent excitation.The simplest case α = 1 is known as single-photon fluorescence where the emission is linear in the excitation energy |ν k | 2 .In two-photon fluorescence, α = 2, namely the emission is proportional to the squared excitation.Phase conjugation: When the incoming and outgoing light have the same wavelength, the Helmholtz reciprocity principle leads to wave conjugation.Namely, if we record the wavefront emitted from a source point inside the tissue and illuminate with the conjugate wavefront, light will focus at the same point.This implies that the returning transmission matrix is the transpose of the incoming one [39]: In the fluorescent case, T i and T o describe propagation at two different excitation and emission wavelengths.However, for linear single-photon fluorescence the excitation and emission wavelengths are relatively similar and T o ≈ T i ⊤ .While our algorithm does not require the incoming and outgoing transmission matrices to be the same, this similarity will help us draw intuition on what is being optimized.Normalization: we assume for simplicity that our transmission matrices are normalized such that every column or row has a unit energy, that is for every k This means that the total amount of energy that can arrive to particle o k or emerge from it is fixed.As the laser energy is fixed, we also assume w.l.o.g. that all illumination vectors have a unit norm ∥u∥ = 1.As propagation through the tissue does not generate new energy, every incoming vector u should satisfy ∥T i u∥ ≤ 1 and thus the energy at the target is also bounded:
Score functions
We provide a longer derivation of the modulation scores mentioned in the main paper and explain how they can evaluate focusing.The total intensity score: Consider a configuration where we only try to correct the illumination arm, and the SLM in the imaging arm is not used (equivalently, D(ζ) in Eq. ( 6) is the identity matrix).One of the earlier scores that were considered in the literature [2,15] is just the total intensity measured over the entire sensor plane.Using Eqs. ( 6) and ( 8) it is easy to show that this total intensity score reduces to Since the energy at the target is bounded (see Eq. ( 9)), for the case α > 1 this score is maximized when ν is a onehot vector, which equals 1 at a single entry and zero at all the others.However, in the single-photon case where α = 1, Eq. ( 10) reduces to the total power in ν, S TI (u) = k |ν k | 2 , and since this power is fixed, the same amount of energy returns whether we spread the excitation power over multiple fluorescence sources or bring all of it into one spot.The variance maximization score: Boniface et al. [23] have recently suggested that to evaluate focusing with linear single-photon feedback, one should maximize the variance of the intensity measured by the sensor.The idea is that if we manage to focus all the excitation light at a single spot, the emitted light scattered through the tissue will generate a highly varying speckle pattern on the sensor plane.If the excitation is not focused, multiple sources emit simultaneously.The light emitted by these sources is summed incoherently, and hence the variance of the speckle pattern on the sensor decays.A short calculation provided in [23] shows where n is the number of image pixels.Hence, as before, the score is a non-linear function of the power at different fluorescent particles and is maximized by a one-hot vector.
Confocal score: Below we derive the relation between the energy at the central pixel and the vector of fluorescent power ν, with the goal of showing that the confocal score is a non-linear function of the power of the excitation vector.
Denoting by F 0,→ the central row of the Fourier transformation from the SLM plane to the image plane, the contribution of the particle o k to our measurement at the central pixel is Assuming w.l.o.g., that the central pixel is measuring the DC component of the Fourier transformation, corresponding to simple averaging, we can express the value at the central pixel in Eq. ( 12) as the product of the SLM modulation u o at the imaging arm, with the corresponding column of the outgoing transmission matrix: By modulating both illumination and imaging arms, we can express the energy of the central pixel as: with ν i = T i u i and ν o = u oT T o .As mentioned in Eq. ( 9), the energy of ν i is bounded, and due to reciprocity the same applies for ν o .It is easy to see that this score is maximized when ν i , ν o are both one-hot vectors at the same entry k.That is, the score of Eq. ( 14) is maximized when the excitation modulation u i brings all light to one of the particles o k , and the modulation at the imaging arm u o corrects the wavefront emitted from the same particle o k and brings all of it into the central pixel.
In particular, if the excitation and emission wavelengths are sufficiently similar so that we can approximate T o ≈ T i ⊤ , it is best to use the same modulation at both the illumination and imaging arms, and the score of Eq. ( 14) reduces to k |ν k | 4 as in the variancemaximization and two-photon cases.Optimization: In this work we have explicitly optimized the confocal score (Eq.( 14)) using standard Hadamard basis optimization [12], detailed in Sec. 2. Overall the Hadamard optimization is significantly slower than [24].We note that when all fluorescent sources o k have the same power, there are K different solutions that can maximize Eq. ( 14) and an optimization may converge to any of them.Also note that we constrain the solution such that regardless of the position of o k , it will be imaged in the central pixel.If o k is not at the center of the frame, the wavefront ν o , placed at the Fourier plane of the imaging system will contain a tilt, shifting o k to the central pixel.
Optimization algorithm
This section provides details on our optimization algorithm.In Sec. 5 below we compare two different correction models.(i) Assumes that as our excitation and emission wavefronts are sufficiently similar, and use the same modulation u in both the illumination and imaging arms.(ii) Solve for a different modulation in each arm.Here we provide the mathematical details for the optimization algorithm.
Same modulation in both arms
Assuming the excitation an emission wavelengths are sufficiently similar T o ≈ T i ⊤ the confocal score derived in Eq. ( 14) reduces to: where η is fluorescence efficiency, T i k is the k'th row of the input transmission matrix from the illumination SLM to the particles,T o k is the k'th column of the output transmission matrix from the particles to the imaging SLM and u is the wavefront correction vector on both SLMs.
Our optimization scans a set of binary phase masks and for each of them finds the phase that maximizes the score function.We show below that if only the phase of this mask can be adjusted, the score can be expressed as the sum of two sinusoidal functions, which can be measured using 5 samples.By capturing 5 shots we can select the optimal value for this phase and proceed to the subsequent mask.
Dictionary representation: In our optimization, we mark the phase function on the SLM in the mth iteration as ψ (m) (ω).We express the phase mask as a superposition of a dictionary of N binary masks, weighted by phases ϕ n : Here ϕ n is a scalar coefficient, µ n (ω) is a binary vector whose size is equivalent to the number of entries in the SLM, and ω is the index of an entry in the SLM plane.Since we use phase-only SLMs, the amplitude in each pixel remains constant and the wavefront correction vector is expressed as: In the mth iteration of our optimization, we choose a single dictionary element n and update ϕ n to obtain: while the values of the coefficients ϕ 1 , . . ., ϕ n−1 , ϕ n+1 , . . ., ϕ N are fixed to the value they had in iteration m − 1.
Sinusoidal model:
where A, B, C are complex scalars, and we denote their amplitude and phase by |A|, |B|, |C|, ∠A, ∠B, ∠C respectively.
Proof.We consider a binary mask µ n (ω), and denote its complimentary by µ n (ω).With this notation we can express the modulation u of Eq. ( 17) as We mark u (1) = e iψm−1(ω) • µ n (ω), and so, u(ϕ) simplifies to: With this notation we can express k = T i k u (1) and v (2) k = T i k u (2) .A short calculation shows that |ν k | 4 follows the form of Eq. (19).The confocal score of Eq. ( 15) is equivalent to Optimization details: Using the above claim, we can consider J ≥ 5 equally spaced phases ϕ j = [1 . . .J] 2π J , generate the wavefronts ψ m (ω) j = ψ m−1 (ω) + ϕ j • µ n (ω) and place them on both SLMs.For each phase we measure the intensity at the central pixel.We use these J intensity measurements to fit a sinusoid and find the value ϕ n maximizing Eq. (19).
In practice we notice that |C| is usually smaller than |A|, |B| and it is enough to fit the score function as a single sinusoid, hence we can also reduce the number of samples to J = 3 or J = 4. Fig. 2: Imaging setup: A laser beam is exciting a fluorescent target at the back of a tissue layer, and fluorescent emission is scattered again through the tissue, reflects at a dichroic beam-splitter and is collected by a main (front) camera.We place two SLMs in the Fourier planes of both illumination and imaging arms to allow reshaping these wavefronts.A validation camera views the fluorescent target at the back of the tissue directly.This camera is not actually used by the algorithm, and is only assessing its success.LP=linear polarizer, BS=beam-splitter, DBS=dichroic beam-splitter, BPF=bandpass filter, M=mirror, L1 . . .L8=lenses, Obj=Objective.Components are listed in Table 1.
Laser
Coherent
Binary masks:
We divide the SLM to √ N × √ N super-pixels, and choose to use binary Hadamard masks as our dictionary.We show several such Hadamard masks in Fig. 1.The advantage of this dictionary is that the ϕ n we adjust in each iteration is displayed on half of the SLM area, rather than on a single pixel.Thus, it has more impact on the intensity we measure and the measurement is less sensitive to noise.
Since the speckles from a single source have a compact support, as demonstrated e.g. in Fig. 3 of the main paper, we conclude that the correction mask has more content at the lower frequencies and less content on the higher ones.To account for that, we first scan the Hadamard basis elements that correspond to low frequencies and later the Hadamard elements with higher frequency content.We return to the same dictionary element more than once during optimization, and we invest more optimization iterations in the low frequency elements.
Different modulations in illumination and imaging arms
So far we considered a model constraining the modulations on both SLMs to be the same.This is only an approximation to the desired modulation, because the illumination and excitation wavelengths can differ.To solve for two different modulations we alternate between updating the ϕ n values in the excitation modulation, and updating the ϕ n values of the emission modulation.If we fix one modulation and vary the other, the confocal score S Conf (ϕ n ) is a single sinusoid rather than a sum of two sinusoids as in Eq. ( 19), and its phase can be fitted with J = 3 samples.In Sec. 5 below we compare two different modulations to a single modulation in both arms.We find out that two different modulations can lead into a somewhat better correction, but this optimization also doubles the number of exposures.Therefore, in the presence of photo-bleaching, a single modulation usually leads to better results.
Setup
In Fig. 2 we visualize the full imaging setup for wavefrontshaping correction.All components are listed in Table 1.A laser beam illuminates a tissue sample via a microscope objective.A phase SLM in the illumination arm modulates the illumination pattern.The illumination wavefront propagates through the scattering tissue and excites the fluorescent target behind it.We wish to image that target, but the emitted light is scattered again through the tissue on its way to the objective.Scattered light is collected via the same objective, and reflected at a dichroic beam-splitter.A second phase SLM at the imaging arm modulates the emitted light.Lastly, the modulated light is measured by the front main camera.
In our setup the SLMs are placed in the Fourier plane of the system.We use a 10nm bandpass filter in the imaging arm to image a relatively monochromatic light.
As we want to correct the scattering of the sample itself rather then aberrations in the optical path, before starting the optimization we place the sample so the fluorescent light has smallest support in the main camera, then focus the objective of the validation camera (Obj2 in Fig. 2) such that the neuron is in focus.We elaborate on the construction and alignment of this setup in Sec. 8.
We image slices of mice brain with EGFP neurons, excited at 488nm and imaged at 508nm.
We used two types of aberrations.In the first case we used thin, brain slices of thickness 50µm which are almost aberration free, and generate scattering by placing these slices behind a layer of chicken breast tissue (200 − 300µm thick) or parafilm whose optical properties were measured in [23].The advantage is that since the fluorescence is present only in a thin 2D layer we can obtain a clean reference from a validation camera.In a second experiment we image through thick brain slices.The slice were originally cut to be 400µm thick, though while squeezing between two cover-glass some of the water evacuated and the resulting slice is somewhat thinner.Since the target is 3D it is not always possible to capture clear aberration free references even with the help of a validation camera.The thin slices contain Betz neurons coming from a 6w female Strain:C57BL6 mouse.The thick slices contain pyramidal cells of layer 2-3 in the cortex, from a triple transgenic mouse, Rasgrf2-2A-dCre;CamK2a-tTA;TITL-GCaMP6f line 7m old.
Additional results
In Fig. 3 we show additional images of the focused spot achieved with our modulation.In particular, in the last row we show one failure example where the algorithm has converged on two spots rather than one.
Additional results imaging a thin brain layer with neurons behind chicken breast or parafilm layers are presented in Figs. 4 and 5.While the modulation we recover focuses at a single spot, due to the memory effect we can use it to image an area behind the tissue rather than a single spot.In Sec. 7 below we explain the tilt-shift adjustments required.
In Fig. 6 we show additional results imaging through a thick 400µm brain slice.Since the fluorescent target has 3D variation, we use a confocal scanning to isolate a neuron at a single depth.A confocal scanning of our modulation is significantly better than an uncorrected confocal scan.Also due to the 3D structure it is not always possible to capture a good full-frame reference from the validation camera.
In Fig. 7 we test the actual extent of the memory effect for such samples.We run the algorithm until convergence and then start to tilt-shift the modulation so that the focal spot translates along a line at the back of the tissue.We capture the intensity of the translating spot from the validation camera and plot it as a function of distance from the original focal spot.To demonstrate the variance of such curves we plot 4 curves shifting the point at 4 dif- ferent directions.We include two examples, one focusing through a 400µm brain slice and one focusing through a layer of parafilm.We include a few insets visualizing how speckles around the focal spot increase as it is translates.
One modulation against two modulations
As mentioned above, in our single-photon fluorescence case the emission and excitation wavelengths are similar and we can approximately use the same modulation in both excitation and emission arms.Alternatively we can solve for a different modulation in each arm, but this doubles the number of exposures.Below we compare these two approaches experimentally and learn that while two different modulations can lead to a better correction, they also require a longer acquisition with more photo-bleaching.In the presence of photo-bleaching the faster, one-modulation approach usually leads to better results.
Comparing two algorithms on the same sample is challenging, because if we try to run two algorithms sequentially on the same sample, the second one would have worse results just because more photo-bleaching took place.To avoid this we run the two algorithms in alternating order.The n'th iteration is composed of 4 steps: 1. Test basis element 2n for the single modulation case.
Test basis element n for the excitation arm in the
two-modulation case.
3. Test basis element 2n + 1 for the single modulation case.
4. Test basis element n for the emission arm in the twomodulation case.
The implication is that the single modulation approach scans the elements of the Hadamard basis faster than the two-modulation approach.In the results of Fig. 8 we scan the Hadamard basis 4 times in the single modulation optimization and only twice in the two-modulation optimization.We separate the measurement of the single and two-modulation approaches and plot them as two separate curves in Fig. 8(a).The single modulation approach leads to higher energy at the focused spot.To evaluate the quality of the modulation independently of photo-bleaching, at the end of the optimization we capture 3 images of the focus spot. 1) With two different modulations, scanning each basis element twice.2) With a single modulation with the same number of exposures, effectively scanning each element 4 times.3) An image of a single modulation in the middle of the optimization, which effectively scans the basis elements only twice as the two-modulation optimization.While this image corresponds to the modulation in the middle of the optimization, we capture it at the end, under equal photobleaching conditions.The single modulation that scanned the basis elements 4 times gives the best results.However if we compare the two modulations to a single modulation that has scanned the elements for the same number of times, the results are better.
In Fig. 8 we demonstrate this comparison twice, once when the fluorescent target is a neuron behind a layer of parafilm, and once when we spread a set of fluorescent beads behind the parafilm.In Fig. 8(f-g) we show images of the target from the validation camera in the beginning of the optimization and at the end.For the beads example, one can see that the bead at which the algorithm has converged (marked by an arrow) is significantly dimmer at the end of the optimization, showing the strong bleaching.
In Fig. 9 we show the phase of the modulation masks we found with the different approaches described above.The correlation between the modulations at the two different wavelengths is typically within the range [0.6, 0.8].The correlation between modulations found at two different wavelengths to the same modulation for both wavelengths is also in the same range.It is unclear if this is really a measure of the chromatic memory effect correlation, or if this difference is a result of the noisy optimization.
Comparison with alternative wavefront-shaping scores
We compare our confocal score with the variance maximization approach of [23], showing that our approach can converge using a significantly smaller number of photons.
We also compare against one of the non-local approaches in [29].This approach assumes that a single modulation can correct a wide image region rather than a single spot.Our evaluation shows that when memory-effect exists over a wide extent this algorithm can indeed recover good modulations, but the quality of the results degrades for a short ME, where the size of the iso-planatic patches that can be corrected with a single modulation is small.We note that it is very hard to run two algorithms on the same data under equal noise conditions, as due to photo-bleaching the 2nd algorithm would run on a significantly weaker fluorescent source.To support a controlled evaluation we used simulated transmission matrices syn-thesized using a multi-plane propagation model.We assume the tissue has a thickness of L = 250µm and the aberration of light propagating through this volume is formed by equally spaced, planar pseudo-random phase masks.We use different number of aberration layers to test different memory-effect extents as described below.The final spot using the same modulation in both channels.Since it scans the dictionary elements more times, the result is better (a higher energy is measured at the central pixel).(e) The result of one mask in the middle of the optimization when the number of dictionary elements is equivalent to what the two-modulation approach has scanned at the end of the optimization.When scanning the dictionary for the same number of times, the one-modulation result is somewhat lower than what was achieved by two modulations.(f-g) The fluorescent target from the validation camera at the beginning of the optimization and at the end, notice the strong bleaching at the focusing bead, marked with an arrow.We show the final modulations in Fig. 9.
Given a transmission matrix we can simulate the images formed under any modulation of choice and add a different amount of noise.Hence we can evaluate the results of different algorithms while varying any parameter of interest.
Speckle-variance maximization
As mentioned in the main paper both our confocal score and the variance maximization score seek to optimize the same non-linear function of fluorescent intensity, yet they are not equivalent in terms of SNR.To demonstrate this we run both algorithms, and in each iteration, as we update the modulation we also vary the laser power such that the SNR of the score we measure in each algorithm will be kept fixed.Since our algorithm attempts to bring all photons to one sensor pixel, the images we measure are less noisy then speckle-variance score (which does not use a correction on the emitted speckles).Alternatively, we can achieve the same SNR as the speckle variance score with a weaker excitation power.The correlation between the one-mask approach and the two-mask approach is also in the same range.
Fig. 10(d) demonstrates the curve of laser power as a function of iteration number in each algorithm.One can see that our algorithm can work with a significantly lower number of photons, meaning that the algorithm has a much better chance to converge without much photobleaching.We have repeated the evaluation with three different transmission matrices, where we changed the statistics of the aberration layers to be less forward scattering, hence the speckle support is wider.We can see that for a wider aberration the gain of the confocal score is higher (compare the different rows of Fig. 10, for the small support we require 5.8× fewer photons, for the wide one we requiere 43× fewer photons).This is because for a wider speckle pattern the speckles are spread between more pixels, hence the images are noisier and estimating the speckle variance without modulating the emission path is harder.
Non-local scores
Next, we compare against a recent non-local approach by [29].This assumes that a single modulation can correct a wide image region rather than a single spot.It uses a wide illumination and only correct the imaging arm, where a good modulation should lead to a sparse image, measured using a combination of a modified entropy and a variance maximization scores.
In Fig. 11 we evaluate this approach using two different transmission matrices.In the first case we used a transmission matrix generated by a single aberration layer.In this case the memory effect correlation holds over a wide extent, and a single modulation can correct a wide isoplanatic region.In the second case we simulated a trans-mission matrix with 3 different aberration layers equally spaced between depth 0 to 250µm.This is a more realistic approximation of a scattering tissue exhibiting volumetric aberration, but the extent of the memory effect is much shorter.In this case there is no single modulation which can fully correct the entire image.Fig. 11 shows a comparison of this algorithm against our confocal score with two structures of hidden neurons.When memory-effect exists over a wide extent the non-local score of [29] can indeed recover good modulations, but the quality of the results degrades when ME range is short, and the size of the iso-planatic patches that can be corrected with a single modulation is small.
Tilt-shift correction
Below we explain the acquisition of wide area images of Figs. 4 and 5. Given a wavefront shaping modulation that applies to one fluorescent particle inside the tissue sample, we correct nearby ones using the tilt-shift memory effect.For that, we denote by u o1 x , u o2 x two speckle fields obtained on the sensor plane of our main camera (where x denotes spatial position on this plane), generated by fluorescent particles at o 1 , o 2 .We focus the objective such that the sensor plane is conjugate to the plane containing the fluorescent sources.The tilt-shift memory effect [36,40] implies that, for small displacements, u o1 is correlated with a tilted and shifted version of u o2 and thus can be approximated as:
Calibration and alignment
Below we elaborate on various calibration and alignment details.
First, to correctly modulate the Fourier transform of Three shifts of correction pattern and the resulting images.We compare this reconstruction against the reference from the validation camera.
Combined reconstruction Reference
the wave, the illumination SLM needs to be at the focal plane of the lens right after it (L3 in the system figure), and the imaging SLM at the focal plane of the lens before it (L6).We do this alignment using another camera focused at infinity.We use this camera to view the SLM through the relevant lens, forming a relay system.We adjust the distance between the SLM and the lens until the calibration camera can see a sharp image of the SLM plane.We also ensure that the distance between the sensor of the main/validation cameras and the lenses L7/L8 attached to them is set such that the cameras focus at infinity.A second step of the alignment is to focus the excitation laser and the system camera on the same target plane.In our setup the sample and the objective of the validation camera are mounted on two motorized z-axis (axial) translation stages.We use fluorescent beads with no aberrating tissue and adjust the axial distance between the beads and the objective of the main camera (Obj1 in the setup figure) such that the main camera sees a sharply focused image of the bead.Then we adjust the distance of the validation objective (Obj2 in the setup figure) from the beads so that we see a sharp image of the same beads in the validation camera.We then want the laser to generate its sharpest spot on the same plane.Assuming the validation and main camera are focused at the same plane, we adjust the position of the lens L4 until the validation camera sees a sharp laser spot.
After the system has been aligned, we need to determine two mappings.The first one is between frequencies to pixels on the SLM.A second, more challenging one is the registration between the two SLMs, so that we can map a pixel on the imaging SLM to a pixel on the illumination SLM controlling the same frequency.We start with a mapping between frequencies to the SLM on the imaging arm.We first put a calibration camera that can image the camera SLM plane directly when it receives laser light.Since the SLM is conjugate to the aperture of the objective we see an illuminated circle on the SLM plane, corresponding to the numerical aperture of the imaging system.The center of this circle gives us a first estimate of the zero (central) frequency of the Fourier transform.Assuming we know the focal length of L6, the SLM pitch and the wavelength of the emitted light, we can map frequencies to SLM pixels using simple geometry.Alternatively we can display on the SLM sinusoidals of various frequencies.This shifts the image on the sensor plane.By measuring the shift resulting from each sinusoidal we can calibrate the mapping between frequencies and SLM pixels.To align between the two SLMs we place a fluorescent bead behind a scattering tissue.We capture a few images of the speckles resulting from this bead (at emission wavelength) and use a phase diversity approach [41] to estimate the complex wavefront that emerges from this bead.The Helmholtz reciprocity (phase conjugation) principle states that if the conjugate of this wavefront is placed on the illumination SLM, it will focus into a point behind the tissue.However, we need to determine how to position this modulation on the SLMs keeping in mind that tilt and shift on these planes may impact the results.For the imaging SLM this is less of an issue because we have already marked the zero frequency and because a tilt of the imaging SLM only shifts the position of the spot on the sensor.However, if the illumination SLM is not registered correctly, we may see a sharp spot behind the tissue, but it will be shifted from the bead of interest and will not excite it.Thus, we tilt and shift the modulation on the illumination SLM until the intensity we measure on the main camera (when the modulation correction is on) is maximized.After this is achieved we can fine-tune the shift on the imaging SLM, which is equivalent to the position of the zero (central) frequency that we have previously marked by looking at the illuminated circle.
To use a recovered modulation pattern to image a larger region of the fluorescent target, we leverage the tilt shift memory effect.To apply the scan we need to recover the parameter α of Eq. (24), determining the ratio between the tilt and shift.For that, after we recover the modulation pattern, we place it on the illumination SLM and use the validation camera to view the focused spot.We then adjust the ratio between tilt and shift of the modulation pattern so that we can move the focused spot in the validation camera, while preserving maximal intensity.
Fig. 2 :
Fig. 2: Types of fluorescent data: (a,b) emission from invitrogen fluorescent microspheres (excitation/emission at 640/680nm).A single bead is excited and the emitted light scatters through the tissue to generate a wide speckle pattern in (a).In (b) we use an aberration correction in the imaging arm so that the sensor measures a sharp spot.With such synthetic sources we can image a speckle pattern at high SNR, but this is not always the case with real biological samples.For example, (c,d) demonstrate fluorescent emission from EGFP neurons (excitation/emission at 488/508nm), which is orders of magnitude weaker.In (c) a single fluorescent spot is excited, and the limited number of photons it emits are spread over multiple pixels.Noise is dominant, and any attempt to measure the variance of this image will evaluate the noise variance rather than the speckle variance.In (d) aberration correction is applied in the optics.As all photons are collected by a single pixel, SNR is drastically improved.Note that images (c,d) are taken under equal exposure and equal excitation power.
Fig. 3 :
Fig. 3: Wavefront shaping results: we visualize views from the validation and main cameras, each row demonstrates a different tissue sample.(a-b) The excitation light as viewed by the validation camera at the back of the tissue.Due to significant scattering, at the beginning of the algorithm when no modulation (mod.) is available, a wide speckle pattern is generated.After optimization, the modulated wavefront is brought into a single spot.(c-d) By placing a band-pass filter on the validation camera, we visualize the emitted light with and without the modulation correction.(e-f) Views of the emitted light at the main front camera with and without the modulation correction.Note that this is the only input used by our algorithm.Without modulation, light is scattered over a wide image area and the image is noisy.A sharp clean spot can be imaged when the limited number of photons is brought into a single sensor pixel.(g) By correcting the emission such that a single spot is excited and leaving the imaging path uncorrected, we can visualize the actual aberration of a single fluorescent point source.The top two examples demonstrate a thin brain layer behind parafilm, and the lower one is a thick brain slice.
Fig. 4 :
Fig.4: Convergence with noise.Views from the main camera at a few iterations of our algorithm.In the beginning a small number of photons is spread over multiple sensor pixels and the resulting image is very noisy.As the algorithm proceeds and a wavefront shaping modulation is recovered, the low number of photons is brought to a single sensor spot and the measured image has a higher SNR.SNR is calculated by capturing 40 images with the same modulation.
Fig. 5 :
Fig.5: Imaging a wide area.We image a thin fluorescent brain slice behind a scattering layer.The top two images correct scattering through chicken breast tissue and the lower ones through parafilm.(a) Image of the neuron from the main camera with no correction, strong scattering is present and the neuron structure is lost.(b) Image with our modulation correction, the neuron shape as well as some of the axons are revealed.(c) A reference image of the same neuron, from the validation camera.The arrow marks a spot at which the optimization has converged.This spot is darker as it bleached during the optimization.
Fig. 6 :
Fig. 6: Imaging a wide 3D target, through a 400µm thick fluorescent brain slice.(a) A confocal image of the neuron from the main camera with no correction, strong scattering is present and the neuron structure is lost.(b) A confocal image with our modulation correction, the neuron shape as well as some of the axons are revealed.(c) A reference image of the same neuron, from the validation camera.Due to the 3D spreading of the fluorescent components, the validation camera cannot always capture an aberration-free image of the target.
Fig. 1 :Claim 1 .
Fig.1: Examples of Hadamard masks used in our optimization.The masks are placed in the Fourier plane and they are all cropped to the circular aperture area.In the figure, the leftmost mask has low frequencies, while the rightmost one corresponds to higher frequencies.
Fig. 3 :
Fig.3: Additional wavefront shaping results: we visualize views from the validation and main cameras.(a-b) The excitation light as viewed by the validation camera at the back of the tissue.Due to significant scattering, at the beginning of the algorithm when no modulation (mod.) is available, a wide speckle pattern is generated.After optimization, the modulated wavefront is brought into a single spot.(c-d) By placing a band-pass filter on the validation camera, we visualize the emitted light with and without the modulation correction.(e-f) Views of the emitted light at the main front camera with and without the modulation correction.Note that this is the only input used by our algorithm.Without modulation, light is scattered over a wide image area and the image is noisy.A sharp clean spot can be imaged when the limited number of photons is brought into a single sensor pixel.(g) By correcting the emission such that a single spot is excited and leaving the imaging path uncorrected, we can visualize the actual aberration of a single fluorescent point source.Each row demonstrates a different tissue sample.The top example demonstrates a thin brain layer behind parafilm, and the two lower ones are from a thick brain slice.The lowest row demonstrates a failure example where the optimization converged at two spots rather than one.
Fig. 4 :
Fig. 4: Additional results, a thin brain layer behind chicken breast tissue: (a) Image of the neuron from the main camera with no correction, strong scattering is present and the neuron structure is lost.(b) Image with our modulation correction, the neuron shape as well as some of the axons are revealed.(c) A clean reference image of the same neuron, from the validation camera.
Fig. 5 :Fig. 6 :
Fig. 5: Additional results, a thin brain layer behind parafilm: (a) Image of the neuron from the main camera with no correction, strong scattering is present and the neuron structure is lost.(b) Image with our modulation correction, the neuron shape as well as some of the axons are revealed.(c) An undistorted image of the same neuron, from the validation camera.
Fig. 7 :
Fig. 7: Extent of memory effect correlation: we plot the decay of memory effect correlation in our samples, by translating a recovered modulation to nearby points and measuring the energy of the focused spot behind the tissue, at the validation camera.Insets show sample shapes of the focused spots.As translation is larger, more speckles arise around the desired focused spot.Left: focusing through a 400µm brain slice.Right: focusing through a layer of parafilm.We plot 4 different line scans to demonstrate the variance of such curves.
Fig. 8 :
Fig. 8: Comparing the usage of two different modulations on the emission and excitation wavelengths, vs. a common modulation for both.The top row shows results on a brain slice and the lower one uses as the target a set of fluorescent beads behind parafilm.(a) The confocal intensity at the central pixel as a function of the number of iterations.Since the two-mask approach doubles the number of measurements for each dictionary element, its convergence is slower.(b) The initial image at the main camera.(c) The final confocal spot at the main camera using 2 different modulations.(d)The final spot using the same modulation in both channels.Since it scans the dictionary elements more times, the result is better (a higher energy is measured at the central pixel).(e) The result of one mask in the middle of the optimization when the number of dictionary elements is equivalent to what the two-modulation approach has scanned at the end of the optimization.When scanning the dictionary for the same number of times, the one-modulation result is somewhat lower than what was achieved by two modulations.(f-g) The fluorescent target from the validation camera at the beginning of the optimization and at the end, notice the strong bleaching at the focusing bead, marked with an arrow.We show the final modulations in Fig.9.
Fig. 9 :
Fig. 9: Modulation masks resulting from Fig. 8, in the Fourier plane.The correlation between the masks of the two different wavelengths is typically within the range [0.6, 0.8].The correlation between the one-mask approach and the two-mask approach is also in the same range.
Fig. 10 :
Fig. 10: Comparing the variance maximization score against the confocal score.During optimization we reduce the laser power so that the SNR of the captured images remains fixed.Using the confocal score, as the estimated modulation improves, more photons are brought into a single pixel and hence we can capture a high SNR image with a weaker laser power, reducing photo-bleaching.(a) Fluorescent target from the validation camera.(b) Scattering by a single spot (one row of the transmission matrix), illustrating the speckle spread.(c) Input image from the main camera.(d) Laser power at each iteration in both algorithms.(e) Final image from the main camera.The confocal algorithm results in a spot, and the variance maximization algorithm results in a speckle pattern.(f) Final image from the validation camera where both algorithms excite a single spot.The original area of the fluorescent target is marked with a square.The position of the focused spot inside the fluorescent area can vary.The different rows simulate 3 different speckle supports.When scattering is wider the advantage of the confocal score is more dominant, whereas for the smallest support we require 5.8× fewer photons and for the wider one 43×.
Fig. 11 :
Fig.11: Comparison against the non-local score of[29].(a) The ground truth fluorescent target.(b-c) Results using a transmission matrix with short range memory effect.In this case a single correction cannot explain the full image and the non-local approach leads to degraded results.(e-f) Results in the presence of long-range memory effect, where the non-local approach is successful.We simulate two fluorescent targets, the top one is sparser and easier to handle and the lower one is denser.In each example the top row shows images from the main camera under a wide illumination (for the confocal algorithm, we run it until convergence and then use a wide illumination while only correcting the imaging arm).In the lower row we use the recovered modulation in the imaging arm and show a view from the validation camera.If the modulation is good it should bring all light into a single sharp spot.The overlaid red dots illustrate the actual position of the fluorescent target.The lowest row shows a plot of the decay of memory effect correlation in each of the transmission matrices.
Fig. 12 :
Fig. 12: Using the tilt-shift memory effect to see a wide area behind the tissue.The top row demonstrates three different shifts of the recovered correction pattern.The second row demonstrates the image we capture by placing this shifted mask on the SLM of the imaging arm.Each shift allows us to see a different sub-region of fluorescent sources.By merging 21 × 21 such shifts we get the wider image in the lowest row.We compare this reconstruction against the reference from the validation camera.
Table 1 :
List of components. | 15,796.2 | 2023-01-26T00:00:00.000 | [
"Engineering",
"Medicine",
"Physics"
] |
Alternative Uses of Luminescent Solar Concentrators
Over the last decade, the field of luminescent solar concentrators (LSC) has experienced significant growth, as noted by the increasing number of studies. However, so far, most of the devices developed have only been employed in a simple planar configuration coupled with silicon photovoltaic solar cells. This type of device is essentially a solar window whose main objective is to produce electrical power. However, due to the intrinsic nature of LSC, that is, the ability to absorb, downshift and concentrate the solar radiation that impinges on it, this photonic device can be used in alternative ways. In particular, in this review, we will explore several non-conventional applications in which LSCs are used successfully, including as solar bioreactors for algae development, photo reactors for organic synthesis, and as greenhouses.
Introduction-The Luminescent Solar Concentrator Device
In its simplest form, a luminescent solar concentrator (LSC) is a fluorophore-containing waveguide with downshifting properties that do not contain any active components able to convert light into electricity [1]. Under irradiation, the fluorophores absorb part of the incident light and re-emit photons at longer wavelengths. The mismatch in refractive indices between the waveguide material and the air causes total internal reflection (TIR) inside the waveguide. For an LSC with a typical refractive index of around 1.5, this means that about 75% of the photons can be internally reflected and guided towards the edges of the waveguide. The properties of the waveguide and of the luminophore, such as its photoluminescent quantum yield and the Stokes shift (i.e., the difference between the spectral position of the maximum of the first absorption band and the maximum of the fluorescence emission), are of fundamental importance to obtain an efficient LSC device [1,2].
The idea of concentrating light using fluorescence and TIR was initially considered in the 50s for collecting light from scintillation counters [3]. However, the concept of using sunlight in combination with LSCs was introduced only in the late 70s with the primary objective of reducing the cost of the electricity produced by silicon photovoltaic (PV) panels [4,5].
In the case of LSC devices used to convert light into electricity, index-matched PV cells are placed at the edges to collect the concentrated photon flux arising from the waveguide ( Figure 1) [1,2,[6][7][8]. As the LSC top area exposed to sunlight is larger than the waveguide edges area, the flux of incident radiation onto the PV devices can be significantly increased [9,10]. This, in theory, allows us to reduce the effective area of PVs used to generate an identical amount of electrical power, minimizing the PV material consumption and reducing device cost. In addition, LSCs offer adaptability to the needs of architects for building-integrated photovoltaics (BIPV), such as various colors, shapes, transparencies, lightweight options, and flexibilities, making them an attractive option for high-rise buildings [11].
However, the excellent tunability of the LSC devices allows expanding their use away from their initial concept of electricity generation, realizing a multitude of other applications. In this review, we will explore alternative uses of LSC devices in which the main source of radiation is sunlight, focusing on photochemical reactors for the synthesis of organic molecules, agriculture systems for the growth of plants, and photoreactors for microalgae biomasses (Figure 2).
LSC Devices as Photochemical Reactors
Since Giacomo Ciamician conceived the notion of photochemistry [27] and particularly during the last few years, photochemical transformations have created a multitude of opportunities for the innovative synthesis of organic products toward more sustainable production of chemicals [28]. In this context, the use of solar radiation as a light source is highly desirable due to its relative abundance. However, the efficiency of photochemical transformations is still fairly low, mainly due to the difficulty of delivering photons to reactants [29]. During the last decades, most research on LSCs focused mainly on their use as complementary sunlight collectors for traditional PV systems, with considerable effort placed on improving the power output. In particular, the materials used in LSC configurations have rapidly developed. Several luminophores have been developed, such as dyes [7,[12][13][14], hybrid materials [15][16][17], and semiconducting quantum dots (QDs) [18][19][20][21][22][23][24]. Similarly, a large body of work has been dedicated to designing and optimizing the host-polymer waveguides, such as polymeric host matrix systems [25,26].
However, the excellent tunability of the LSC devices allows expanding their use away from their initial concept of electricity generation, realizing a multitude of other applications. In this review, we will explore alternative uses of LSC devices in which the main source of radiation is sunlight, focusing on photochemical reactors for the synthesis of organic molecules, agriculture systems for the growth of plants, and photoreactors for microalgae biomasses (Figure 2). Nanoenergy Adv. 2022, 2, FOR PEER REVIEW 2 transparencies, lightweight options, and flexibilities, making them an attractive option for high-rise buildings [11]. During the last decades, most research on LSCs focused mainly on their use as complementary sunlight collectors for traditional PV systems, with considerable effort placed on improving the power output. In particular, the materials used in LSC configurations have rapidly developed. Several luminophores have been developed, such as dyes [7,[12][13][14], hybrid materials [15][16][17], and semiconducting quantum dots (QDs) [18][19][20][21][22][23][24]. Similarly, a large body of work has been dedicated to designing and optimizing the host-polymer waveguides, such as polymeric host matrix systems [25,26].
However, the excellent tunability of the LSC devices allows expanding their use away from their initial concept of electricity generation, realizing a multitude of other applications. In this review, we will explore alternative uses of LSC devices in which the main source of radiation is sunlight, focusing on photochemical reactors for the synthesis of organic molecules, agriculture systems for the growth of plants, and photoreactors for microalgae biomasses (Figure 2).
LSC Devices as Photochemical Reactors
Since Giacomo Ciamician conceived the notion of photochemistry [27] and particularly during the last few years, photochemical transformations have created a multitude of opportunities for the innovative synthesis of organic products toward more sustainable production of chemicals [28]. In this context, the use of solar radiation as a light source is highly desirable due to its relative abundance. However, the efficiency of photochemical transformations is still fairly low, mainly due to the difficulty of delivering photons to reactants [29].
LSC Devices as Photochemical Reactors
Since Giacomo Ciamician conceived the notion of photochemistry [27] and particularly during the last few years, photochemical transformations have created a multitude of opportunities for the innovative synthesis of organic products toward more sustainable production of chemicals [28]. In this context, the use of solar radiation as a light source is highly desirable due to its relative abundance. However, the efficiency of photochemical transformations is still fairly low, mainly due to the difficulty of delivering photons to reactants [29].
Two main solar reactors have currently been developed, one in which light is diffuse and the other in which it is concentrated. Non-concentrating reactors, such as flatbed reactors [30], are usually oriented at a fixed angle toward the sky. They can benefit from diffuse light but present a limited reaction yield due to the low light intensity. On the other hand, by concentrating sunlight on the reactor surface to improve efficiency, concentrating reactors such as solar dish concentrators and solar ovens have been realized [29]. However, this type of reactor requires tracking devices; otherwise, the lens focus is lost as the Earth moves.
A possible alternative lies in LSC devices combined with microflow technologies. The device consists of a polymer LSC waveguide in which reactor microchannels are shaped within. The luminophore embedded in the LSC can effectively harvest sunlight and focus the remitted photons towards the reaction channels, maximizing the number of photons that reach the liquid reaction mixture.
This type of structure combines several advantages: (i) the LSCs can exploit efficiently diffuse irradiation; thus, they do not require a tracking device [31]; (ii) as the re-emitted light does not need to travel up to the edges of the LSC waveguide but instead can be focused on the nearest embedded microchannel, it is possible to use high luminophore concentration, increasing the photon flux that reaches the reactants, while keeping self-absorption losses to a minimum [2,32]; (iii) due to the tunability of the luminophore [2], it is possible to select the most appropriate emission wavelength to perform specific photochemical reactions.
One of the first examples of LSC photo-microreactors (LSC-PM) reported a luminescent red dye, Lumogen Red 305 (LR305), embedded in the polymer matrix of the LSC ( Figure 3). [32] This dye had been previously used in LSC devices for PV applications. The dye was paired with a common photocatalyst, Methylene Blue (MB) [33], and injected into the reactor microchannels. Carefully selecting the luminophore-catalyst pair is crucial to maximizing solar energy absorption. In this case, the coupling of LR305 with MB is particularly advantageous due to the excellent spectral overlap between the LR305 emission and MB absorption spectra ( Figure 3b). As proof of concept, the singlet oxygen-mediated cycloaddition of 9,10-diphenylanthracene (DPA) was used as a benchmark to show the system's capability. A 4.5-fold acceleration of the reaction with a 200 ppm LR305 doped LSC-PM was obtained compared to the reactor without luminophores (Figure 3c). This setup was also shown to work outdoors with scattered cloud cover. Similar to the lab test, the conversion of the dye-sensitized LSC-PM was significantly higher than in the non-sensitized reactor, even if the scattered clouds influenced the final yield.
An improved design based on modulating the residence time of the reactants within the microchannel depending on the amount of light impacting the reactor was shown to adequately handle light fluctuations caused by passing clouds (Figure 3d) [34,35].
A few follow-up studies probed the efficiency limits of the LSC-PM concept, showing that up to 1 m 2 LSC-PM optimized systems could be built, confirming the potential of LSCs as photochemical reactors [36]. The importance of using the fused deposition modeling (FDM) technique to 3D print molds for LSC manufacturing was also demonstrated. Rapid prototyping by 3D printing allowed for several reactor designs to be tested, in which several parameters, such as the number and spacing of channels and flow and collection sections, can be optimized [37].
However, this type of LSC-PM design presents stability issues due to the use of polydimethylsiloxane (PDMS) as a polymer for the waveguide. While PDMS is compatible with aqueous media and some alcohols, most organic solvents are soluble in PDMS, causing swelling of the polymer, leaching of the dye, and overall degradation of the device [38].
Thus, new LSC-PM devices have been designed in which the LSC is physically separated from the PM reactors. For example, a combination of commercially available poly(methyl methacrylate) (PMMA) LSC plates and solvent-resistant perfluoroalkoxy alkane (PFA) capillaries have been used [39]. In this design, chemically resistant capillaries are embedded into an LSC device. Due to the material's higher refractive index compared to PDMS-based LSC-PM, this new design allowed a substantial photon-flux improvement, with 40% more photons directed to the reaction channels, leading to significant rate accelerations [39]. Further, by using PMMA as a polymer matrix, different organic dyes can be introduced into the waveguide to target specific emission wavelengths by downshifting the incident solar light. However, this type of LSC-PM design presents stability issues due to the use of polydimethylsiloxane (PDMS) as a polymer for the waveguide. While PDMS is compatible with aqueous media and some alcohols, most organic solvents are soluble in PDMS, causing swelling of the polymer, leaching of the dye, and overall degradation of the device [38].
Thus, new LSC-PM devices have been designed in which the LSC is physically separated from the PM reactors. For example, a combination of commercially available poly(methyl methacrylate) (PMMA) LSC plates and solvent-resistant perfluoroalkoxy alkane (PFA) capillaries have been used [39]. In this design, chemically resistant capillaries are embedded into an LSC device. Due to the material's higher refractive index compared to PDMS-based LSC-PM, this new design allowed a substantial photon-flux improvement, with 40% more photons directed to the reaction channels, leading to significant rate accelerations [39]. Further, by using PMMA as a polymer matrix, different organic dyes can be introduced into the waveguide to target specific emission wavelengths by downshifting the incident solar light.
All the LSC-PM devices based on polymer matrices present an inherent problem: since the desired emission wavelength is fixed by the luminophore incorporated in the All the LSC-PM devices based on polymer matrices present an inherent problem: since the desired emission wavelength is fixed by the luminophore incorporated in the polymer, a dedicated reactor has to be fabricated for different photocatalysts as they require a different emission profile to match their absorption.
To address this issue, Tao et al. modified the LSC-PM design using a liquid dye. In this case, the luminophore is not dispersed within a polymer matrix but instead introduced into a suitable matrix as a fluid [40]. Similar liquid-based luminophores for LSCs have already been shown to be efficient for PV application [21,22].
A schematic illustration of the modified LSC-PM device and light transfer process is shown in Figure 4. The use of a liquid LSC design presents several advantages: (i) luminophores have, in general, better solubility and dispersibility in organic solvents, exhibiting equivalent or higher photoluminescence quantum yield (PLQY) than when embedded in solid matrices [21,41]; (ii) the emission wavelength can be easily changed by replacing the luminophore liquid solution; (iii) the light emission distribution of the fluid in 3D space can be controlled more easily. Overall, this design allows a higher degree of flexibility than the LSC-PM system based on luminophores embedded in polymers. cant acceleration in conversion rate and apparent reaction kinetics. In particular, under 1 sun illumination, the presence of fluorescent fluids increased the conversion rate by about 30% (Figure 3c). Subsequent work reported the scale-up of the proposed liquid-based reactor. The authors fabricated reactors with up to eight channels, demonstrating conversions comparable to those achieved in a single-channel reactor [42]. Other LSC designs have been proposed, in which the luminophores are used to increase the intensity of the ultraviolet B region (UV-B, wavelength of 280-315 nm) [43]. Since the UV-B wavelengths in the solar spectrum are less than 5% of the total UV radiation reaching the Earth's surface, these devices have so far only been tested with UV lamps. However, their proof-of-concept could be extended to LSC-PM reactors, in which the UVB emission intensity can be increased by approximately a factor of ten for an 8 W UV lamp [43,44]. Several prototype reactors (~6 cm long) were fabricated by curing a transparent photosensitive 3D-printed resin. The geometry of the reaction channel was optimized as a helix to allow fast mixing compared to the straight channel. In contrast, for the waveguide in which the liquid dye is placed, it was found that a cylindrical shape around the reaction channel presents optimal operation, providing homogeneous irradiation and the highest photon flux. Different dyes have been tested, and the optimized device showed a significant acceleration in conversion rate and apparent reaction kinetics. In particular, under 1 sun illumination, the presence of fluorescent fluids increased the conversion rate by about 30% (Figure 3c).
Subsequent work reported the scale-up of the proposed liquid-based reactor. The authors fabricated reactors with up to eight channels, demonstrating conversions comparable to those achieved in a single-channel reactor [42].
Other LSC designs have been proposed, in which the luminophores are used to increase the intensity of the ultraviolet B region (UV-B, wavelength of 280-315 nm) [43]. Since the UV-B wavelengths in the solar spectrum are less than 5% of the total UV radiation reaching the Earth's surface, these devices have so far only been tested with UV lamps. However, their proof-of-concept could be extended to LSC-PM reactors, in which the UVB emission intensity can be increased by approximately a factor of ten for an 8 W UV lamp [43,44].
In summary, certain requirements are needed to develop an LSC-PM device, such as a durable waveguide material with a high refractive index. Additionally, the luminophore should have a specific emission profile to properly activate the photocatalyst and a narrow spectral distribution to avoid side reactions or degradation of the starting precursors [45]. Distinctive LSC designs are also required to obtain high reaction yields.
As mentioned before, even if the LSC-PM presents several advantages compared to other types of photoreactors, there are still issues and criticalities that need to be further addressed (Table 1). In particular, the system's stability under different light conditions and environments is rarely reported, while this is a critical parameter for their future commercialization.
LSC for Controlled Environment Agriculture
According to projections by the Food and Agriculture Organisation (FAO) of the United Nations, by 2050, total food production will have to increase by 70% compared to current volumes to feed the world's growing population [46]. Improved yields and better farm management are therefore a necessity. In this framework, controlled environment agriculture (CEA) could play a fundamental role. CEA is an agricultural method that allows a degree of control over the environmental conditions of plants, such as lighting, temperature, carbon dioxide level, relative humidity, moisture content, and nutrient composition. CEA's goal is to improve crop yield, plants' resilience to climate change, sustainability, and food security [47].
Among the many factors that affect plant growth in a CEA system, light is one of the most important, as it has been shown that not all wavelengths of light give the same response in green plants [48]. Traditionally, light-emitting devices (LED) have been widely used in CEA and crop management due to their ability to control the quality of emitted light [49,50]. However, to operate, they still require external energy input, and they usually present high initial costs for the farmers.
In this framework, LSCs can represent a low-cost and sustainable solution for the modulation of the sunlight impinging on the crops, allowing the emission spectrum to be altered and potentially even redirecting the light.
The first examples of LSC for agricultural uses are based on fluorescence plastic films in which the main objective was to downshift green light towards the red region [51][52][53][54]. For example, by using these types of LSC greenhouse covers, tomato yield can be increased by 19.6%, and the number of flowering branches on rose bushes can be increased by 26.7% compared with the sheets without the fluorescent dye [55]. However, these films presented a reduced light transmission in the 400-700 nm range, also known as photosynthetically active radiation (PAR) ( Figure 5) [51][52][53][54].
For this reason, new combinations of luminescent pigments have been introduced, allowing absorb ultraviolet radiation to be absorbed and downshifting that into PAR, increasing the amount of useful light for the growth of strawberries [57]. For example, it was observed that by using a blue pigment with emissions of up to 480 nm, strawberry production could be increased by 11%. In another approach, by using a photoluminescent phosphor with suitable excitation/emission properties, such as Ca 0.4 Sr 0.6 S:Eu 2+ , it was shown that the photosynthetic activity (measured as CO 2 assimilation rate) of Spinacia oleracea can be improved by more than 25% compared to a purely reflecting reference film [58].
The first examples of LSC for agricultural uses are based on fluorescence plastic films in which the main objective was to downshift green light towards the red region [51][52][53][54]. For example, by using these types of LSC greenhouse covers, tomato yield can be increased by 19.6%, and the number of flowering branches on rose bushes can be increased by 26.7% compared with the sheets without the fluorescent dye [55]. However, these films presented a reduced light transmission in the 400-700 nm range, also known as photosynthetically active radiation (PAR) ( Figure 5) [51][52][53][54]. For this reason, new combinations of luminescent pigments have been introduced, allowing absorb ultraviolet radiation to be absorbed and downshifting that into PAR, increasing the amount of useful light for the growth of strawberries [57]. For example, it was observed that by using a blue pigment with emissions of up to 480 nm, strawberry production could be increased by 11%. In another approach, by using a photoluminescent phosphor with suitable excitation/emission properties, such as Ca0.4Sr0.6S:Eu 2+ , it was shown that the photosynthetic activity (measured as CO2 assimilation rate) of Spinacia oleracea can be improved by more than 25% compared to a purely reflecting reference film [58].
The majority of these LSC films use polyethylene (PE) as waveguide material as it is a low-cost polymer with high transparency and with moderate heat retention, such that it is primarily used in agriculture environments [59]. However, as the film is completely airand water-proof, there is the requirement to provide forced ventilation and water to the plants shielded by this type of material. To obtain an improved microclimate, nonwoven fabrics of spun-bond polypropylene (PP) have been proposed to replace the PE films. Recently, inorganic luminescent phosphors have also been incorporated into such PP fibers and used as luminescent layers in greenhouses. In this study, the authors compared the dynamics of growth in late cabbage plants (Olga variety) and leaf lettuce (Emerald variety) under an ordinary nonwoven PP film and under the spun-bond PP containing Y2O2SEu luminophores [60]. Interestingly, the application of the spun-bond containing the phosphorus, despite lowering the overall PAR, led to an increase in the rate of photosynthesis, the water-use efficiency, and the accumulation of the total biomass of plants by 30-50%, showing that luminescent PP-based fabrics could be a promising solution to optimize the plants' growth [60].
In general, the luminophore used in these studies had fixed wavelength emissions, poor photon conversion efficiencies, and in some cases, limited stability. In this context, recently, QDs-based LSCs have been employed successfully in CEA. QDs possess excellent optical properties such as high photon conversion efficiency, a tunable emission spectrum, and improved stability, making them an attractive candidate to replace the dye used in LSCs for CEA [61]. The majority of these LSC films use polyethylene (PE) as waveguide material as it is a low-cost polymer with high transparency and with moderate heat retention, such that it is primarily used in agriculture environments [59]. However, as the film is completely air-and water-proof, there is the requirement to provide forced ventilation and water to the plants shielded by this type of material. To obtain an improved microclimate, nonwoven fabrics of spun-bond polypropylene (PP) have been proposed to replace the PE films. Recently, inorganic luminescent phosphors have also been incorporated into such PP fibers and used as luminescent layers in greenhouses. In this study, the authors compared the dynamics of growth in late cabbage plants (Olga variety) and leaf lettuce (Emerald variety) under an ordinary nonwoven PP film and under the spun-bond PP containing Y 2 O 2 SEu luminophores [60]. Interestingly, the application of the spun-bond containing the phosphorus, despite lowering the overall PAR, led to an increase in the rate of photosynthesis, the water-use efficiency, and the accumulation of the total biomass of plants by 30-50%, showing that luminescent PP-based fabrics could be a promising solution to optimize the plants' growth [60].
In general, the luminophore used in these studies had fixed wavelength emissions, poor photon conversion efficiencies, and in some cases, limited stability. In this context, recently, QDs-based LSCs have been employed successfully in CEA. QDs possess excellent optical properties such as high photon conversion efficiency, a tunable emission spectrum, and improved stability, making them an attractive candidate to replace the dye used in LSCs for CEA [61].
One of the first examples of QD-LSCs for CEA used fiber-LSC with embedded CuInSe x S 2−x /ZnS QDs. This type of QD has been shown to be an excellent luminophore for LSCs for solar windows [62]. Instead of using LSC in thin-film form, in this case, the authors chose to prepare fiber-based LSC, allowing the absorbed sunlight to be delivered to the lower canopy of plants. In particular, the LSC fibers could provide an almost double amount of PAR to the lower canopy leaves converted from absorbed sunlight, which increased the yield of tomatoes in a commercial greenhouse by 7% (Figure 6a-c).
Subsequent work investigated the growth of lettuce in a more controlled experiment in which only the film cover of the greenhouse was varied, while all the other parameters were kept constant (Figure 6d-f) [63]. The authors used three different films: an orange QD film (emission centered at 600 nm), a red QD film (emission centered at 660 nm), and one control with no colour conversion film. The results showed that both colour conversion films gave better results than the control, with increased edible dry mass (13 and 9%), edible fresh mass (11% each), and total leaf area (8 and 13%) for the orange QD and red QD, respectively, compared to the control, despite a reduction of up to 11% in the PAR range observed for the red QD film. These results demonstrated the benefits of using QD films for plant growth with a potential application in space.
QD film (emission centered at 600 nm), a red QD film (emission centered at 660 nm), and one control with no colour conversion film. The results showed that both colour conversion films gave better results than the control, with increased edible dry mass (13 and 9%), edible fresh mass (11% each), and total leaf area (8 and 13%) for the orange QD and red QD, respectively, compared to the control, despite a reduction of up to 11% in the PAR range observed for the red QD film. These results demonstrated the benefits of using QD films for plant growth with a potential application in space. The main disadvantage of this thin-film technology is that it does not fully exploit the properties of the LSC device. In fact, all these studies use the LSC concept only for its The main disadvantage of this thin-film technology is that it does not fully exploit the properties of the LSC device. In fact, all these studies use the LSC concept only for its downshifting properties, without exploiting the possibility of concentrating specific wavelengths and obtaining electricity by coupling a PV device.
In this regard, Corrado et al. developed a semi-transparent system combining an LSC with conventional c-Si solar cells named the wavelength selective photovoltaic system (WSPV) by the authors [56]. Instead of positioning the PV devices at the edges of the LSC, the authors placed the cells on the front face, allowing for direct use of sunlight and reducing the traveling distance of the light. The use of the luminescent dye allowed some of the blue and green wavelengths to be absorbed and downshifted into red wavelengths and guided to the solar cell for conversion into electrical energy, while the rest of the sunlight was available to the plants [56]. The placement and types of PV cells used in the LSC panels were varied for performance comparisons, with the best configuration exhibiting a 37% increase in power production compared to the reference. Further, accelerated tests showed that this type of LSC could be stable for up to 20 years.
In a follow-up study, the same authors reported the influence of their WSPV device on tomato photosynthesis [67]. Under low light conditions, the photosynthesis rate of tomatoes was found to be similar, while light-saturated photosynthesis was slightly lower for tomatoes under WSPV. On the other hand, small water-saving potentials were found for plants under WSPV. The results were somewhat inconclusive, and a better understanding of light modulation with this type of device is needed to understand the actual effect on plant growth.
The use of LSCs for CEA systems can also have a negative effect on plant growth. A recent study described the use of an LSC film with a dye luminophore (SG80) in a high-tech greenhouse horticulture facility, and two experimental trials were conducted by growing eggplants [68]. By filtering more than 85% of UV light and 58% of far-red wavelengths, the presence of the film improved energy and resource use efficiency, mainly thanks to an 8% net reduction in heat load and an 18% reduction in water and nutrient consumption of the plants. However, the film strongly reduced the PAR, leading to a 25% reduction in total season fruit yield [68].
While previous thin-film planar solutions could be efficient for improving the growth of plants, they present an intrinsic issue: as the LSC is mounted in a planar configuration, a limited number of internally generated photons can exit the film from the escape light cone into free space. Instead, for optimum plant growth, ideally, a high fraction of the internally generated photons should be extracted in one direction toward photosynthetic organisms.
In recent work, Yin et al. presented an elegant solution based on a spectral-shifting and unidirectional light-extracting photonic LSC that does not require a reflector, and that can be used for improved light use on lettuce cultivation in both greenhouses and indoor farms (Figure 7) [69]. Instead of using a classical planar structure, the LSC based on a LF305 dye was molded with a microdome structure. By employing such a design, the light can be unidirectionally directed toward lettuce cultivation. In particular, more than 70% of the externally extracted light can be redirected in the forward direction and used to increase photosynthesis efficiency. In contrast, the classical planar structure only provides 9% of the internally generated light in the forward direction. The microdome-based LSC film significantly increased the fresh weight and dry weight of lettuce above ground on day 20 after transplanting by 21.7% and 30.3%, respectively. In addition, the presence of the microdome-based LSC led to the extensive growth of lettuce. Overall, a 20% improvement in lettuce production was observed in both indoor facilities with electric lighting and in a greenhouse with natural sunlight [69]. Overall, to fabricate an LSC device for CEA applications, the system needs to satisfy the following requirements ( Table 2): because each plant has a specific need for certain light wavelengths, the appropriate luminophore must be selected; absorption of light by the luminophore should not excessively reduce PAR; moreover, to compensate for this reduction, ideally, the luminophore should have a high quantum yield. Although thin-film devices present a low cost of fabrication and installation, they should be avoided in favor of more complex architectures that direct as much of the re-emitted light as possible toward the plants. In addition, the waveguide material should be chosen judiciously so that it is compatible with the luminophore but also resistant to weathering, UV, etc. While LSCs could be a low-cost solution to improve the crop yield, as also visible in Table 3, currently, there is no consensus in the literature on the actual effects of luminescent films and LSCs on fruit yield and quality in major greenhouse crops. This is mainly due to the lack of fundamental and systematic research and calls for more rigorous studies in this field [70].
LSC for Microalgal Production
During the last 60 years, microalgae biomasses have been used to synthesize a wide range of compounds of industrial interest (such as β-carotene, and astaxanthin), food products, pharmacy products, and cosmetics [71]. In addition, microalgae have shown promising results for biofuel production [72] and as a tool for carbon dioxide bioremediation [73].
The rate of microalgae synthesis, and thus the productivity, is strongly correlated with the light absorption properties of microalgae and the quantity and quality (i.e., which wavelengths) of light that reach them. In particular, photoinhibition plays an important role: if the irradiance is too high, it can lead to a decline in the maximum quantum yield of photosynthesis. In many microalgae species, irradiances in the range of 150-400 µmol photons m −2 s −1 (approximately 10% of full sunlight) can already cause photoinhibition [74,75].
This means that during the majority of the day, the microalgae system operates at low photosynthetic efficiency, absorbing but not effectively using incoming radiation.
Different light distribution systems have been developed to control the light intensity and optimize microalgae growth. Temporal light dilution systems, also known as flashing light systems, are based on inducing dark/light cycles by turbulent mixing of the biomass. This process exposes the microalgae to high light intensity for a short period of time; thus, the average intensity stays below the saturation point. This method can effectively yield a 3-fold increase in algal biomass production [76][77][78]. While efficient, the temporal light dilution systems require sophisticated mixing systems, which may not be technically feasible for large open pond biomass cultivation and may have high operational costs. In this regard, spatial light dilution methods do not require specific mixing compared to the flashing light systems and thus could be more advantageous. In this method, by using specific light distribution devices, the photon flux density is lowered below the 10% threshold. Systems such as optical fibers [79], parabolic dishes [74], or LSCs can be used to obtain an irradiance below the saturation intensity.
Among all spatial light dilution systems, LSC panels appear to be the most suitable method for microalgal culture systems. In fact, LSC devices are easy to fabricate and do not require a sun tracking system. Further, as it has been shown that exposing photosynthetic organisms to UV light may result in direct photosynthetic damage and causes photoinhibition [80], an LSC allows photons to be downshifted from the UV region to the PAR, reducing the damage to the cells while also yielding an increase in biomass production [81].
In one of the first examples of the LSC concept applied for microalgae production, dyes were incorporated in a double tubular reactor (algae inside and dye solution outside), demonstrating a growth enhancement for certain dyes with high quantum yields and stability, which had suitable absorption/emission spectra for the artificial light sources used [82]. Similarly, dyes embedded in acrylic films have been used as sunlight filters to control the growth of green algae Chlorella vulgaris and cyanobacteria Gloeothece membranacea. Under different light-modulated spectra, the growth and chlorophylla production were significantly promoted in these two microalgae species [83].
In one of the few examples in which LSC was used coupled to PV devices in conjunction with an algae growth system, Prufert-Bebout et al. demonstrated that microalgae and cyanobacteria grew as well or better under wavelength-selective LSC panels [84]. However, no data were provided on the efficiency of the PV system [84].
Many fluorescent coatings with different dyes and conjugated polymers have been explored and shown the beneficial effect of using this type of LSC device for the growth of microalgae biomasses [81,[85][86][87]. However, the use of common dyes as luminophores for LSCs suffers from the aggregation-caused quenching effect (ACQ), where the fluorescence will be quenched in a high concentration or the aggregate state. This phenomenon limits the usage of high concentrations in film doping, leading to limited spectral shift and poor stability [88,89]. The effect of ACQ can be mitigated by using luminophores that exhibit aggregation-induced emission (AIE) [90]. In molecules such as AIE lumogens (AIEgens), non-radiative deactivation is significantly reduced in the aggregated state due to physical constraints on both intramolecular spins and π-π stacking due to the highly convoluted molecular core [90].
Since their first use in LSC devices [88,91], AIE active molecules have shown promising results, demonstrating a viable design for LSC-PV applications [92,93]. Recently, this concept has also been applied for spectra shifting for augmented photosynthesis of microalgae. For example, AIE active diketopyrrolopyrroles (DPP) have been embedded in PMMA films and used in culturing green algae (Chlorella sp.) (Figure 8) [94]. By applying the film to the front cover of a culture flask, it was possible to increase the flux density of photosynthetic photons of orange-red (600-650 nm) by 4% and of deep red (650-700 nm) by 3.4%. This led to an increase in biomass by 26% and in total fatty acid methyl esters by 28.8% [94]. In follow-up research, by using tailored AIE-DPPs molecules with strong deep-red emissions, an increase in the total fatty acid methyl ester content of microalgae of more than 62% was demonstrated, confirming the promising application of AIEgens to accelerate microalgae mass production [95].
In general, the LSC layers are positioned between the algae culture and light source in a so-called front-side conversion. However, another configuration is possible in which the LSC is placed behind the culture to capture and convert transmitted light. The latter configuration can also be modified with the addition of a reflective backing layer (similar to the mirror configuration used for PV-LSC applications) to have a double-pass effect on the transmitted light back. This configuration is useful only for a dilute concentration of microalgae in which a meaningful amount of light can reach the LSC layer.
Using the backside configuration, Brabec et al. positioned an LSC as a backlight converter integrated into a flat panel algae reactor (Figure 9) [96]. In this case, the luminophore chosen was strontium sulfide doped with divalent europium, Ca0.59Sr0.40Eu0.01S. The photoluminescent phosphor was deposited on top of a mirror back-plate and used to culture H. pluvialis. The presence of the backside LSC in the reactor increased the algae growth and oxygen production, mainly due to the increased amount of red light in the reactor. In particular, a 36% greater biomass generation at low densities was observed. In follow-up research, by using tailored AIE-DPPs molecules with strong deep-red emissions, an increase in the total fatty acid methyl ester content of microalgae of more than 62% was demonstrated, confirming the promising application of AIEgens to accelerate microalgae mass production [95].
In general, the LSC layers are positioned between the algae culture and light source in a so-called front-side conversion. However, another configuration is possible in which the LSC is placed behind the culture to capture and convert transmitted light. The latter configuration can also be modified with the addition of a reflective backing layer (similar to the mirror configuration used for PV-LSC applications) to have a double-pass effect on the transmitted light back. This configuration is useful only for a dilute concentration of microalgae in which a meaningful amount of light can reach the LSC layer.
Using the backside configuration, Brabec et al. positioned an LSC as a backlight converter integrated into a flat panel algae reactor (Figure 9) [96]. In this case, the luminophore chosen was strontium sulfide doped with divalent europium, Ca 0.59 Sr 0.40 Eu 0.01 S. The photoluminescent phosphor was deposited on top of a mirror back-plate and used to culture H. pluvialis. The presence of the backside LSC in the reactor increased the algae growth and oxygen production, mainly due to the increased amount of red light in the reactor. In particular, a 36% greater biomass generation at low densities was observed. In summary, certain requirements must be met for using an LSC for microalgae growth (Table 4), such as having a luminophore with a high emission quantum yield and absorption in the UV region to reduce cell damage. Additionally, the photon flux density must be carefully controlled to avoid photosaturation and thus inhibition of microalgae growth. The device must also provide uniform light at different depths of the reactor.
Although the LSC can be an excellent spatial light dilution system for microalgae growth, several issues still need to be addressed, starting from finding the optimal architecture to avoid emission losses and increasing the illumination of deeper areas of the reactor. In addition, there are still no systematic studies on the long-term stability of LSC devices. This parameter is especially critical if the LSC system will be partially submerged in microalgae culture. Table 4. Summary of the key parameters of LSC devices for microalgal production.
Requirements
Advantages Issues In a front-light design, part of the reemitted light is lost as it is not directed towards the algae.
•
The long-term stability needs to be verified In summary, certain requirements must be met for using an LSC for microalgae growth (Table 4), such as having a luminophore with a high emission quantum yield and absorption in the UV region to reduce cell damage. Additionally, the photon flux density must be carefully controlled to avoid photosaturation and thus inhibition of microalgae growth. The device must also provide uniform light at different depths of the reactor. Table 4. Summary of the key parameters of LSC devices for microalgal production.
Requirements
Advantages Issues Although the LSC can be an excellent spatial light dilution system for microalgae growth, several issues still need to be addressed, starting from finding the optimal architecture to avoid emission losses and increasing the illumination of deeper areas of the reactor. In addition, there are still no systematic studies on the long-term stability of LSC devices. This parameter is especially critical if the LSC system will be partially submerged in microalgae culture.
Conclusions and Outlook
So far, LSC devices have been mostly used as luminophore-doped polymer waveguides coupled with PV cells to separate the area from which the sunlight is collected from where it is converted into electricity. However, LSCs are powerful photonic devices that can be used in a large variety of applications thanks to their high versatility. In particular, in this review, we discussed alternative LSC systems in which sunlight is used as the primary light source, such as solar photosynthetic reactors for organic products, greenhouses for improving plant growth, and solar photobioreactors for microalgae biomass production. In all these cases, the LSC improved the system's efficiency due to its ability to downshift and concentrate specific wavelengths.
In developing such LSC devices, particular attention should be placed on testing the devices under natural light conditions. In fact, even if many luminophores such as organic dyes, phosphors, or QDs report high internal quantum efficiencies, when they are used under low-intense irradiation such as sunlight, they exhibit lower values, and only a smaller amount of incident light is actually converted and exploited by the reaction centers. Further, while increasing the concentration of luminophores could allow greater absorption of sunlight, it can actually be detrimental to the final efficiency due to reabsorption and scattering effects.
Overall, the efficiency of the spectral downshifting with respect to light directed towards the reaction centers (e.g., cell culture, organic photocatalysts, and plants) should be the main parameter to control for obtaining high efficiency.
So far, the majority of LSCs have adopted simple geometries; however, engineering their shape and aspect ratio could yield higher efficiencies. In particular, by borrowing concepts developed for photoreactors based on classical concentrators, it could be possible to integrate them for developing improved LSCs architectures. In this type of system, the LSC device could be placed on the reflector instead of being used as a filter. For example, a parabolic concentrating collector/light distribution system comprised of splitters, optical fibers, and fluorescent reflector has been theoretically proposed for CAE, predicting a 35% improved crop yield [97].
Ultimately, the commercial exploitation of the systems discussed in this review will be a function of how feasible their implementation is, the cost, and the value of the product obtained. One of the main objectives of future research must be to validate the results observed in plant growth and microalgae production to distinguish the impact of spectral light downshifting, light attenuation, and other environmental parameters. Continued work in this field should assess these factors more rigorously and focus on developing/using the highest performing materials, particularly those that can convert non-PAR photons into PAR photons with minimal attenuation of visible light. Particular attention should be paid to the intensity of light provided to plants. In fact, yield enhancement due to light intensity depends on plant (or algae) type, temperature, and other growth factors. Under a low light regime, the leaves are not saturated with light, and in theory, yield should increase almost linearly. However, once light intensity increases over a certain level, the plant's net photosynthetic rate will eventually saturate, and the yield benefits from higher light levels will be minor and sometimes even harmful. Concerning LSC-based photoreactors for organic products, while few examples have shown to be versatile, most thin-film LSC-PM devices present an intrinsic issue: each reactor can target only a specific photocatalyst as the emission wavelength is fixed. Alternative liquid LSC devices can play a significant role in addressing this problem.
As we have seen for all these LSC-based technologies, a key aspect is controlling the amount and energy of photons delivered to reaction centers, whether photocatalysts or photoreceptors in plants or algae. Therefore, future research should focus on improving LSC systems to achieve the optimal (and not maximum) delivery of photon flux to as many reaction centers as possible, with minimal temporal and spatial fluctuations. Overall, LSCs are a fascinating class of devices, and while their primary use is mainly as solar windows, they can also be employed in other systems, especially those where wavelength selectivity and concentrated light are major requirements. Before LSC devices can be successfully exploited commercially, new fluorophores and waveguide materials should be further investigated. In addition, there is a strong need for real-world prototypes and life-cycle assessments on scaled-up LSC devices. Funding: Natural Science and Engineering Research Council of Canada (NSERC, Discovery Grants) and the Canada Foundation for Innovation (CFI) for infrastructure and its operating funds.
Data Availability Statement:
The data that support the findings of this study are available from the corresponding author upon reasonable request. | 10,525.2 | 2022-06-28T00:00:00.000 | [
"Engineering",
"Physics",
"Environmental Science"
] |
A moving switching sequence approach for nonlinear model predictive control of switched systems with state‐dependent switches and state jumps
In this paper, we propose an approach for real‐time implementation of nonlinear model predictive control (NMPC) for switched systems with state‐dependent switches called the moving switching sequence approach. In this approach, the switching sequence on the horizon moves to the present time at each time as well as the optimal state trajectory and the optimal control input on the horizon. We assume that the switching sequence is basically invariant until the first predicted switching time reaches the current time or a new switch enters the horizon. This assumption is reasonable in NMPC for systems with state‐dependent switches and reduces computational cost significantly compared with the direct optimization of the switching sequence all over the horizon. We update the switching sequence by checking whether an additional switch occurs or not at the last interval of the present switching sequence and whether the actual switch occurs or not between the current time and the next sampling time. We propose an algorithm consisting of two parts: (1) the local optimization of the control input and switching instants by solving the two‐point boundary‐value problem for the whole horizon under a given switching sequence and (2) the detection of an additional switch and the reconstruction of the solution taking into account the additional switch. We demonstrate the effectiveness of the proposed method through numerical simulations of a compass‐like biped walking robot, which contains state‐dependent switches and state jumps.
Summary
In this paper, we propose an approach for real-time implementation of nonlinear model predictive control (NMPC) for switched systems with state-dependent switches called the moving switching sequence approach. In this approach, the switching sequence on the horizon moves to the present time at each time as well as the optimal state trajectory and the optimal control input on the horizon. We assume that the switching sequence is basically invariant until the first predicted switching time reaches the current time or a new switch enters the horizon. This assumption is reasonable in NMPC for systems with state-dependent switches and reduces computational cost significantly compared with the direct optimization of the switching sequence all over the horizon. We update the switching sequence by checking whether an additional switch occurs or not at the last interval of the present switching sequence and whether the actual switch occurs or not between the current time and the next sampling time. We propose an algorithm consisting of two parts: (1) the local optimization of the control input and switching instants by solving the two-point boundary-value problem for the whole horizon under a given switching sequence and (2) the detection of an additional switch and the reconstruction of the solution taking into account the additional switch. We demonstrate the effectiveness of the proposed method through numerical simulations of a compass-like biped walking robot, which contains state-dependent switches and state jumps. nally forced switches. For example, we can see state-dependent switches in contacts of mechanical systems with objects or environments [2][3][4][5] and logical switching controllers [6][7][8] and state-independent switches in gear shifts of automobiles, 9 electrical circuit systems, 10,11 and process systems. 12,13 In this paper, we focus on control of switched systems with state-dependent switches.
It is difficult to apply conventional control methods directly to switched systems with state-dependent switches, particularly when they contain nonlinear subsystems. Nonlinear model predictive control (NMPC) [14][15][16] is the only generic method that can control such systems in practice. In NMPC, an open-loop optimal control problem over a finite future is solved at each time on the basis of the current state and the system's model, and the actual control input to the system is given by the initial value of the optimal control input. Even if the system contains discrete events such as switches or state jumps, we can realize NMPC in consideration of future discrete events as long as we can solve the finite horizon nonlinear optimal control problem (NOCP) within a given sampling period. However, it is difficult to solve the finite horizon NOCP for the switched systems with state-dependent switches in a short computational time because the control input and sequence of the active subsystems complicatedly depend on each other. 17 In previous studies, optimal control of systems with state-independent switches and that of systems with state-dependent switches are often considered in the same framework, and many different optimization approaches have been studied. Mixed-integer programming (MIP) is a classical method for optimization problem with a kind of switches and numerical methods based on MIP are successfully applied to systems with long sampling period, eg, process systems. 18,19 Two-stage methods [20][21][22] define two independent optimal control problems (one for the sequence of active subsystems over the horizon, and the other for control input and switching instants) and solve them alternately by conventional numerical methods for optimization. The algebraic geometry method 23,24 applies cylindrical algebraic decomposition to the NOCP for polynomial switched systems. The embedded method 25,26 converts the optimal control problem for switched systems into a conventional continuous optimal control problem for a larger set of systems by introducing variables corresponding to mode activation. Although these methods successfully optimize the sequence of active subsystems and control input simultaneously, it is difficult to realize NMPC by using these methods when the switched system contains nonlinear subsystems whose state equations are complex, and the sampling period is short because these methods still take long computation time. These studies require seeking the switching sequence, sequence of active subsystems, all over the horizon of NOCP, which increases the computational time significantly compared with the optimal control problem for systems without switches. On the other hand, the minimum-principle for the switched or hybrid systems has been studied for a long time, 27,28 and efficient numerical methods combining it with conventional optimization algorithms, such as dynamic programming or gradient-descent methods under a fixed sequence of the active subsystems, have been developed. 17,[29][30][31][32][33][34][35] However, these methods increase the computational cost a lot compared with the optimal control problem or NMPC for a system without any switches. These methods have to find an optimal (or at least a feasible) sequence of active subsystems on the horizon. For this purpose, they have to solve the optimal control problems for all possible sequence of active subsystems, which obviously takes much more computational time than solving the optimal control problem for only one sequence of active subsystems.
In this paper, we propose an approach for real-time implementation of NMPC for switched systems with state-dependent switches called the moving switching sequence approach, in which the switching sequence on the horizon moves to the present time at each time as well as the optimal state trajectory and the optimal control input on the horizon. We assume that the switching sequence is basically invariant until the first predicted switching time reaches the current time or a new switch enters the horizon. This assumption is reasonable in NMPC for systems with state-dependent switches and reduces computational cost significantly compared with directly optimizing the switching sequence all over the horizon. In the proposed method, we update the switching sequence by checking whether an additional switch occurs or not at the last interval of the present switching sequence and whether the actual switch will occur or not between the current sampling time and the next sampling time. We check the former by computing the state trajectory on the horizon on the basis of the current solution and the current switching sequence. If we detect an additional switch on the state trajectory, we reconstruct the solution of NMPC in consideration of the additional switch. We check the latter by evaluating the switching instants after the solution is updated. If the instant of the first switch in the switching sequence after the solution is updated becomes less than the next sampling time, we predict that the actual switch occurs between the current sampling time and the next sampling time and then remove variables related to the switch from the solution. We propose an algorithm of the moving switching sequence approach consisting of two parts: (1) the local optimization of the control input and switching instants by solving the two-point boundary-value problem (TPBVP) for the whole horizon under a given switching sequence and (2) the detection of an additional switch and the reconstruction of the solution by solving a reduced optimal control problem defined only for additional variables with respect to an additional switch.
The former takes almost the same computational time as the NOCP for systems without switches, and the latter can be achieved at sufficiently small computational burden. Therefore, the computational time of the proposed method increases only slightly from that of NMPC for a system without switches.
In solving the TPBVP, we can apply conventional numerical methods for implementing NMPC in real time. [36][37][38][39][40][41][42] We adopt the continuation/generalized minimum residual (C/GMRES) method, 37 which achieves fast computation by tracking the solution at each time without approximation for state equations and cost functions as long as the sampling period is set sufficiently short. By utilizing the C/GMRES method for the TPBVP containing switches, we update the control input and switching instants simultaneously at each sampling time in a short computational time. In reconstructing the solution for an additional switch, we obtain variables with respect to the switch by solving the reduced optimal control problem only for variables with respect to the additional active subsystem using conventional numerical methods, eg, Newton's method. Note that there are studies using the C/GMRES method that implicitly make the same assumption about the invariance of switching sequence for NMPC for a class of switched systems with state-dependent switches. [43][44][45] For example, Yamakita et al extended the C/GMRES method to mechanical systems with collisions by handling the optimality conditions approximately. 43,44 However, that approximation generates errors from the optimal control, and an undesirable oscillation of the control input is observed in their simulations. They introduce an integrator to avoid that oscillation, which does not guarantee the optimality of the solution. To treat the optimality conditions more appropriately with the C/GMRES method, we proposed a penalty function method. 45 However, this method can make numerical computation unstable due to the penalty function. These approximations are introduced to avoid directly treating changes in the number of optimization variables caused by the addition of a new switch. In contrast, the moving switching sequence approach does not introduce such approximations because this method obtains the additional variables efficiently when the number of switches on the horizon changes.
The rest of this paper is composed as follows. In Section 2, we describe a class of switched systems with state-dependent switches. In Section 3, we derive the optimality conditions and define the TPBVP for that system. In Section 4, we describe the C/GMRES method for TPBVP formulated in Section 3, detection of an additional switch and reconstruction of the solution for the additional switch, and the algorithm of the moving switching sequence approach. In Section 5, we present a numerical simulation of a compass-like robot walking for two cases, ie, a nominal case without any disturbances and a case with impulse disturbances, using the proposed method and demonstrate the method's effectiveness. In Section 6, we conclude our paper.
SWITCHED SYSTEMS WITH STATE-DEPENDENT SWITCHES
We consider a switched system consisting of subsystems .
where x(t) ∈ ℝ n denotes the state vector, u(t) ∈ ℝ m denotes the control input vector, q ⊆ ℝ n denotes the domain of f q (x, u) for x, and M denotes a positive integer. We represent that subsystem q(t) is active or active subsystem is q(t) if the state is governed by . We also consider the switching sets Ψ q 1 ,q 2 ⊆ q 1 that represent the condition of state-dependent switches This means that, if subsystem q 1 is active and x ∈ q 1 reaches Ψ q 1 ,q 2 , that is, if x ∈ q 1 satisfies q 1 ,q 2 (x) = 0, then the active subsystem switches to q 2 . We also consider the state changes discontinuously, ie, a state jump occurs at the same time as the switch, as where x − ∈ Ψ q 1 ,q 2 denotes the state just before the state jump, and x + ∈ ℝ n denotes the state just after the state jump.
Note that, if x satisfies q,q (x) = 0 for some q ∈ Q, just the state jump x + = q,q (x − ) occurs without a switch of the active subsystems. We make the following assumptions about f q (x, u), q 1 ,q 2 (x), and q 1 ,q 2 (x).
Assumption 1. f q (x, u) are continuously differentiable for all x ∈ q , for all u ∈ ℝ m , and for all q ∈ Q, and q 1 ,q 2 (x) and q 1 ,q 2 (x) are also continuously differentiable for all x ∈ ℝ n and for all q 1 , q 2 ∈ Q.
Assumption 2. q 1 ,q 2 (x) is uniquely and explicitly determined for all x ∈ q 1 and for all q 1 , q 2 ∈ Q.
Note that Assumption 4 implies that the length of a time interval in which any subsystem is active is not zero, ie, state jumps do not occur in a row. That is, situations that are too complicated such as infinite loops of state jumps are not considered.
TWO-POINT BOUNDARY-VALUE PROBLEMS
In NMPC, the optimality conditions, which are the necessary conditions of the optimal control, are formulated into a TPBVP and solved numerically to obtain the optimal control input at each time. The optimal control problem for switched systems with state-dependent switches and state jumps is defined as follows: find u(t ′ ) (t ≤ t ′ ≤ t + T) minimizing the cost function subject to (1)-(3), where q (x) ∈ ℝ denotes the terminal cost for subsystem q and L q (x) ∈ ℝ denotes the stage cost for subsystem q. To derive the optimality conditions, we introduce a sequence of subsystems for t ∈ [t, t+T], a switching sequence defined as where q k ∈ Q for all k = 1, … , m, and introduce switching instants t k ∈ ℝ for k = 1, … , m − 1 representing the instants of the switch from subsystem q k to subsystem q k+1 , where 0 ≤ m < ∞, t ≤ t 1 < · · · < t m−1 ≤ t + T, and q 1 , … , q m ∈ Q. Note that t i ≠ t i+1 because of Assumption 4. This denotes that subsystem q 1 is active at [t, t 1 ), subsystem q k is active at (t k−1 , t k ) for k = 2, … , m − 1, and subsystem q m is active at (t m−1 , t + T]. Figure 1 illustrates the switching sequence and a state trajectory on the horizon. We make the following assumption about the solution of the NOCP for systems with state-dependent switches.
Assumption 5. The optimal control input u and switching sequence of a finite length exist.
Suppose that the switching sequence on the horizon t ′ ∈ [t, t + T] is given as = (q 1 , q 2 , … , q m ), where q 1 , … , q m ∈ Q and the instants of the switch from q k to q k+1 are given as t k for all k = 1, … , m − 1. The state trajectory is then given as
FIGURE 1
The switching sequence = (q 1 , q 2 , … , q m ) and a state trajectory on the horizon The optimal control problem is then defined as finding the optimal control input u(t ′ ) (t ≤ t ′ ≤ t + T) minimizing the cost function subject to (6)- (10).
For numerical computation, we discretize the optimal control problem. We divide the horizon t ′ ∈ [t, t + T] into N steps, define the time step Δ ∶= T∕N, and introduce i k as an integer satisfying Figure 2 shows an example of the discretization over the horizon, in which i k The optimal control problem is then given as follows: find the optimal control input sequence
FIGURE 2
An example of the discretization of x and u over the horizon subject to x To derive the optimality conditions, we define the augmented cost function, 46 which is obtained by adjointing equality constraints (14)- (20) to the cost function (12). We introduce the Lagrange multipliers for the state equations in (14), (15), and (18) We also introduce the Lagrange multipliers for the switching conditions q 1 ,q 2 The augmented cost functionJ is then given as We further introduce the Hamiltonian of subsystem q ∈ Q We then obtain the optimality conditions by calculus of variations 46 * The unknown quantities of the optimal control problem are (29). Thus, we define variables to be determined as satisfying where for k = 1, … , m − 1. Note that, if = (q), ie, if only one subsystem q is active over the whole horizon, the optimal control problem is just given as the conventional one for subsystem q.
THE MOVING SWITCHING SEQUENCE APPROACH
The moving switching sequence approach considers the recession of the switching sequence on the horizon as well as the optimal state trajectory and the optimal control input on the horizon. To justify this approach, we make the following assumption.
Assumption 6. Let = (q 1 , … , q m ) be the switching sequence on the horizon at the sampling time t. The switching sequence at the next sampling time t + Δt is given as one of the following: = (q 1 , … , q m ), = (q 1 , … , q m , q m+1 ) for some q m+1 ∈ Q, = (q 2 , … , q m ), or = (q 2 , … , q m , q m+1 ). This assumption limits the change in the switching sequence to those only by the exit of the first switch of the switching sequence because of an actual switch occurrence or the addition of a new switch to the end of the switching sequence. This is a reasonable assumption for systems with state-dependent switches as long as the actual state changes continuously. If the state changes continuously, ie, the actual state changes slightly at each time in a sufficiently short sampling period, the optimal state trajectory also changes slightly from that of the previous sampling time. This assumption will then hold as long as the distance between each switching condition is sufficiently far.
Under this assumption, the switching sequence = (q 1 , … , q m ) moves to the present time at each time without changing from the previous time to the current time and an additional switch may occur at t ′ ∈ (t m , t + T], where the last subsystem of is active. We update the switching sequence by checking whether an additional switch occurs at t ′ ∈ (t m , t + T] at each time and whether the actual switch occurs or not between the current sampling time and the next sampling time. We check the former by computing the state trajectory on the horizon on the basis of the current solution and the current switching sequence. If we detect an additional switch on the state trajectory, we reconstruct the solution taking the switch into account. We check the latter after the solution is updated by evaluating the updated switching instants. If the instant of the first switch of the switching sequence after the solution is updated is less than the next sampling time, we predict the actual switch between the current sampling time and the next sampling time and then remove variables related to the switch from the solution. We propose an algorithm of the moving switching sequence approach composed of two main parts: (1) the optimization of the control input and switching instants under the switching sequence obtained at the previous sampling time by solving the TPBVP formulated in Section 3 and (2) the reconstruction of the solution by solving the reduced optimal control problem for the additional switch. For (1), we utilize the C/GMRES method, and for (2), we define and solve the optimal control problem just for variables with respect to the additional switch and additional active subsystem and reconstruct the solution.
Possible changes of the switching sequence from t to t + Δt under Assumption 6. A, The switching sequence dows not change; B, An additional switch from subsystem q m to subsystem q m+1 is detected in the last interval of the switching sequence; C, An actual switching from subsystem q 1 to subsystem q 2 occurs between t and t + t; D, An actual switch from subsystem q 1 to subsystem q 2 occurs between t and t + t, and an additional switch from subsystem q m to subsystem q m+1 is detected in the last interval of the switching sequence In Section 4.1, we briefly introduce the C/GMRES method for the TPBVP and the method to update the solution in the C/GMRES method in accordance with the switches. In Section 4.2, we describe the detection of an additional switch and optimization of variables related to an additional switch. In Section 4.3, we summarize the algorithm of the moving switching sequence algorithm.
The C/GMRES method
To realize NMPC for systems with state-dependent switches, we have to compute the unknown quantities to be determined, U(t), which is defined by (36) and satisfies (37), at each sampling time within a given sampling period. The C/GMRES method 37 is a fast algorithm of NMPC composed of the continuation method 47 and the GMRES method. 48 In the C/GMRES method, we do not compute U(t) directly but update the solution by integrating . U(t) at each time, eg, as where Δt is the sampling period, and the nonlinear
equation F(U(t), x(t), t) = 0 for U(t) is transformed into a linear equation for
.
U(t) by the continuation method as
where > 0 is a stabilization parameter. We solve this equation by using the forward-difference GMRES method (FD-GMRES method), 48 which solves linear equations fast. In the FD-GMRES method, forward-difference approximation is used to approximate the products of Jacobians and some W ∈ ℝ n ′ , w ∈ ℝ n , ∈ ℝ as where h > 0 is a time increment. By utilizing the FD-GMRES method for the TPBVP formulated in Section 3, we obtain not only the time derivatives of the optimal control input sequence but also the time derivatives of the optimal switching instants and the corresponding Lagrange multipliers with respect to the switching conditions. Integration of . (39) is based on the continuity of U(t) between t and t + Δt, and we have to modify the numerical integration method when there are switches and state jumps because that continuity does not hold when there are switches or state jumps. Let = (q 1 , … , q m ) be the switching sequence for the NOCP at time t, let t k (t) be the optimal switching instants from subsystem q k to subsystem q k+1 at time t for k = 1, … , m − 1, and let i k (t) be an integer satisfying i k (t)Δ ≤ t k (t) − t < (i k (t) + 1)Δ . Note that subsystem q k is active at t ′ ∈ (t k (t), t k+1 (t)) in the NOCP at time t, and then,
U(t) taking discontinuity of the control input into account The integration
, and u * i k+1 (t) (t) are the optimal control input on the interval where subsystem q k is active. Figure 4 illustrates the update of the control input taking the discontinuity of the control input into account. Black circles represent u * i (t) and u * i (t + Δt) for i = i k (t), … , i k+1 (t) + 1, and white circles represent u * (t + k (t)), u * (t + k+1 (t)), u * (t + k (t + Δt)), and u * (t + k+1 (t + Δt)). We then have to determine the control input for subsystem q k at time t + Δt on the basis of control inputs and their time derivatives for subsystem q k at time t, ie, we have to determine u * (t + k (t + Δt)), u * i k (t+Δt)+1 (t + Δt), … , u * i k+1 (t+Δt)−1 (t + Δt), and u * i k+1 (t+Δt) (t + Δt) on the basis of u * (t + k (t)), u * i k (t)+1 (t), … , u * i k+1 (t) (t), However, when i k (t) ≠ i k (t + Δt), it is not suitable to determine u * i (t + Δt) by (39) for i such that i k (t + Δt) < i ≤ i k (t) or i k (t) < i ≤ i k (t + Δt) because (39) does not consider the difference between the active subsystem at iΔ at the sampling time t and that of iΔ at the sampling time t + Δt. For example, in Figure 4, ie, when i k (t) = i k (t + Δt) + 1 and i k+1 (t) = i k+1 (t + Δt) − 1, it is not suitable to determine u * i k (t) (t + Δt) by (39) because control input u * i k (t) (t) and its time derivative . u * i k (t) (t) are for subsystem q k−1 whereas u * i k (t+Δt) (t + Δt) is for subsystem q k . Additionally, it is also not suitable to determine u * i k+1 (t) (t + Δt) by (39) because control input u * i k+1 (t)+1 (t) and its time derivative . u * i k+1 (t)+1 (t) are for q k+1 whereas u * i k+1 (t+Δt)+1 (t + Δt) is for q k . The update (39) when i k (t) ≠ i k (t + Δt) is modified as follows. We first compute u * (t + k (t + Δt)) for k = 1, … , m − 1 by and the switching instants t k (t + Δt) for k = 1, … , m − 1 by Next, we compute u * i (t + Δt) for i satisfying i k (t) < i ≤ i k+1 (t) and i k (t + Δt) < i ≤ i k+1 (t + Δt) by (39), ie, by Thin arrows in Figure 4 represent updates by (42) and (44). We update u * i (t + Δt) for i such that i k (t + Δt) < i ≤ i k (t) or i k (t) < i ≤ i k (t + Δt) using the control input for q k at the sampling time t + Δt and its time derivative. We regard u * i (t + Δt) for i such that i k (t + Δt) < i ≤ i k (t) as the control input obtained by integrating Additionally, we regard u * i (t + Δt) for i such that i k (t) < i ≤ i k (t + Δt) as the control input obtained by integrating Thick arrows in Figure 4 represent the integration by (45) and (46). Note that the Lagrange multipliers for the switching conditions * q 1 ,q 2 (t + Δt), … , * q m−1 ,q m (t + Δt) are just updated by (39), ie, as * q k ,q k+1 (t + Δt) = * q k ,q k+1 (t) +
U(t). Prediction of the actual switch
Suppose that the switching sequence is given as = (q 1 , q 2 , … , q m ). If the switching instant t 1 (t + Δt) in U(t + Δt) after the update of U(t) by the C/GMRES method satisfies t 1 (t + Δt) < t + Δt, we predict that the switch from subsystem q 1 to q 2 occurs on the actual system between t and t + Δt. We then remove q 1 from and U q 1 ,q 2 (t + Δt) from U(t + Δt). Note that u * 0 (t + Δt) is already updated for subsystem q 2 by (45) in this case because i 1 (t + Δt) < 0 ≤ i 1 (t) holds from the fact that t 1 (t + Δt) < t + Δt.
Detection of an additional switch and reconstruction of the solution
Suppose that the switching sequence is given as = (q 1 , q 2 , … , q m ). In solving the TPBVP, the state trajectory on the horizon is computed on the basis of x(t), , and U(t), and we then check whether an additional switch occurs or not for t m−1 < t ′ ≤ t + T. In the discretized problem, the state trajectory for t m−1 < t ′ ≤ t + T is given as , which is computed by (19) and (20), and we check whether an additional switch occurs or not between t m−1 and (i m−1 + 1)Δ + t and between iΔ + t and (i + 1)Δ + t for i = i m−1 + 1, … , N − 1 by evaluating q m ,q k (x) for all q k ∈ Q. For example, when l q m ,q k = 1 for all q k ∈ Q, we regard that a switch from subsystem q m to q k occurs when the sign of q m ,q k (x * (t + m (t))) and that of q m ,q k (x * i m (t)) are different or when that of q m ,q k (x * i (t)) and that of q m ,q k (x * i+1 (t)) are different for some When we detect an additional switch in computing the state trajectory, we reconstruct the solution U(t) taking the additional switch and the additional active subsystem into account. Suppose that we detect a switch from q m to q m+1 between i m Δ + t and (i m + 1)Δ + t where i m−1 < i m < N. Note that, if the switch is detected between t m−1 and (i m−1 + 1)Δ + t, we can formulate the NOCP for reconstructing the solution just by replacing x * i m (t), u * i m (t), and i m Δ + t in the following formulation with x * ( + m−1 (t)), u * ( + m−1 (t)), and t m−1 , respectively. When we detect a switch from q m to q m+1 between i m Δ + t and (i m + 1)Δ + t, then t m (t), * q m ,q m+1 (t), and u * (t + m (t)) are appended to U(t). In addition, we have to reoptimize u * i m +1 (t), … , u * N−1 (t) for the additional active subsystem q m+1 because u * i m +1 (t), … , u * N−1 (t) are the optimal control input sequence for subsystem q m before detecting the additional switch. To reduce the computational cost of the reconstruction of the solution, we define an optimal control problem only for the interval after the additional switch to determine t m (t), * q m ,q m+1 (t), u * (t + m (t)), and u * i m +1 (t), … , u * N−1 (t) numerically. Although this optimal control problem only considers the optimality of the interval where q m+1 is active, we resolve the entire optimal control problem in consideration of the whole horizon after solving this optimal control problem. We define the reduced optimal control problem for these variables as follows: find the optimal control input u * (t + m (t)), u * i m (t), … , u * N−1 (t) minimizing the cost function defined for t ′ ∈ (i m Δ + t, t + T], subject to the state trajectory for t ′ ∈ (t + i m Δ , t + T], The optimality conditions for this optimal control problem are derived as * The unknown quantities in this optimal control problem are given as , t m , and * q m ,q m+1 (t). Note that u * i m (t) and x * i m (t) are given as the boundary value. For a given u * (t + m (t)), u * i m (t), … , u * N−1 (t), t m (t), and * q m ,q m+1 (t), we can compute x * (t − m (t)), x * (t + m (t)), … , x * N (t) from (49) and (51)-(53) and * N (t), … , * (t + m (t)) from (54)-(57). Therefore, variables to be determined for this optimal control problem are given as and U q m+1 (t) has to satisfy We solve this problem numerically using the conventional methods, eg, Newton's method. For computation using Newton's method, we make the following assumption for F q m+1 (U q m+1 (t)) and U q m+1 (t).
Note that, if Assumption 7 does not hold, we add a minimum number of the control inputs u to U q m+1 (t) so that the Jacobian of F q m+1 (U q m+1 (t)) for U q m+1 (t) becomes nonsingular and redefine the optimal control problem.
Algorithm 1 shows the reconstruction of the solution for an additional switch. First, if we detect a switch between i m and i m + 1, where i m < N − 1, we set t m (t) between i m Δ + t and (i m + 1)Δ + t. For example, it is reasonable to approximate t m (t) by linear interpolation as Next, we substitute initial guess values for u * (t + m (t)), u * i m (t), … , u * N−1 (t), and * q m ,q m+1 (t). After that, we iterate Newton's method for (62) and obtain U q m+1 (t). We terminate the iteration when the error in the optimality condition ||F q m+1 (U q m+1 (t))|| is sufficiently small or when the number of iteration is sufficiently large. We introducer ∈ ℝ for the criterion of the former and i max ∈ ℕ for that of the latter. After the iteration, we replace u * i m +1 (t), … , to U(t), and append q m+1 to .
The algorithm of the moving switching sequence approach
The length of the horizon We set the length of the horizon as a smooth increasing function such that T(0) = 0 and T(t) → T f (t → ∞), eg, as where T ∈ ℝ and ∈ ℝ are positive constant values, to initialize U(0) in sufficiently short computational time and to modify the solution U(t) when Assumption 6 does not hold.
Initialization of U(0)
We have to compute the initial solution U(0) by iterative method such as Newton's method instead of the C/GMRES method because we do not know the optimal U(t) before t = 0. We also have to compute U(0) in a short computational time unless we know x(0) and compute U(0) in advance. If we set T(t) such that T(0) = 0, we can compute the initial solution U(0) in sufficiently short computational time even by iterative method such as Newton's method: from the initial state x(0) and active subsystem q at t = 0, we obtain u(0) ∈ ℝ m by solving where 0 ∈ ℝ n is obtained by and we set u * i (0) = u(0) for all i = 0, … , N − 1. Note that U(t) obtained by this initialization satisfies F(U(t), x(t), t) = 0. Note also that, if we set the sampling period sufficiently short, T(t) and T(t + Δt) are almost same and we can update U(t) just by integrating .
Shrinkage of the horizon when Assumption 6 does not hold
When we detect an additional switch before the end of the switching sequence in computing the state on the horizon, ie, when Assumption 6 does not hold, we can still use the C/GMRES method by shrinking the horizon and then increasing the length of the horizon smoothly. Suppose that = (q 1 , q 2 , … , q m ) is the switching sequence at time t and an additional switch is detected at the middle of the switching sequence, eg, an additional switch from q k to q j (1 ≤ j < m, q j ≠ q k+1 ) is detected between i j and i j + 1 (i k + 1 ≤ i j < i k+1 ). We then shrink the length of the horizon T(t) by setting t 0 > 0 so that T(t) satisfies We also modify u * , which correspond to u * (t), u * (t+Δ 2 ), … , u * (t+(N−1)Δ 2 ), by linear interpolation from u * 0 (t), … , u * N−1 (t) before the modification, which correspond to u * (t), u * (t+Δ 1 ), … , u * (t+i j Δ 1 ). Figure 5 shows an example of the linear interpolation of u * 0 (t), … , u * N−1 (t) when we shrink the horizon. After interpolating u * 0 (t), … , u * N−1 (t), we remove U q k ,q k+1 (t), … , U q m−1 ,q m (t) from U(t) and remove q k+1 , … , q m from . Once the solution U(t) for the shrunk horizon is obtained, we can continue to update the solution by the C/GMRES method with the length of the horizon T(t) increased smoothly toward T f .
Entire algorithm
Algorithm 2 shows the entire algorithm of the moving switching sequence approach. Note that, in Algorithm 2, we treat q 1 as the first element of and q m as the last element of . This algorithm basically updates U(t) using the C/GMRES method introduced in Section 4.1. If we predict that t 1 (t + Δt) satisfies t 1 (t + Δt) < t + Δt, where q 1 is the first element of the switching sequence , we regard that a switch occurs between the current sampling time and the next sampling time and we remove variables related to the switch from U(t) and remove q 1 from . While computing x * 0 (t), … , x * N (t) in solving the TPBVP, we monitor whether a new switch occurs or not at t ′ ∈ (t m−1 , t + T], where q m is the last element of . If we detect a switch from q m to q m+1 for some q m+1 ∈ Q at t ′ ∈ (t m−1 , t + T], we obtain the additional variables related to the switch numerically and reconstruct U(t) by Algorithm 1 and append q m+1 to . If we detect the violation of Assumption 6, we shrink the horizon and modify U(t) and .
NUMERICAL SIMULATIONS OF COMPASS-LIKE WALKING
This section describes an application of the proposed method to the simplest biped walking robot model, called a compass-like model. 2 Compass-like walking involves state-dependent switches and state jumps, and we successfully control it by using the moving switching sequence approach. Figure 6 shows the model of the compass-like robot, and Table 1 lists its physical parameters. The compass-like robot is composed of two legs and one joint, and we assume that the physical properties of both legs are the same. Let 1 be q T ] T be the state vector. The dynamics of compass-like walking contains three phases: (1) leg 1 is standing on the ground and leg 2 is swinging, (2) leg 2 is standing on the ground and leg 1 is swinging, and (3) legs 1 and 2 are standing on the ground. To simplify the problem, we make the following assumption about the impact between the swinging leg and the ground. 3 Assumption 8. The impact between the swinging leg and the ground results in no rebound and no slipping, and the supporting leg lifts from the ground at the time of the impact.
Compass-like walking model
Under this assumption, the model of the compass-like walking robot is composed of the dynamics when one leg is standing on the ground and the other is swinging and state jumps by the collisions between the swinging leg and the ground. The state equation when leg 1 is standing and leg 2 is swinging is derived by Lagrange's method as where u ∈ ℝ is a control input torque, M 1 ∈ ℝ 2×2 is the inertia matrix, H 1 ∈ ℝ 2 consists of terms related to Coriolis force and terms about gravity force, and B 1 = [1 − 1] T . Additionally, the state equation when leg 2 is standing and leg 1 is swinging is derived as where M 2 ∈ ℝ 2×2 is the inertia matrix, H 2 ∈ ℝ 2 consists of terms related to Coriolis force and terms about gravity force, and B 2 = [−1 1] T . The equations of state jump are derived from the conservation law of angular momentum. 2 When leg 2 collides with the ground, the conservation law of angular momentum is written as where Q − 1,2 , Q + 1,2 ∈ ℝ 2×2 , and when leg 1 collides to the ground, the conservation law of angular momentum is written as where Q − 2,1 , Q + 2,1 ∈ ℝ 2×2 . Note that the posture of the robot is not changed by the collision because the collision is instantaneous from Assumption 8. The equations of the state jump are then given as and ] . (74) The collision occurs when the height of the tip of the swinging leg, h in Figure 6, becomes zero. That is, the switching conditions are given as 1,2 (x) ∶= l(cos 1 − cos 2 ) = 0, (75) and 2,1 (x) ∶= l(cos 2 − cos 1 ) = 0.
We introduce an assumption to abstract compass-like walking motion easily as a switched system. 3 Assumption 9. The swinging leg does not contact with the ground when the swinging leg is put forward from behind the supporting leg.
Under Assumption 9, we ignore the collision between leg 2 and the ground when 1 − 2 ≤ 0 and the collision between leg 1 and the ground when 2 − 1 ≤ 0. Figure 7 summarizes switches and state jumps in compass-like walking.
Controller design
To achieve a steady gait, we design the cost function with two aims: moving forward steadily and achieving a steady gait. To achieve the former aim, we make the angular velocity of the supporting leg close to the reference constant v ref . To achieve the latter aim, we add a 2 ( 1 + 2 ) 2 to the cost function, thereby preventing the tip of the swinging leg from being too high. In addition to these two aims, we add a penalty to the control input u. Consequently, we design the cost functions as and 1 (x) = 2 (x) = 0. The parameters of NMPC are listed in Table 2.
Simulation conditions
To evaluate the robustness of the proposed method, the numerical simulations are performed in two cases: the nominal case without disturbances and the case with impulsive disturbances. In the simulation with disturbances, we input instantaneous changes on the angular velocity as Figures 8,9, and 10 show simulation results for the nominal case, those for the case with disturbances, and the enlarged time history of the predicted switching instant around the disturbances, respectively. Note that dashed lines in these Figures represent discontinuous change of variables. In Figures 8 and 9, switches and state jumps occur at points where . 1 and . 2 are discontinuous except for discontinuous points due to the disturbances, and the compass-like robot walked successfully in both cases. ||F|| in Figures 8 and 9 denotes the norm of the residual in (37), and t k in Figures 8, 9, and 10 are switching instants on the predicted horizon. Note that, when there are no switches on the predictive horizon, t k is plotted as t k = 0.
Simulation results
We can see that the predicted switching instants are mostly constant in the nominal case from Figure 8 and fluctuate in the case with disturbances from Figures 9 and 10. From Figure 9, instantaneous increases of ||F|| at the moments of the impulsive disturbances attenuate immediately, which suggests that the proposed method optimizes the switching instants and control input simultaneously. Note that, even if in the nominal case, the error increases at the sampling time when an additional switch is detected and when a switch is removed from , ie, when an actual switch occurs. This suggests that errors increase when the switching sequence changes from the previous sampling time. In addition to the points where the error peaks due to such reasons, we can see errors increase when there is a switch on the horizon. This occurs when i k (t) differs from i k (t − Δt). However, ||F|| is sufficiently small on the whole, and we can see that the proposed method can compute the optimal solution. We also found that each update takes around 0.14 [ms] and that we can implement control update within the sampling period of 1 [ms].
CONCLUSIONS
In this paper, we proposed a moving switching sequence approach for real-time computation of NMPC for switched systems with state-dependent switches. In the approach, the switching sequence on the horizon moves to the present time at each time as well as the optimal state trajectory and optimal control input on the horizon. We assume that the switching sequence is basically invariant on the horizon and update the switching sequence at each time by checking whether an additional switch occurs or not and whether the actual switch will occur or not. We propose an algorithm composed of the update of the solution by solving the TPBVP and reconstruction of the solution taking into account an additional switch. We utilize the C/GMRES method for the former problem and solve the reduced optimal control problem for the latter. We performed numerical simulations of compass-like walking for a nominal case without disturbances and a case with impulsive disturbances and found that the proposed method successfully computes the optimal solution in a short time even if there are disturbances.
For future work, we will analyze the necessary or sufficient conditions of the assumption about the invariance of the switching sequence. We will also seek methods to design the cost functions and constraints so that the assumption holds.
ACKNOWLEDGEMENT
This work was partly supported through the JSPS Kakenhi grant JP15H02257. | 11,141.2 | 2019-11-05T00:00:00.000 | [
"Mathematics"
] |
Antiviral Activities of Eucalyptus Essential Oils: Their Effectiveness as Therapeutic Targets against Human Viruses
Given the limited therapeutic management of infectious diseases caused by viruses, such as influenza and SARS-CoV-2, the medicinal use of essential oils obtained from Eucalyptus trees has emerged as an antiviral alternative, either as a complement to the treatment of symptoms caused by infection or to exert effects on possible pharmacological targets of viruses. This review gathers and discusses the main findings on the emerging role and effectiveness of Eucalyptus essential oil as an antiviral agent. Studies have shown that Eucalyptus essential oil and its major monoterpenes have enormous potential for preventing and treating infectious diseases caused by viruses. The main molecular mechanisms involved in the antiviral activity are direct inactivation, that is, by the direct binding of monoterpenes with free viruses, particularly with viral proteins involved in the entry and penetration of the host cell, thus avoiding viral infection. Furthermore, this review addresses the coadministration of essential oil and available vaccines to increase protection against different viruses, in addition to the use of essential oil as a complementary treatment of symptoms caused by viruses, where Eucalyptus essential oil exerts anti-inflammatory, mucolytic, and spasmolytic effects in the attenuation of inflammatory responses caused by viruses, in particular respiratory diseases.
Introduction
Eucalyptus is a genus of trees belonging to the Myrtaceae family native to Australia and Tasmania that includes 900 species and subspecies cultivated in different areas of the world with subtropical and Mediterranean climates [1][2][3][4]. Various species of Eucalyptus are recognized for their high biomass production, rapid growth rate, good adaptation to various environmental conditions, and excellent wood quality to produce paper and derived products [5][6][7][8]. In turn, some species of the genus (e.g., E. polybractea, E. smithii, and E. globulus) have received particular attention as sources of essential oils for use in pharmaceutical and cosmetic products [1,9,10].
Numerous examples illustrate the phytopharmacological potential of essential oils obtained from Eucalyptus. These compounds are recognized for their broad spectrum of action, such as antibacterial, antifungal, antiviral, anti-inflammatory, anti-immunomodulatory, antioxidant, and wound healing properties. They are commonly used for the treatment of respiratory tract diseases such as the common cold, nasal congestion, sinusitis, pulmonary tuberculosis, bronchitis, asthma, influenza, acute respiratory distress syndrome (ARDS), and chronic obstructive pulmonary disease (COPD) [1,9]. Regardless of the route of administration of preparations of Eucalyptus essential oil, after being absorbed, the components exert their antiseptic, anti-inflammatory, and expectorant activities, which justifies the interest by researchers in the use of Eucalyptus essential oil to treat respiratory
Chemical Composition of Eucalyptus Essential Oil for Medicinal Use
Eucalyptus essential oil is generally obtained from steam distillation or hydrodistillation of leaves and less frequently from fruits, flowers, and stems [18,19]. At least 300 species of Eucalyptus contain volatile oils in their leaves, with a chemical composition comprising Pharmaceuticals 2021, 14, 1210 3 of 18 a mixture of volatile bioactive compounds, mainly monoterpenoids, such as 1,8-cineole, α-pinene, β-pinene, γ-terpinene, limonene, and p-cymene, and, in a smaller quantity, sesquiterpenes, such as globulol, α-humulene and β-eudesmol [9,20]. Eucalyptus oil for medicinal purposes is commonly extracted from the leaves of E. polybractea, E. smithii, or E. globulus because the content of the main bioactive monoterpene, 1,8-cineole (eucalyptol), in these species is greater than 70% (v/v) of the total oil. The pharmacopoeias of many countries, including the United States, Spain, the United Kingdom, Germany, France, Belgium, the Netherlands, Australia, Japan, and China, have ruled on the benefits and applications, i.e., infusion, inhalation (steam), and topical application, of these oils [1]. E. globulus is the main species used in the phytopharmacological industry to obtain essential oil of high medicinal value [12,21], primarily because the species is widely cultivated worldwide and has been subjected to genetic selection processes in breeding programs to optimize different wood productivity characteristics [1,9].
There are numerous products prepared with the essential oil of E. globulus or with its main component (1,8-cineole), both for internal use (tablets, capsules, or syrups) and external use (nasal drops and ointments) [1,9]. In recent years, medications such as 1,8-cineole and Myrtol ® standardized capsules (300 mg capsule that has at least 75 mg of 1,8-cineole, 75 mg of limonene, and 20 mg of α-pinene), sold commercially as GeloMyrtol ® and GeloMyrtol forte ® , have received substantial attention due to their therapeutic benefits in various respiratory conditions [1]. Due to the multiple clinical studies that support their phytomedicinal use, Eucalyptus essential oil and products containing its derivatives have been classified as highly safe [1,16,[21][22][23][24][25]. Other species of Eucalyptus whose oils are of medicinal use, either because of their high 1,8-cineole content or because of their beneficial properties, include E. maideni, E. bicostata, E. sideroxylon, E. cinerea, E. leucoxylon, E. camaldulensis, E. tereticornis, and E. grandis [19][20][21][26][27][28][29][30]. The profile of bioactive compounds differs between different Eucalyptus species, resulting in differences in medicinal properties [9]. Table 1 provides the main Eucalyptus species for medicinal use, the chemical composition of their essential oils, and the total percentage of their major compounds. The chemical structures of the main monoterpenes and sesquiterpenes are provided in Figure 1. Table 1. Eucalyptus species for medicinal use, chemical composition, and the total percentage of compounds of their essential oils.
Chemical Composition of Eucalyptus Essential Oil for Medicinal Use
Eucalyptus essential oil is generally obtained from steam distillation or hydrodistillation of leaves and less frequently from fruits, flowers, and stems [18,19]. At least 300 species of Eucalyptus contain volatile oils in their leaves, with a chemical composition comprising a mixture of volatile bioactive compounds, mainly monoterpenoids, such as 1,8cineole, α-pinene, β-pinene, -terpinene, limonene, and p-cymene, and, in a smaller quantity, sesquiterpenes, such as globulol, α-humulene and β-eudesmol [9,20]. Eucalyptus oil for medicinal purposes is commonly extracted from the leaves of E. polybractea, E. smithii, or E. globulus because the content of the main bioactive monoterpene, 1,8-cineole (eucalyptol), in these species is greater than 70% (v/v) of the total oil. The pharmacopoeias of many countries, including the United States, Spain, the United Kingdom, Germany, France, Belgium, the Netherlands, Australia, Japan, and China, have ruled on the benefits and applications, i.e., infusion, inhalation (steam), and topical application, of these oils [1]. E. globulus is the main species used in the phytopharmacological industry to obtain essential oil of high medicinal value [12,21], primarily because the species is widely cultivated worldwide and has been subjected to genetic selection processes in breeding programs to optimize different wood productivity characteristics [1,9].
There are numerous products prepared with the essential oil of E. globulus or with its main component (1,8-cineole), both for internal use (tablets, capsules, or syrups) and external use (nasal drops and ointments) [1,9]. In recent years, medications such as 1,8-cineole and Myrtol ® standardized capsules (300 mg capsule that has at least 75 mg of 1,8cineole, 75 mg of limonene, and 20 mg of α-pinene), sold commercially as GeloMyrtol ® and GeloMyrtol forte ® , have received substantial attention due to their therapeutic benefits in various respiratory conditions [1]. Due to the multiple clinical studies that support their phytomedicinal use, Eucalyptus essential oil and products containing its derivatives have been classified as highly safe [1,16,[21][22][23][24][25]. Other species of Eucalyptus whose oils are of medicinal use, either because of their high 1,8-cineole content or because of their beneficial properties, include E. maideni, E. bicostata, E. sideroxylon, E. cinerea, E. leucoxylon, E. camaldulensis, E. tereticornis, and E. grandis [19][20][21][26][27][28][29][30]. The profile of bioactive compounds differs between different Eucalyptus species, resulting in differences in medicinal properties [9]. Table 1 provides the main Eucalyptus species for medicinal use, the chemical composition of their essential oils, and the total percentage of their major compounds. The chemical structures of the main monoterpenes and sesquiterpenes are provided in Figure 1.
Antiviral Activity of Eucalyptus Essential Oil
Natural products derived from essential oils and extracts of medicinal plants are natural sources well tolerated by humans [14,32]. In this sense, plant essential oils have been extensively studied and are reported to have antiviral activities [16]. Among them, the essential oil and bioactive terpenes present in Eucalyptus leaves have shown great potential as antiviral therapies [33,34]. Inhalation of steam from Eucalyptus essential oil has previously shown a positive impact on treating difficulties derived from viral infections, such as cold, bronchiolitis, rhinosinusitis, and asthma [13,15]. Therefore, they represent a good alternative to treat infections caused by viruses, either to alleviate symptoms or to affect different pharmacological targets of these pathogens [14,15,35]. Table 2 summarizes the major studies related to the antiviral activity of Eucalyptus essential oil or its monoterpenes.
Antiviral Activity of Eucalyptus Essential Oil
Natural products derived from essential oils and extracts of medicinal plants are natural sources well tolerated by humans [14,32]. In this sense, plant essential oils have been extensively studied and are reported to have antiviral activities [16]. Among them, the essential oil and bioactive terpenes present in Eucalyptus leaves have shown great potential as antiviral therapies [33,34]. Inhalation of steam from Eucalyptus essential oil has previously shown a positive impact on treating difficulties derived from viral infections, such as cold, bronchiolitis, rhinosinusitis, and asthma [13,15]. Therefore, they represent a good alternative to treat infections caused by viruses, either to alleviate symptoms or to affect different pharmacological targets of these pathogens [14,15,35]. Table 2 summarizes the major studies related to the antiviral activity of Eucalyptus essential oil or its monoterpenes. Herpes simplex viruses (HSVs) are DNA viruses belonging to the Herpesviridae family. Among these, HSV type 1 (HSV-1) and type 2 (HSV-2) stand out as common and contagious pathogens in humans. HSV-1 produces gingivostomatitis, cold sores, and herpetic keratitis, and HSV-2 usually produces genital lesions [33]. These pathogens can be transmitted when an infected person spreads the virus via active lesions. Treatment is symptomatic, and antiviral therapy is performed using medications such as acyclovir (ACV), valaciclovir, famciclovir, cidofovir, and foscarnet, which target viral DNA polymerase, for the treatment of acute, severe, or recurrent infections [33,35]. Different studies have addressed the potential of Eucalyptus essential oil for the treatment of HSV-1 and HSV-2. Bourne et al. [36], for example, evaluated the in vivo efficacy of 1,8-cineole in a mouse model of genital HSV-2 infection. In healthy females, 15 µL of 1,8-cineole was administered vaginally at a concentration of 100%, followed by an intravaginal challenge with the pathogen (104 pfu of HSV-2). The findings showed that 1,8-cineole provided significant protection (44%) against this pathogen. Schnitzler et al. [37] evaluated the effect of E. caesia essential oil against HSV-1 and HSV-2. Antiviral activity was tested in vitro in RC-37 cells using a plaque reduction assay. Eucalyptus oil was active against both pathogens, with 50% inhibitory concentrations (IC 50 s) of 0.009% and 0.008% to prevent HSV-1 and HSV-2 plaques, respectively. The antiviral activity was confirmed in viral suspension tests, where at nontoxic Eucalyptus oil concentrations (0.03%), HSV-1 and HSV-2 viral titers were reduced by 57.9% and 75.4%, respectively. In that study, the essential oil exerted a direct antiviral effect on HSV because it reduced the infection before or during virus adsorption but not after the penetration of the virus into the host cell. Similar results were obtained by Gavanji et al. [31], who evaluated the in vitro effect of E. caesia essential oil against HSV-1 through a plaque reduction assay in Vero cells. Substantial anti-HSV-1 capacity was reported, with an IC50 of 0.004%, obtaining better results than those for acyclovir, which did not generate an inhibitory effect on HSV-1 at the concentrations tested (0.001-0.01%). In that same study, it was speculated that the molecules present in the oil interacted with the HSV-1 envelope, consequently inhibiting binding to the host cell. Minami et al. [38] evaluated the in vitro anti-HSV-1 effect of E. globulus essential oil using a plaque reduction assay in Vero cells. An IC100 of 1% was reported when HSV-1 was incubated for 24 h with oil before infecting Vero cells. However, when Vero cells were treated with the essential oil before or after viral adsorption, no anti-HSV-1 activity was observed, suggesting that the antiviral activity of the essential oil may be due to direct interaction with virions and binding to viral envelopes and glycoproteins.
Similar results were reported in a study by Astani et al. [39], who verified the effects of Eucalyptus essential oil and its purified major monoterpenes (1,8-cineole, α-pinene, p-cymene, γ-terpinene, α-terpineol, and terpinen-4-ol) against the KOS strain of HSV-1. Using an in vitro plaque reduction assay in RC-37 cells, the authors evaluated the inhibition of viral replication and the selectivity index of the tested compounds. Moderate antiviral effects were observed when oil or monoterpenes were added before infection or after HSV-1 penetration into host cells. Among the compounds, the monoterpenes 1,8-cineole and α-pinene were more active, moderately inhibiting viral replication (close to 40%). However, when HSV-1 was pretreated with essential oil or individual monoterpenes, viral infectivity was considerably reduced, with an IC 50 value of 55 µg/mL for E. globulus essential oil; the exception was 1,8-cineole (IC 50 : 1.20 mg/mL). Better IC 50 values were observed for the individual monoterpenes: α-pinene (4.5 µg/mL), γ-terpinene (7.0 µg/mL), p-cymene (16.0 µg/mL), α-terpineol (22.0 µg/mL), and terpinen-4-ol (60.0 µg/mL). These findings allow us to conclude that the mechanism of anti-HSV-1 action is exerted by direct inactivation, that is, by the binding of the components to the viral proteins involved in the adsorption to and penetration of the host cell. The structure of the Herpes virus, replication cycle, and the potential antiviral mechanism of 1,8-cineole and other monoterpenes present in Eucalyptus essential oils are provided in Figure 2. essential oil; the exception was 1,8-cineole (IC50: 1.20 mg/mL). Better IC50 values were observed for the individual monoterpenes: α-pinene (4.5 μg/mL), γ-terpinene (7.0 μg/mL), p-cymene (16.0 μg/mL), α-terpineol (22.0 μg/mL), and terpinen-4-ol (60.0 μg/mL). These findings allow us to conclude that the mechanism of anti-HSV-1 action is exerted by direct inactivation, that is, by the binding of the components to the viral proteins involved in the adsorption to and penetration of the host cell. The structure of the Herpes virus, replication cycle, and the potential antiviral mechanism of 1,8-cineole and other monoterpenes present in Eucalyptus essential oils are provided in Figure 2. [46] and Lussignol et al. [47], the Herpes virus consists of 7 structural glycoproteins (gB, gC, gD, gH, gK, gL and gM) present in the lipid bilayer envelope (LBE). However, only four of these glycoproteins (gB, gD, gH, and gL) are necessary and sufficient to allow the fusion of the virus with the plasma membrane of the host cell (shown in the illustration). It has a relatively large, double-stranded, linear DNA genome surrounded by an icosahedral capsid. This, in turn, is surrounded by an integument that contains between 15 and 20 proteins and is in direct contact with the LBE. The herpes virus replication cycle begins when the gB, gD, and gH-gL glycoproteins bind to their receptors in the host cell (gB receptors: PILRα (HSV-1), MAG, NMHC-IIA; gD receptors: HVEM, Nectin-1/Nectin-2, 3-OS HS (HSV-1); gH-gL receptors: αvβ3 integrin). This allows the LBE of the virus to fuse with the plasma membrane or endocytosis, releasing the capsid and integument into the cytoplasm. Using the microtubule network, the nucleocapsid is transported to the nuclear pore, where the viral genome is released into the nucleus and circularized. Viral DNA serves as a template for RNA polymerase II, which leads to the production of mRNA, expressed in three successive and coordinated phases. The mRNAs are translated in the cytoplasm into different viral proteins, including immediate-early (α-proteins), early (β-proteins), and late (γ-proteins) proteins. Most of the late gene products contribute to the formation of the viral particle. Packaging of DNA into preassembled capsids takes place in the nucleus. This is followed by a primary envelope of the capsid by budding through the inner nuclear membrane. The envelope of the perinuclear virions then fuses with the outer nuclear membrane to release naked capsids into the cytoplasm (de-envelopment). The envelope proteins are glycosylated in the endoplasmic reticulum (ER) and then move by transport vesicle from the ER to the Golgi apparatus and finally to the cell plasma membrane. Tegumented capsids acquire a "second" final envelope to become virions from post-Golgi membrane compartments. A role for autophagic membranes in virion envelope and release has been proposed for some herpes viruses. Once formed, virions are transported to the cell surface within small vesicles using exocytosis machinery and released from cells. The red box indicates the potential mechanism of action against the Herpes virus by 1,8-cineole and other monoterpenes present in Eucalyptus essential oils by binding and inhibiting the glycoproteins gB, gD, and gH-gL and thus inhibiting the binding of the virus with its receptors and subsequent fusion of LBE with the host cell.
Influenza Virus
Influenza viruses (IFVs) are enveloped RNA viruses that belong to the Orthomyxoviridae family and are classified as type A, B, or C based on hemagglutinin and neuraminidase proteins [34]. IFV-A is the most notable in terms of human morbidity and mortality because it has several different serotypes that have caused pandemics worldwide, for example, H1N1, which caused the "Spanish flu" in 1918 (with 40-50 million reported deaths worldwide) and swine flu in 2009; H2N2, which caused the Asian flu in 1957 (>1 million deaths worldwide); H3N2, which caused the Hong Kong flu in 1968; and H5N1, which caused avian influenza in 2004 [22]. Viral influenza is an infectious respiratory disease with symptoms such as fever, runny nose, sore throat, muscle pain, headache, cough, and fatigue but can progress to pneumonia and other complications such as ARDS, COPD, rhinosinusitis, meningitis, encephalitis, and worsening of preexisting health problems such as asthma and cardiovascular diseases [34]. IFV infections are primarily treated with mantadine, rimantadine, oseltamivir (Tamiflu ® ), and zanamivir (Relenza ® ), all neuraminidase inhibitors. However, their use has been limited by side effects and the emergence of resistant viral strains [22,48]. Although vaccination is the most effective means of protection against influenza, the vaccines must be administered annually; therefore, the use of natural products for the treatment of IFVs could provide supplemental protection [49].
Several studies have reported the anti-IFV capacity of Eucalyptus essential oil [22,34,50,51]. For example, Usachev et al. [40] investigated the antiviral activity of E. polybractea against influenza A virus (NWS/G70C/H11N9) in air. Their results showed that when the pure essential oil was actively diffused with a nebulizer for 15 s (oil concentration: 125 µg/L of air in the chamber), IFV-A was completely inactivated in the air. Saturated oil vapor was slightly less effective, achieving a viral inactivation of 86% after one day of exposure. However, it was concluded that both aerosol and E. polybractea oil vapor could be used as effective natural antiviral agents for disinfection applications. Vimalanathan & Hudson [41] demonstrated the anti-influenza A activity (Denver/1/57/H1N1) of E. globulus essential oil, both in the liquid phase and in the vapor phase, using a plaque reduction assay in MDCK cells. In that study, an MIC100 of 50 µL/mL was reported in liquid phase assays, and a 94% reduction in viral infection was observed after 10 min of exposure of the virus to 250 µL of oil vapor. Furthermore, the possible direct effects of E. globulus essential oil on the main external proteins of influenza virus (hemagglutinin and neuraminidase) were also evaluated; the authors reported that 10 min of exposure to steam (dilution 1/160) was able to inhibit hemagglutinin activity but not neuraminidase activity, suggesting an interaction with hemagglutinin as a possible mechanism for antiviral activity.
Li et al. [42] These findings allow us to conclude that 1,8-cineole protects mice from IFV-A by attenuating pulmonary inflammatory responses. In another study, Li et al. [23] evaluated the coadministration of 1,8-cineole (6.25 and 12.5 mg/kg) with the influenza vaccine (0.2 µg of hemagglutinin) and its capacity to provide cross-protection against infection by influenza virus A (FM/1/47/H1N1) in a mouse model that was immunized intranasally three times (days 0, 7, and 14) and challenged with the virus seven days after the last immunization. The results indicated that mice that had received the influenza vaccine in conjunction with 1,8-cineole (12.5 mg/kg) exhibited a longer survival time, less inflammation, less weight loss, a lower mortality rate, less pulmonary edema (pulmonary index), and lower viral titers than those for mice immunized with the vaccine without 1,8-cineole. The coadministration of the vaccine with 1,8-cineole increased the serum production of specific antibodies against influenza (IgG2a), the secretory response of IgA in the nasal cavity mucosa, the expression of intraepithelial lymphocytes in the upper respiratory tract, the maturation of dendritic cells, and the expression of costimulatory proteins, i.e., cluster of differentiation (CD)40, CD80, and CD86, in peripheral blood. These results suggested that the coadministration of 1,8-cineole (12.5 mg/kg) with the influenza viral antigen generated cross-protection against the influenza virus in a mouse model.
Furthermore, some studies have investigated the effect of 1,8-cineole in the treatment of characteristic symptoms of influenza and diseases associated with influenza complications, such as asthma, COPD, and rhinosinusitis [1,25]. The nasal application of 1,8-cineole did not demonstrate significant effects on cough induced by citric acid (antitussive activity) in a guinea pig model validated for the cough reflex [52]. However, it is noteworthy that the compound demonstrated clinical efficacy in patients diagnosed with severe bronchial asthma generating anti-inflammatory, mucolytic, and spasmolytic effects [25,53] as well as reducing exacerbations and dyspnea in patients with COPD [54]. In this context, 1,8-cineole was also shown to relieve headache, trigeminal nerve pressure point sensitivity, nasal obstruction, and rhinological secretion in patients diagnosed with acute rhinosinusitis [55]. It is suggested that the mechanism of action involves an increase in the antiviral activity of IRF3 as well as the IκBα-and JNK-dependent inhibitory effect of IRF3 on the NF-κB p65 and NF-κB proinflammatory signaling pathways [56]. The structure of the Influenza A virus, replication cycle, and the potential antiviral mechanism of 1,8-cineole and other monoterpenes present in Eucalyptus essential oils are provided in Figure 3.
SARS-CoV-2 (COVID-19)
The type 2 coronavirus that causes severe acute respiratory syndrome (SARS-CoV-2) is a positive-strand RNA virus belonging to the genus Betacoronavirus of the family Coronaviridae and is responsible for coronavirus disease 2019 . This disease was declared a global pandemic in March 2020 because it can be transmitted effectively between humans and has shown a high degree of morbidity and mortality [15,59]. The majority of people infected with SARS-CoV-2 will experience a mild or moderate respiratory illness and will recover without the need for special treatment. Older people and those with underlying medical problems such as cardiovascular diseases, diabetes, chronic respiratory diseases, and cancer are more likely to develop serious diseases [16]. Among the most common symptoms are fever, chills, dry cough, sputum production, fatigue, lethargy, arthralgia, myalgia, headache, dyspnea, nausea, vomiting, anorexia, and diarrhea [15]. In extreme cases, patients experience a condition known as a "cytokine storm", which is characterized by a dramatic increase in the levels of chemokines and proinflammatory cytokines (such as IL-6 and TNF-α), leading to the development of SARS, pneumonia, septic shock, metabolic acidosis, coagulation dysfunction and even death [16,60]. Currently, there are licensed vaccines that can be used to generate immunity against SARS-CoV-2 and that have demonstrated high efficacy in the prevention of COVID-19 [61]. In addition, the FDA has authorized a variety of therapeutic options for emergency use that are available or are being evaluated for the treatment of COVID-19, including antiviral drugs (e.g., remdesivir), anti-SARS-CoV-2 monoclonal antibodies (e.g., bamlanivimab/etesevimab and casirivimab/imdevimab), antiinflammatory drugs (e.g., dexamethasone), and immunomodulatory agents (e.g., baricitinib and tocilizumab) [62,63]. The most promising antiviral strategy is the development or reuse of drugs that inhibit proteins that play a central role in the viral replication cycle both in SARS-CoV-2 and in the host, for example, the "spike" (S) glycoprotein, the type 3C protease, also called the main coronavirus protease (M pro , 3CL pro or PL pro ), RNA-dependent RNA polymerase (RdRp) and human angiotensin-converting enzyme 2 (hACE2) [11,12]. Genomic RNA with RNA polymerase, NP, matrix proteins, and packaging proteins are exported from the nucleus to the cytoplasm with the help of M1 and NS2 proteins (late viral proteins). The envelope proteins produced in the endoplasmic reticulum (ER) move through the transport vesicle from the ER to the Golgi apparatus and then to the plasma membrane. Finally, the genomic RNA and the viral protein complex are packaged into progeny viruses as they emerge from the cell membrane by exocytosis. The red box indicates the potential mechanism of action against influenza A virus by 1,8-cineole and other monoterpenes present in Eucalyptus essential oil by binding and inhibiting the hemagglutinin protein and thus inhibiting the binding of the virus with its receptor and subsequent entry into the host cell. In this context, the use of natural products has been evaluated as a complement to conventional treatments for this disease [15,32,59,62,63]. Several studies have evaluated the monoterpenes present in Eucalyptus essential oil, either for the treatment of symptoms caused by COVID-19 or to evaluate their ability to inhibit the M pro protein, a key homodimeric cysteine protease enzyme that cleaves polyproteins into individual proteins necessary for the replication and transcription of SARS-CoV-2 [1,11,12,60]. They also evaluated the action of 1,8-cineole against SARS-CoV-2 M pro through an in silico molecular docking assay. The compound showed efficient binding, with an estimated free binding energy of −6.04 kcal/mol for amino acids in the active site of M pro . The interaction results indicated that the M pro -1,8-cineole complexes form hydrophobic interactions through MET6, PHE8, ASP295, and ARG298 in the active site. Based on these results, the authors propose that using 1,8-cineole may represent a possible treatment for COVID-19, acting as an inhibitor of M pro . Similar results were reported by Panikar et al. [13] in an in silico molecular docking study that evaluated the effect of the major monoterpenes present in Eucalyptus essential oil, namely, 1,8-cineole and α-pinene α-terpineol, limonene, and o-cymene, on SARS-CoV-2 M pro . The study showed a great capacity of monoterpenes to bind to the active site of M pro and classified them based on free binding energy: 1,8-cineole (−5.86 kcal/mol) > α-pinene (−5.6 kcal/mol) > α-terpineol (−5.43 kcal/mol) > limonene (−5.18 kcal/mol) > or o-cymene (−4.99 kcal/mol). These results allowed us to postulate that Eucalyptus essential oil or its individual major monoterpenes, mainly 1,8-cineole, can be used as a potential SARS-CoV-2 inhibitor. The structure of the SARS-CoV-2, replication cycle, and the potential antiviral mechanism of 1,8-cineole and other monoterpenes present in Eucalyptus essential oils are provided in Figure 4.
Other Viruses
Cermelli et al. [44] evaluated the effect of E. globulus essential oil against the mumps virus, an RNA virus coated by a capsid that in turn is surrounded by a viral envelope; this virus causes painful inflammation of the parotid glands. Antiviral activity was verified by an in vitro plaque reduction assay in Vero cells. Treatment with oil (0.25 µL/mL) was performed after viral infection for a period of 72 h. Mild antiviral activity was reported, with a 33% reduction in the formation of mumps virus plaques. Based on the results, the authors speculated that because the mumps virus has an envelope, the possible mechanism of action of the oil is through the direct binding on virus particles during the extracellular phase of the virus cycle. In the same study, adenovirus, a nonenveloped virus, was not affected by Eucalyptus essential oil, suggesting that this is due to the lack of a viral envelope. Elaissi et al. [18] explored the in vitro antiviral activity of E. bicostata, E. cinerea, and E. maidenii essential oils against coxsackievirus B3, an RNA virus that can trigger different clinical conditions, such as colds, viral meningitis, and myocarditis. The antiviral activity was verified by evaluating the percentage of protection against the virus in a Vero cell model. Better antiviral activity results were observed with pretreatment with oil prior to cellular infection with coxsackievirus B3. A significant reduction in viral infectivity was observed, with IC50 values of 0.7 µg/µL, 102.0 µg/µL, and 136.5 µg/µL for E. bicostata, E. cinerea, and E. maidenii, respectively. This observation allowed the researchers to conclude that the possible mechanism of action involves the direct binding of the molecules present in the oil after viral infection. El-Baz et al. [45] evaluated the antiviral effect of E. camaldulensis essential oil against rotavirus strain Wa, coxsackievirus B4, and HSV-1 by plaque reduction assays in MA104, BGM, and Vero cells, respectively. A 1/10 dilution of 100 µL of oil was used as treatment. All viruses were affected by essential oils, with plaque reduction percentages of 50%, 53.3%, and 90% for rotavirus strain Wa, coxsackievirus B4, and HSV-1, respectively. Based on these results, the authors speculated that E. camaldulensis essential oil could be a candidate for use in preparations or drugs against RNA viruses that cause infectious diseases. , and, for some beta coronaviruses, hemagglutinin esterase (not shown). The positive-sense single-stranded RNA (+ssRNA) genome is encapsulated by N, while M and E are incorporated into the viral particle during the assembly process. The replication cycle begins with the arrival of the SARS-CoV-2 virus to the target cell. The S viral protein binds to its receptor in the cell, the angiotensin-converting enzyme 2 (ACE2). After receptor binding, the S protein is cleaved by the cell surface serine protease TMPRSS2, forming two subunits, the S1 subunit containing the receptor-binding domain (RBD) and the S2 subunit containing the binding peptide to the fusion protein present in the cell membrane, allowing the entry of the virus into the host cell, either through the formation of an endosome or by the fusion of the viral envelope. Following the fusion of the virus and host cell membranes, the uncoating occurs and the release of viral RNA into the cytoplasm to initiate the translation of coterminal polyproteins (pp1a/ab), which carry out the replication of the viral genome. After translation of viral RNA into polyproteins, the major protease (M pro ), a homodimeric cysteine protease, self-cleaves in order to cleave polyproteins into nonstructural proteins (nsps). Several nsp proteins interact with nsp12 (also called RNA-dependent RNA polymerase (RdRp)) to form the replicase-transcriptase complex (RTC), which is responsible for the synthesis of the full-length viral genome (replication) and subgenomic RNA (transcription
Other Viruses
Cermelli et al. [44] evaluated the effect of E. globulus essential oil against the mumps virus, an RNA virus coated by a capsid that in turn is surrounded by a viral envelope; this virus causes painful inflammation of the parotid glands. Antiviral activity was verified by an in vitro plaque reduction assay in Vero cells. Treatment with oil (0.25 μL/mL) was performed after viral infection for a period of 72 h. Mild antiviral activity was reported, with a 33% reduction in the formation of mumps virus plaques. Based on the results, the authors speculated that because the mumps virus has an envelope, the possible mechanism , and, for some beta coronaviruses, hemagglutinin esterase (not shown). The positive-sense single-stranded RNA (+ssRNA) genome is encapsulated by N, while M and E are incorporated into the viral particle during the assembly process. The replication cycle begins with the arrival of the SARS-CoV-2 virus to the target cell. The S viral protein binds to its receptor in the cell, the angiotensin-converting enzyme 2 (ACE2). After receptor binding, the S protein is cleaved by the cell surface serine protease TMPRSS2, forming two subunits, the S1 subunit containing the receptor-binding domain (RBD) and the S2 subunit containing the binding peptide to the fusion protein present in the cell membrane, allowing the entry of the virus into the host cell, either through the formation of an endosome or by the fusion of the viral envelope. Following the fusion of the virus and host cell membranes, the uncoating occurs and the release of viral RNA into the cytoplasm to initiate the translation of coterminal polyproteins (pp1a/ab), which carry out the replication of the viral genome. After translation of viral RNA into polyproteins, the major protease (M pro ), a homodimeric cysteine protease, self-cleaves in order to cleave polyproteins into nonstructural proteins (nsps). Several nsp proteins interact with nsp12 (also called RNA-dependent RNA polymerase (RdRp)) to form the replicase-transcriptase complex (RTC), which is responsible for the synthesis of the full-length viral genome (replication) and subgenomic RNA (transcription). Viral structural proteins are expressed and transferred to the endoplasmic reticulum (ER). Genomic RNA encapsulated in protein N is translocated with structural proteins in the ER-Golgi intermediate compartment (ERGIC) to form new viral particles. Finally, the new virions are secreted from the infected cell by exocytosis. The red box indicates the potential mechanism of action against SARS-CoV-2 by 1,8-cineole and other monoterpenes present in Eucalyptus essential oils by inhibiting M pro (binding to the active site), thus inhibiting proteolysis of viral polyproteins necessary for virus replication.
It should be noted that there are studies that have addressed the action of Eucalyptus essential oils against the main vectors of several viral diseases. It has been described that the essential oil from E. nitens [65], E. camaldulensis [66], E. polybractea, and E. smithii [67] have a repellent and larvicidal effect against Aedes aegypti and Aedes albopictus, the main vectors that transmit the Zika virus, yellow fever virus, and Dengue virus. The essential oil from E. globulus has shown acaricidal and repellent activity against Rhipicephalus bursa, a vector of the Crimean Congo hemorrhagic fever (CCHF) virus [68]. Moreover, it has been shown that some nonvolatile acylphloroglucinol dimers from Eucalyptus inhibit the Zika virus (ZIKV) and could be developed as anti-ZIKV agents [69]. These studies give us reasons to consider the potential of Eucalyptus essential oils against other viruses and their biological vectors.
Conclusions
This review critically examined and exhaustively summarized the data that exists in the literature on the emerging role of Eucalyptus essential oil for medicinal use as antiviral therapy, emphasizing the current understanding of the chemical composition and of the antiviral molecular mechanisms that arise from in vitro, in vivo, and in silico models.
Eucalyptus essential oil has demonstrated incredible health benefits and, because of this, is widely used in traditional medicine to treat symptoms of airborne infectious diseases, including the common cold, pulmonary tuberculosis, nasal congestion, sinusitis, bronchial disease, and asthma, and is also used as a disinfectant, antioxidant, and antiseptic agent, especially in the treatment of respiratory tract infections. Regardless of the route of administration of Eucalyptus preparations, the essential oil after being absorbed is eliminated by the pulmonary route, where it exerts its antiseptic and expectorant actions, justifying the interest by researchers for its application in treating respiratory diseases. Oil for medicinal use is characterized by a high 1,8-cineole content (greater than 70% v/v) and is obtained mainly from the leaves of E. globulus. However, the different compositions of bioactive monoterpenes present in the different species of Eucalyptus for medicinal use promote differences in their phytopharmacological properties, which should be studied on a case-by-case basis.
Although it has been shown that the main effect of 1,8-cineole is the anti-inflammatory activity through IκBα-and JNK-dependent inhibition of NF-κ B p65 and NF-κ B, multiple studies have reported activity against a variety of RNA and DNA viruses. Most experiments in vitro, in vivo, and in silico models refer to the activity against enveloped viruses, in particular, HSV-1 and HSV-2, and, to a greater extent, influenza virus and SARS-CoV-2.
In vitro and in vivo studies clearly show that the primary mechanism of antiviral and viricidal action of Eucalyptus essential oil is based on the direct action of its components on free virions and thus the inhibition of the steps involved in binding, penetration, intracellular replication, and virus release from host cells. This was concluded because in most cases, the most significant antiviral effect was observed when the virions (in the liquid phase or ambient air) were incubated with individual oils or monoterpenes for 1 h or more before their addition to host cells (pretreatment of cell-free virions), indicating a direct effect (viricidal effect) on the virions outside host cells. With respect to this, it can be speculated that the viricidal effect is due to alterations in the enveloped virus and its associated structures, such as glycoproteins, which are necessary for virus adsorption and entry into host cells. Some in silico studies complement this mechanism by demonstrating the inhibition of vital viral enzymes. Thus, through molecular docking techniques, it has been possible to demonstrate that several major monoterpenes in Eucalyptus essential oil for medicinal use (1,8-cineole, α-pinene α-terpineol, limonene, and o-cymene) can exert relatively strong binding and inhibit the active site of the M pro protein, a key homodimeric cysteine protease enzyme that cleaves polyproteins into individual proteins necessary for SARS-CoV-2 replication and transcription.
Great emphasis should be placed on potential applications of Eucalyptus essential oil, such as the coadministration of Eucalyptus essential oil with available antiviral vaccines because this approach has been shown to considerably increase protection against viruses involved in infectious airborne diseases. In addition to enhancing its use as a complementary treatment of symptoms caused by viruses, another mechanism of action involved is the attenuation of inflammatory responses caused by viruses, with special emphasis on viruses that cause respiratory diseases, in which essential oils exert anti-inflammatory, mucolytic, and spasmolytic effects. | 8,654.8 | 2021-11-23T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Chemistry"
] |
Global alteration of T-lymphocyte metabolism by PD-L1 checkpoint involves a block of de novo nucleoside phosphate synthesis
Metabolic obstacles of the tumor microenvironment remain a challenge to T-cell-mediated cancer immunotherapies. To better understand the interplay of immune checkpoint signaling and immune metabolism, this study developed and used an optimized metabolite extraction protocol for non-adherent primary human T-cells, to broadly profile in vitro metabolic changes effected by PD-1 signaling by mass spectrometry-based metabolomics and isotopomer analysis. Inhibitory signaling reduced aerobic glycolysis and glutaminolysis. A general scarcity across the panel of metabolites measured supported widespread metabolic regulation by PD-1. Glucose carbon fate analysis supported tricarboxylic acid cycle reliance on pyruvate carboxylation, catabolic-state fluxes into acetyl-CoA and succinyl-CoA, and a block in de novo nucleoside phosphate synthesis that was accompanied by reduced mTORC1 signaling. Nonetheless, exogenous administration of nucleosides was not sufficient to ameliorate proliferation of T-cells in the context of multiple metabolic insufficiencies due to PD-L1 treatment. Carbon fate analysis did not support the use of primarily glucose-derived carbons to fuel fatty acid beta oxidation, in contrast to reports on T-memory cells. These findings add to our understanding of metabolic dysregulation by PD-1 signaling and inform the effort to rationally develop metabolic interventions coupled with immune-checkpoint blockade for increased treatment efficacy.
Introduction
Immune-checkpoint blockade targeting programmed cell death 1 (PDCD1/PD-1) and its ligand CD274 (PD-L1) has shown promise for the treatment of tumors of various histologies, with long-term responses and limited toxicities in a subset of patients 1 . T-cells found at the tumor margin can infiltrate and proliferate within the tumor upon successful immune-checkpoint blockade 2 . The tumor microenvironment presents obstacles to T-cell infiltration, and there is a growing appreciation for the metabolic restrictions imposed, such as competition with tumor cells for glucose 3 . Thus, it may be advantageous to couple immune-checkpoint blockade with metabolic interventions, and to include metabolic robustness in the engineering of cytotoxic T-cells 4 . Detailed knowledge of T-cell metabolism in different contexts would be a prerequisite for the rational development of such strategies.
The study of metabolism changed radically with the ability to do systems-level analyses using mass spectrometry 5 . Currently, there is no consensus regarding how to prepare non-adherent mammalian cell samples for metabolomics, and even less guidance specific for immune cells 6,7 . Metabolomic studies of human T-cells have used as many as 30 million cells per replicate 8,9 , an input requirement which may prohibit profiling rare Tcell subsets. A typical extraction protocol for intracellular metabolites includes: (i) separation of cells from media, (ii) a wash step to eliminate remaining contaminating media metabolites, (iii) quenching of metabolism and extraction of metabolites. Separation and washing can be done rapidly with adherent cells, but centrifugation requires prolonged exposure to the wash solution, which ideally should maintain physiological conditions while minimizing post-harvesting metabolic activity.
The purpose of this study was to broadly profile metabolic changes in human T-cells effected by PD-1 axis signaling using mass spectrometry-based metabolomics analyzing a panel of 155 polar and semi-polar metabolites. We simulated in vitro a tumor-directed attack by primed T-cells and used recombinant human PD-L1 protein to engage the immune-checkpoint, without the use of target cells and ensuing issues of cell separation and contaminating metabolites. To ensure profiling physiologicalrange conditions, antibody-based activation was adjusted using melanoma cell line antigen presentation as a guide. Our approach to minimize input requirements and optimize readout included screening several candidate wash solutions. As a reference control, we tested the ability of our assay parameters to identify expected metabolic changes induced by substitution of glucose by galactose in culture media. Finally, we incorporated [U-13 C] glucose tracer experiments and analyzed carbon fate by steadystate isotopomer distributions of metabolites to infer relative pathway activities. This is particularly useful for non-linear pathways with multiple possible contributing pathways, such as the tricarboxylic acid (TCA) cycle 10 .
Simulating an abortive T-cell tumor-directed attack
To simulate an abortive T-cell tumor-directed attack, we used stimulating antibodies, followed by recombinant human PD-L1 exposure in a plate-based system. Previously frozen PBMC were thawed and expanded for 7 days with induction of surface PD-1 ( Supplementary Fig. S1) before T-cell isolation and treatment (Fig. 1a). Multiparameter immuno-phenotyping of T-cell differentiation using this expansion protocol was previously published by our group 11 . We adjusted our treatment conditions to have the same cytokine-based output levels as a cell-based antigen presentation system. MART-1-specific transgenic human T-cells (F5 TCR) were co-cultured with peptidepresenting target cells and interferon gamma (IFNγ) release was used to gauge activation and inhibition. As expected, M202 melanoma and K562-A2.1 peptide-pulsed cells caused IFNγ release, contrary to mock-pulsed cells and M238 melanoma cells that do not present the peptide (Fig. 1b). At a similar level of activation using stimulating antibodies, the PD-L1 peptide efficiently inhibited IFNγ release (Fig. 1b).
Optimization of metabolite collection and analysis
Next, we sought to find an appropriate wash solution for our intracellular metabolite extraction protocol. We reasoned that microscopy could provide a quick screen for drastic morphologic changes. Ammonium acetate, a common metabolite collection-protocol wash solution for adherent cells, caused Jurkat cells to swell over 4 min, while a methanol-based solution altered morphology on contact (Fig. 1c). To more formally assess the effect of ammonium acetate, we used adherent mouse embryonic fibroblasts (MEF) cells, because their exposure time to NaCl 0.9% (isotonic) and ammonium acetate solutions could be easily controlled. The variability of measurements with ammonium acetate increased as a function of exposure length (Fig. 1d). To overcome the osmotic swelling, we chose to proceed with an isotonic solution. Metabolite profiles of Jurkat cells using NaCl vs. mannitol isotonic wash solutions were highly correlated, with R 2 = 0.93 (Fig. 1e), and with mannitol preserving the measurement of more metabolites. To co-optimize for a low cellular input amount that can quantitatively detect a substantial number of metabolites, we first assessed the ability of serial two-fold dilutions to predict the metabolite levels of the maximum cell number sample and found a good albeit decreasing correlation across the range of dilutions (Fig. 1f). Nonetheless, the number of reproducibly measured metabolite levels was highest at 5 × 10 5 cells, although at the cost of detecting fewer metabolites. Further, our ability to confidently measure decreasing levels of metabolites of successive dilutions was best (see figure on previous page) Fig. 1 Development of a platform to interrogate PD-L1-induced changes by LC/MS metabolomics. a Schematic representation of T-cell treatment prior to metabolite extraction. PBMC are expanded using a clinical grade adoptive cell transfer protocol that leads to upregulation of PD-1 receptor on the surface of T-cells. Isolated T-cells are then seeded in plates with anti-CD3 and anti-CD28 antibodies with or without recombinant human PD-L1 (rhPD-L1). b IFN gamma ELISA of 24 h supernatants. T-cells bearing an exogenously expressed MART-1-specific T-cell receptor (F5 TCR) were co-cultured with M202 melanoma cells that present MART-1 via HLA-A2.1, M238 cells that do not, K562 cells with exogenous expression of HLA-A2.1 pulsed with MART-1 [26][27][28][29][30][31][32][33][34][35] peptide, or stimulated with anti-CD3 and anti-CD28 antibodies without target cells. Anti-CD3 and anti-CD28 antibodybased activation of T cells is comparable to levels of cell-based melanoma antigen presentation. c Light microscopy images show changes in cellular morphology with certain wash solutions. Jurkat cells were exposed for the indicated time intervals. The ammonium acetate iso-osmotic wash buffer provides little tonicity and causes considerable swelling of the cells. The ice-cold ammonium bicarbonate solution in 60% methanol causes cell changes on contact. In contrast, such changes are not noted with the mannitol solution. d Principal component analysis of 137 of 155 profiled metabolites demonstrates that the variance of intracellular metabolite measurements increases as a function of time when exposed to the ammonium acetate solution. Spontaneously immortalized 3T3 mouse embryonic fibroblasts were exposed to a 0.9% NaCl or 150 mM ammonium acetate solutions for the indicated times. The ellipses were added for emphasis. e Correlation of profiled metabolites between mannitol and NaCl wash solutions. Triplicates of 2 × 10 6 Jurkat cells per condition. f Correlation of metabolite measurements of two-fold serially diluted samples. Values are corrected for the dilution factor. The dashed lines represent a perfect correlation. The solid colored lines are the best fit lines of the data. g Schematic model displaying the reduced flux of galactose metabolism ending with complete oxidation to CO 2 compared to high glycolytic flux with glucose and production of lactate. h Supernatant lactate levels of activated primary human T-cells in glucose-containing or galactose-containing culture medium. Displayed are the summary results of three experiments, sampled from three replicate wells per condition (n = 9 vs. 9) between 1 × 10 6 and 5 × 10 5 cellular inputs (Supplementary Fig. S2). Based on these comparisons, together with format considerations, we proceeded with 8.4 × 10 5 cells per replicate, using a mannitol wash solution.
To test our parameters for metabolic extraction in the context of measuring a metabolic change, we activated Tcells in galactose-containing medium, which is known to decrease glycolytic flux and force cells to respire (Fig. 1g) 12 . As anticipated, lactate production was suppressed in galactose-containing medium (Fig. 1h). After 24 h, we detected higher intracellular levels of Leloir pathway intermediates UDP-hexose and hexose-phosphate with galactose treatment (Table 1). Aspartic acid was also elevated, consistent with recent reports of respiration driving aspartate production in proliferating cells 13,14 . Glycolytic intermediates were higher in the glucose condition, as was UDP N-acetylglucosamine, a reflection of glucose availability for the hexosamine pathway 15 . These findings fit expectations based on the literature.
Profiling of PD-L1 checkpoint-induced metabolic changes reveals a block in de novo nucleoside phosphate synthesis
Next, we interrogated metabolic changes caused in primary human T-cells by PD1 axis signaling. After 72 h of treatment, we adequately measured 146 of 155 intracellular metabolites of our panel, and found most to be reduced by treatment (Fig. 2a). Intracellular levels of glycolytic intermediates were reduced (Fig. 2b), as were lactate levels in media (Fig. 2c). Consumption of glutamine and serine were also reduced (Fig. 2c). Relative levels of nucleoside phosphates were decreased (Fig. 2d).
To probe further, we performed isotopomer analysis. This showed that despite similar incorporation of heavy carbons from [U-13 C] glucose into ribose, very little label was incorporated into nucleoside phosphates with PD-L1 treatment ( Fig. 3a and Supplementary Fig. S3). Isotopomer analysis further showed differential carbon labeling of Krebs cycle intermediates ( Fig. 3b and Supplementary Fig. S4). Based on these carbon tracing changes, we inferred a model of fluxes into the cycle. PD-1 signaling results in increased reductive carboxylation of pyruvate, anaplerosis of the cycle at acetyl-CoA and succinyl-CoA that could come from fatty acids and branched chain amino acids, less anaplerosis from glutamine, and less nucleoside synthesis despite higher relative levels of aspartate (Fig. 3c).
We questioned whether the block in nucleoside phosphate synthesis was an early event. At a 24 h time-point, the landscape of metabolic changes was consistent but more modest than at 72 h ( Supplementary Fig. S5). Among relative level changes caused by PD-L1 treatment with an FDR < 0.05, we found reduced ribose-5-P and UMP, and increased aspartic acid (Fig. 4a). The inhibited cells incorporated less 13 C from ribose-5-P into nucleoside phosphates, as reflected in their M5 isotopomers. At this early timepoint, the fold reduction was larger for pyrimidines than purines (Fig. 4b). The mechanistic target of rapamycin complex 1 (mTORC1) has been shown to regulate nucleoside phosphate synthesis 16,17 . We probed phosphorylation sites on downstream targets of mTORC1, including carbamoyl-phosphate synthetase 2, aspartate transcarbamylase, and dihydroorotase (CAD), and found decreased mTORC1 activity with PD-L1 treatment (Fig. 4c). In certain cellular contexts, isolated deficiency of nucleoside phosphate synthesis can be rescued by providing exogenous downstream substrates, including nucleosides [16][17][18] . However, in this checkpoint context, we could not document a rescue of proliferation in PD-L1-treated cells supplemented with exogenous nucleosides (Fig. 4d and Supplementary Fig. S6). Inhibition of mTOR partially phenocopies the PD-L1mediated block in de novo nucleoside phosphate synthesis We questioned whether the block in pyrimidine nucleoside phosphate synthesis could be solely attributed to mTORC1 inhibition, or whether additional PD-L1activated mechanisms were responsible for the magnitude of the phenotype and its refractoriness to reversal by exogenous nucleoside administration. To address this question we similarly stimulated and expanded T-cells for 7 days and then treated for 48 h with rapamycin, a known inhibitor of mTORC1 and an immunosuppressant used clinically 19 . We found that low doses of rapamycin were sufficient to block phosphorylation of p-70 S6 kinase and its target phosphorylation site on CAD, although phosphorylation of 4E-BP1 was only reduced at much higher doses ( Fig. 5a and Supplementary Fig. S7). We next measured intracellular nucleoside phosphate levels and found lower levels upon rapamycin treatment (Fig. 5b), yet the effect was smaller than with PD-L1 treatment for 72 h (Fig. 2d). We confirmed that exogenous unlabeled nucleosides could be incorporated in the intracellular metabolite pool by measuring the fraction of [U-13 C] glucose-derived label. Glucose-derived heavy carbon labeling could be competitively depleted by increasing concentrations of unlabeled nucleosides (Fig. 5c). However, similarly to the PD-L1 treatment, mTOR inhibition could not be circumvented by the administration of the nucleoside cocktail, as measured by the metabolic activity (resazurin reduction capacity) of the treated T-cells (Fig. 5d).
Discussion
Our motivation to study human T-cell metabolism led us to develop an in vitro platform where cells can be activated and treated with ligands, drugs, or different nutrient conditions, and metabolic changes can be measured by LC/MS, with the use of an extraction protocol optimized for T-cells. This approach enabled us to exhibit a landscape of metabolic changes caused by PD-1 checkpoint engagement. Our study is unique, in that we have used a cell-based antigen presentation system as a guide for relevant in vitro artificial stimulation, and used [U-13 C] glucose, to infer pathway activities and substrate There are several proposed methods for metabolite extraction of non-adherent cells. We found isotonic solutions to be the most appropriate for washing (Fig. 1c, d) and chose to work with mannitol because it is compatible with both LC/MS and capillary electrophoresis technologies 20 . Early in our investigation we evaluated a filterbased method originally applied to microbial extraction that did not require pumps and automation 21 , but despite experimenting with several filter types and sizes we found filter clogging to be an issue. This limitation and reports of suboptimal recovery of metabolites from filters 6 led us to prioritize a centrifugation-based strategy. A methanolbased wash solution 22 distorted cells on contact (Fig. 1c) and has been reported to cause leakage of metabolites 6 , leading us to consider this option no further. Ammonium acetate is a common metabolite extraction wash solution because it is volatile and should not contribute to ion suppression after evaporation during sample preparation.
A study with primary focus on lipidomics in Jurkat cells concluded that ammonium acetate is appropriate for human non-adherent cell metabolomics 7 . In our work, we found that it causes cells to swell. Penetration of mammalian cells by ammonium salts of weak acids has been studied 23 . More importantly, we found metabolite variability increased as a function of time exposed to ammonium acetate (Fig. 1d). However, we analyzed polar metabolites and do not know whether the lipid compartment is similarly affected.
We found that PD-1 inhibitory signaling shifts metabolism away from aerobic glycolysis and glutaminolysis (Fig. 2b, c), and forces the cell to utilize alternative substrates to feed the TCA cycle (Fig. 3b, c), in agreement with previous reports 9, 24 . The single other mass spectrometry-based metabolomics study related to PD-1 signaling 9 reported a block in the uptake and utilization of branched chain amino acids (BCAA), based on relative levels of intracellular and extracellular valine, and intracellular levels of the downstream metabolite 4-methyl-2oxopentanoate. Induction of carnitine palmitoyltransferase I expression and increased mitochondrial spare respiratory capacity were also demonstrated, reminiscent of findings in murine T-memory cells 25 . In contrast to T-memory cells, lysosomal acid lipase (LAL) was not increased while adipose triglyceride lyase (ATGL) protein levels were. ATGL appears dispensable for T-cell memory 25 . We showed that unlabeled carbons enter the TCA cycle as acetyl-CoA (Fig. 3c), which does not support the idea of PD-L1-treated cells engaging in "cell-intrinsic lipolysis". This term refers to the futile diversion of glucose-derived carbon into triglyceride synthesis by T-memory cells, only to be mobilized from lysosomes to re-enter the TCA cycle 25 . Thus, fatty acid beta-oxidation appears qualitatively different between PD-1-signaling cells and Tmemory cells. Future studies are required to elucidate whether these early metabolic differences between PD-1signaling cells and T-memory cells are causally linked to their distinct epigenetic profiles and potential for reinvigoration 26,27 . Considering the block of BCAA catabolism reported with PD-1 signaling 9 , the increase of unlabeled carbons we documented entering the TCA cycle at succinyl-CoA (Fig. 3c) must be attributed to the terminal three carbons of odd-chain fatty acids alone. However, our study does not provide the fractional contribution of each substrate to definitively confirm a block in BCAA catabolism, nor can we discount effects that our different treatment conditions could have had on those fractions.
For the first time, we demonstrated that PD-1 signaling results in a block of nucleoside phosphate de novo synthesis. This was more notable for pyrimidines at an early, 24 h timepoint (Fig. 4b). The difference in timing is consistent with pyrimidine de novo synthesis being controlled rapidly by mTORC1 phosphorylation of CAD and the control of purine synthesis being regulated through slower transcriptional mechanisms 16,17 . Indeed, we observed less phosphorylation of canonical mTORC1 targets and CAD (Fig. 4c). Activated T-lymphocytes are dependent on de novo purine and pyrimidine synthesis for proliferation and survival, and nucleotides regulate their cell cycle 28 . Nonetheless, a cocktail of purines and pyrimidines by itself was not sufficient to rescue the proliferation of PD-L1-treated cells (Fig. 4d). Additionally, mTOR inhibition by rapamycin treatment only partially phenocopied the block in nucleoside phosphate de novo synthesis and incorporation of glucose-derived heavy carbons, even though the phosphorylation of CAD and p-70 S6K was effectively inhibited (Fig. 5a). On the one hand, this suggests that S6K activity is not the only determinant of de novo pyrimidine and purine synthesis, although required for full capacity. Rapamycin-resistant T-cell proliferation has been documented in the context of strong T-cell receptor stimulation 19 , which would require such residual ability to synthesize RNA and DNA building blocks. On the other hand, even the partial inhibition by rapamycin was not circumventable by the provision of exogenous nucleosides. In fact, this intervention appeared to be further deleterious to the metabolic fitness of the cells (Fig. 5d). Possibly, this is an indication of substrate level negative feedback 29 . While we did not identify an effective maneuver to clearly define the role of mTOR in the context of PD-L1 signaling, conceivably there may be a combination of individual nucleoside concentrations that would have a positive effect. Based on the more profound inhibition with PD-L1 compared to rapamycin it is tempting to invoke other nodes of metabolic regulation, in addition to the central role and multiple functions described for mTOR in T-cell biology 30 . PD-1 signaling also inhibits AKT, another central regulator of proliferation and metabolism 24 . Furthermore, PD-1 inhibits the cell cycle through upregulation of p27 kip1 , in contrast to the PI3K-dependent downregulation of p27 kip1 described in rapamycinresistant T-cell proliferation 19,31 .
Future in vivo studies with transgenic animals will likely be required to expand on our findings and define which metabolic interventions can be coupled with immunotherapies for increased therapeutic efficacy.
Cell lines and culture
Peripheral blood mononuclear cells (PBMCs) were obtained from a healthy donor by leukapheresis under UCLA IRB#10-001598. M202 and M238 were established from patient biopsies with informed consent from all subjects under UCLA institutional review board approval IRB#02-08-067 32 . The K562-A2.1 cell line was a kind gift of Dr. Cedric Britten 33 . Human PBMCs, Jurkat cells, M202 and M238 melanoma cells were cultured in RPMI 1640 with glutamine and supplemented with 10% fetal calf serum (FCS) and 1% streptomycin, penicillin, and fungisome antibiotic cocktail (100x SPF, 10,000 units/mL penicillin, 10,000 µg/mL of streptomycin, and 25 µg/mL Amphotericin B). Immortalized MEF were cultured in DMEM with 10% FCS and 1% SPF. MEFs were immortalized as previously described 34 .
T-cell-directed expansion of PBMCs
Human PBMCs were expanded toward the T-cell lineage as previously described 11 using 10% FCS instead of human AB serum. Medium acidity was monitored and fresh medium added every 2 days to maintain a concentration of 7 × 10 5 cells/mL.
T-cell metabolite extraction protocol
Treated T-cells were harvested into microcentrifuge tubes and centrifuged at 500×G-force for 4 min. The medium was collected and stored at −80°C until further processing. Ice-cold 5.4% mannitol wash solution was added, 1 mL to each pellet, and centrifuged with the same parameters at 4°C. Subsequent steps were performed on ice with solutions ice-cold. The wash solution was removed and sequentially 250 µL methanol, 250 µL water, and finally 250 µL chloroform were added, vortexing briefly between steps. The mixture was centrifuged at 16,000×G-force for 5 min. The top polar phase was collected into glass chromatography vials and stored at −80°C until further processing. Interphase protein was measured by BCA assay kit (ThermoFisher #23225). The polar phase, 20 µL of the supernatants, and three mockextracted controls per run (polar phase of extraction mixture without cells) were evaporated with an EZ2-Elite centrifugal evaporator for 80 min on HPLC setting with 30°C maximum temperature. The samples were blockrandomized and stored at −80°C prior to recovery and submission of 1/10 of each replicate for mass spectrometry. Additional extraction methods tested are described in the supplemental methods.
Data analysis
Mass spectrometry metabolomics data were meannormalized between runs. Significant changes were calculated using a two-sided Student's t-test assuming unequal variance. P-values were adjusted for multiple hypothesis testing using Benjamini and Hochberg's method of false discovery rate (FDR). Analysis was performed in R 36 . | 5,142.2 | 2019-09-30T00:00:00.000 | [
"Biology"
] |
BrainIAK tutorials: User-friendly learning materials for advanced fMRI analysis
Advanced brain imaging analysis methods, including multivariate pattern analysis (MVPA), functional connectivity, and functional alignment, have become powerful tools in cognitive neuroscience over the past decade. These tools are implemented in custom code and separate packages, often requiring different software and language proficiencies. Although usable by expert researchers, novice users face a steep learning curve. These difficulties stem from the use of new programming languages (e.g., Python), learning how to apply machine-learning methods to high-dimensional fMRI data, and minimal documentation and training materials. Furthermore, most standard fMRI analysis packages (e.g., AFNI, FSL, SPM) focus on preprocessing and univariate analyses, leaving a gap in how to integrate with advanced tools. To address these needs, we developed BrainIAK (brainiak.org), an open-source Python software package that seamlessly integrates several cutting-edge, computationally efficient techniques with other Python packages (e.g., Nilearn, Scikit-learn) for file handling, visualization, and machine learning. To disseminate these powerful tools, we developed user-friendly tutorials (in Jupyter format; https://brainiak.org/tutorials/) for learning BrainIAK and advanced fMRI analysis in Python more generally. These materials cover techniques including: MVPA (pattern classification and representational similarity analysis); parallelized searchlight analysis; background connectivity; full correlation matrix analysis; inter-subject correlation; inter-subject functional connectivity; shared response modeling; event segmentation using hidden Markov models; and real-time fMRI. For long-running jobs or large memory needs we provide detailed guidance on high-performance computing clusters. These notebooks were successfully tested at multiple sites, including as problem sets for courses at Yale and Princeton universities and at various workshops and hackathons. These materials are freely shared, with the hope that they become part of a pool of open-source software and educational materials for large-scale, reproducible fMRI analysis and accelerated discovery.
Thank you for pointing this out. We have updated the license for the tutorials on GitHub to Apache 2.0. We have also licensed the datasets under the Creative Commons Attribution 4.0 International License and updated Table 1 accordingly.
• Please consider redistributing the tutorials via CodeOcean (https://codeocean.com/) We are currently in the process of distributing these tutorials via NeuroLibre (https://conppcno.github.io), a free platform to run and execute the tutorials. No installation is required by the user. This platform is user-friendly and is widely available to the general public, so we believe that it achieves the same goal as distributing via CodeOcean. Having said that, we are big fans of CodeOcean andgoing forwardwe plan to work with them and other platforms to increase the reach of the tutorials.
• BIDS -The Brain Imaging Data Structure is not even mentioned. Given that BIDS provides a wealth of educational materials, I believe it would be great to mention it in tutorial 2. This is not to suggest to change all datasets used in the tutorial to BIDS (although their availability in BIDS format would be desirable).
Thank you for this suggestion. We are increasingly using BIDS in other work and agree that it is important to highlight the general issue of file naming and header conventions. We now highlight this issue in Notebook 2 and we provide a link to the BIDS educational materials. BIDS data are compatible with our tutorials, as the tutorials can easily be modified to suit any directory and file naming structure. We have provided an example in Notebook 2 about how to read data files in BIDS format.
• Most of the tutorials propose exercises -is there a way for students to check their solutions?
As these materials were designed for classroom teaching --and we hope others will adopt them for the same purpose --we intentionally left the answers out. However, we do have a solution key that we will finalize after the review process to incorporate any revisions to the notebooks. The solution key will be hosted as a private repo and will be provided to users upon request if they affirm that they are not currently students in a for-credit course using these materials. We have also added a note to the public repo that instructors interested in using the tutorials in courses are encouraged to contact the creators for additional information (at which point the solutions will be shared, along with other classroom advice based on three semesters of teaching).
• Tutorial 7 was excessively long for me -I would split the HPC section of the end to a separate unit (probably proposed before starting with searchlight).
We have found that the best way to teach the use of HPC is by linking it to a neuroscientific goal (here, searchlight analysis). A standalone HPC notebook would be different in character from all of the other surrounding notebooks, which are based on fMRI methods rather than generic computing topics. For these reasons, we prefer to keep Tutorial 7 as a single tutorial. Having said this, we understand the reviewer's concern; to address this concern, we have added a message at the start of the HPC section in the notebook mentioning that this will take a long time to execute in non-HPC environments. This will allow users to plan their effort in completing the notebook, and also allow users on clusters to get the full benefit of the tutorials.
• The authors explicitly state that "the materials available to learn these methods are limited, **the software is rarely open-source**, and the analyses are often difficult to run on large datasets" in the Author Summary (and then this is suggested at points). I believe that actually, most of the neuroimaging software is open-source. I believe the authors want to refer to reproducibility issues here. The paper would benefit from some discussion about how these tutorials are useful to address the problem of irreproducibility in neuroimaging.
In the revision, we have highlighted the problem of reproducibility in psychology and neuroscience (see below for changes). The tutorials are inherently designed to reproduce findings with open data and open code. We are hopeful this will encourage further transparency when users adapt the tutorial code for their research studies.
To address the reviewer's concerns listed above, we have added the following text: p. 4, lines 53-55: "Furthermore, the materials available to learn these methods do not encompass all the methods used, work is often published with no publicly available code, and the analyses are often difficult to run on large datasets without cluster computing." p. 6 lines 88-90: "One barrier to increasing the accessibility of these techniques is that, in most cases, they were created as custom code within individual labs and are thus not part of other fMRI software analysis packages." p. 11 lines 203-208: "We have released these tutorials publicly and freely. The users can also apply these methods to publicly available datasets from the existing literature, leading to independent validation of the published results. We are hopeful that this will help increase reproducibility of results more broadly: when tutorial users analyze their own data, they will have already become familiar with the tools necessary to share their code and data, leading to a cycle of improved data sharing and code validation." Lastly, we agree that most neuroimaging software is open source these days, however, some of packages require paid licenses (e.g. Princeton MVPA Toolbox, through Matlab). We have added the following text to the Introduction (p. 5-6, lines 83-88): "There exist multiple open-source packages that implement MVPA techniques and RSA. Some of these packages require paid MATLAB licenses (e.g. Princeton MVPA Toolbox, The Decoding Toolbox [17], and CoSMoMVPA [18]) and others are completely free (e.g. Nilearn [19]and PyMVPA [20,21]). Although all these packages cover a broad range of MVPA and RSA techniques, they do not cover techniques such as FCMA, ISC, ISFC, SRM, and event segmentation." • Limitations: please clearly mention what are the limitations of these tutorials (e.g., no tutorial about GLMs, which is a very standard technique, shallow coverage of containers, etc.). This is not a request for expanding the materials -as I said above they are well designed.
Our goal in these tutorials was to focus on advanced fMRI analysis techniques, and hence we consciously decided not to cover GLMs or preprocessing, or to spend much time on the deployment components such as containers themselves. We agree that these are gaps and limitations, which we have now outlined in the paper, directing readers to other resources (p. 20, lines 382-386): "Furthermore, our goal for these tutorials was to cover advanced fMRI analysis and hence our tutorials do not cover pre-processing methods, General Linear Model analysis, or software deployment options (e.g., containers) in great detail. An exhaustive list covering multiple helpful tools and tutorials is available here: https://github.com/ohbm/hackathon2019/blob/master/Tutorial_Resources.md." For all these reasons my recommendation is acceptance with minor revisions.
Oscar Esteban Postdoctoral Fellow, Stanford University
Thank you again for these detailed and constructive suggestions, which we believe have strengthened the paper and tutorials.
In this manuscript, Kumar and colleagues describe their recently released BrainIAK tutorials, a collection of resources designed to make MVPA-style analyses accessible to the broader neuroimaging community. As the authors note, there is a relative lack of educational materials for these methods despite their use throughout computational cognitive neuroscience. These tutorials, therefore, are of significant general interest to researchers working in this or related fields. I do, however, have concerns about the presentation of the tutorials in the present manuscript, particularly in their described relationship to previous work.
Thank you for your detailed and helpful comments. As described below, we have modified the introduction to position our work in the context of other available open-source packages.
• We regret this significant oversight. Our intention was not to diminish these important contributions and we completely agree that highlighting them and their relationship is important.
The Matlab packages had been excluded simply because we were focusing on packages that did not require paid software licenses. In retrospect, mentioning them serves to acknowledge the historical context and to highlight the relevance of the current work to users of those packages. As creators of the Princeton MVPA Toolbox, we view BrainIAK as the next iteration of that project, bringing it into an open-source framework, with considerably expanded functionality, and with more professional coding, documentation, and high-performance capabilities.
pyMVPA and Nilearn are directly relevant and deserve a more thorough treatment. We have now discussed these packages in the Introduction (p. 5-6, lines 83-95). Nilearn provides an interface with algorithms relevant to the contents of some of the tutorials, namely classification, regression, feature selection, dimensionality reduction, and searchlight. BrainIAK creates an expressive environment in which these and several other cutting-edge methods have been implemented, including ISC, ISFC, SRM, FCMA, TFA, Bayesian RSA. The tutorials highlight many of these functions. We are hopeful that researchers will expand and adapt the tutorials to leverage more functions from BrainIAK and other packages. To reflect these points, we have made the following changes: "The user is also encouraged to make novel contributions using the method that they learned in the tutorial, either by enhancing the method, creating a new visualization of the data, or even using the method on another dataset, e.g., from OpenNeuro (http://openneuro.org)."
It is also unclear why the data downloads on the website link to google drive, when several of the datasets such as Sherlock and Raider are available from open source repositories with better long-term archiving.
In testing prior to release, we discovered that the fastest download speeds were achieved using Google Drive and hence delivered these datasets using Google Drive. We also provide ready-touse masks and smaller extracts of the datasets, which can save time (especially on Google Colaboratory) for the novice user and enable execution on platforms with limited resources. The tutorials website (brainiak.org/tutorials) also links to a Zenodo version of the same datasets, which serves as a long-term archive.
That said, to avoid any confusion, we also now list (p. 12, Thank you for pointing this out. We will clarify this. The exercises are a part of the tutorials and may be used to learn the materials and/or may be used as part of a formal course assignment. We have made the following changes: p. 9 lines 156-161 now read as follows: "For all users, we embed background material and references, prompts for further self-study, and problem set exercises to help them learn how to generate and adapt code. The exercises for each notebook focus on neuroscientific applications of the techniques being learned; thus, by working through the exercises, students learn how to use these techniques to answer meaningful neuroscientific questions (course instructors may contact us for more information)." p. 15 lines 277-281 now read as follows: "The accompanying notebook exercises help the user understand the method and its applicability to the scientific question by requiring that they generate answers or code. These questions are posed in the context of a publicly available fMRI dataset. These questions and exercises can be used to formally evaluate students enrolled in a for-credit course (course instructors may contact us for more information)." • The authors note that the "most powerful analyses are complex and computationally intensive" (line 51-52). This is a subjective statement and depends entirely on the research question at hand. "An exhaustive list of helpful tools and tutorials is available here: https://github.com/ohbm/hackathon2019/blob/master/Tutorial_Resources.md" Reviewer 3
Reviewer #3: # Summary and general comments
In this submission, Kumar and colleagues present a library called BrainIAK for machine learning in functional neuroimaging, and an accompanying set of tutorials. The tutorials are presented in the form of jupyter notebooks, and are accessible either locally through containers or online on the google collab platform. They also include instructions for deployment on high-performance infrastructure. The data used in the tutorial are freely available and specially prepared to be used as part of a training activity. As a strength, some of the material covered in the tutorials include inter-subject correlations and representational similarity analysis, two applications which are not well covered by currently available tutorials, to my knowledge.
Overall, this new library and tutorials are remarkably comprehensive, and I believe will represent a very valuable resource for the community. My only major concern is that the authors did not properly position their work compared to other efforts.
Thank you for these positive impressions and constructive comments. We have modified the Introduction to position our work in the context of other available packages, which we agree was missing before and is important for advancing the field collaboratively (p. We have edited out these references from the abstract (p. 2, lines 43-45): "These notebooks were successfully tested at multiple sites, including as problem sets for courses at Yale and Princeton universities and at various workshops and hackathons." * intro claims several times the lack of existing education material. There is a huge amount of general-purpose tutorials for machine learning, most notably featuring the sklearn documentation.
We agree completely and have benefited tremendously from such materials. Our focus was exclusively on machine learning in the context of neuroimaging and so we did not emphasize general-purpose tutorials, such as from sklearn. In the tutorials we do occasionally provide links to specific aspects of sklearn and nilearn for additional help. We further added a link to the sklearn learning materials in the Introduction when we discuss general-purpose machine learning, and tailored our claims about the lack of education materials to be specifically about machine learning in neuroimaging. We have elaborated on the contribution of BrainIAK and its tutorials with respect to the other packages in more detail in the following paragraphs: In the Author Summary we have removed the statement that materials are unavailable and the modified lines are as follows (p. 4, lines 53-55): "Furthermore, the materials available to learn these methods do not encompass all the methods used, work is often published with no publicly available code, and the analyses are often difficult to run on large datasets without cluster computing." In the Introduction we added (p. We have added examples of preprocessing pipelines that a user could use. We have added a description in Notebook 2 on how to access data in BIDS format by showing users how to form the required file name string with a task name, space name, and run id. This will allow them to read data files in BIDS format. We have also added a link to additional resources in the "Other Resources" section.
p. 13, lines 238-244: "The user is free to use any preprocessing pipeline (e.g., fmriprep, AFNI). Data are exchanged in standard NIFTI and NumPy formats with existing tools such as Nibabel or Nilearn and our tutorials show how to import data into Python structures and use BrainIAK. Data are exchanged in standard NIFTI and NumPy formats with existing tools such as Nibabel or Nilearn and our tutorials show how to import data into Python structures and use BrainIAK. The functions in BrainIAK parse the data in a time x voxels format, with an exception being the searchlight function that takes in 4-D volumes. The BrainIAK package also serves as an ecosystem for users to contribute their own methods while avoiding duplication of methods found in other packages." p. 20, lines 384-386: "An exhaustive list covering multiple helpful tools and tutorials is available here: https://github.com/ohbm/hackathon2019/blob/master/Tutorial_Resources.md." * no material is presented to demonstrate that the proposed material achieves the stated goals. Survey results from a workshop, for example, would add some support to the usefulness of the resources.
From informal discussions at hackathons, workshops, and on our Gitter channel, participants have told us that the tutorials have helped them tremendously. We also have formal evaluations from students who have used these materials in courses at Yale and Princeton and they have been overwhelmingly positive (e.g., the most recent incarnation received a course rating of 4.8/5); we don't think that these institutional evaluations can be made publicly available. We are aware of two final projects from this course that are now in preparation for journal submission. Within our labs, these tutorials are now the starting point for all new lab members. We have also distributed them widely to colleagues, who have expressed gratitude and also use them for graduate training. We realize that this feedback is anecdotal. We will create a survey page on the BrainIAK tutorials homepage when this paper is published for people to provide feedback and suggestions, which could be posted publicly with consent. Thanks to Prof. Bellec and the NeuroLibre team for a detailed review of each notebook (so far Notebooks 1-3 have been reviewed). We have already resolved all comments for Notebooks 1-3 and the pull requests have been merged into the NeuroLibre GitHub repository. As the other notebooks are reviewed, we will work towards incorporating them as well. | 4,334.4 | 2019-05-21T00:00:00.000 | [
"Computer Science"
] |
COVID-19 CG: Tracking SARS-CoV-2 mutations by locations and dates of interest
COVID-19 CG is an open resource for tracking SARS-CoV-2 single-nucleotide variations (SNVs) and lineages while filtering by location, date, gene, and mutation of interest. COVID-19 CG provides significant time, labor, and cost-saving utility to diverse projects on SARS-CoV-2 transmission, evolution, emergence, immune interactions, diagnostics, therapeutics, vaccines, and intervention tracking. Here, we describe case studies in which users can interrogate (1) SNVs in the SARS-CoV-2 Spike receptor binding domain (RBD) across different geographic regions to inform the design and testing of therapeutics, (2) SNVs that may impact the sensitivity of commonly used diagnostic primers, and (3) the recent emergence of a dominant lineage harboring an S477N RBD mutation in Australia. To accelerate COVID-19 research and public health efforts, COVID-19 CG will be continually upgraded with new features for users to quickly and reliably pinpoint mutations as the virus evolves throughout the pandemic and in response to therapeutic and public health interventions.
Introduction
contact tracing efforts and to inform public health decisions -these are paramount to the re-48 opening of countries and inter-regional travel (Collins 2020; Rockett et al. 2020; Oude Munnink, 49 et al. 2020; Gudbjartsson et al. 2020;Pybus et al. 2020). Yet, the quantity and complexity of 50 SARS-CoV-2 genomic data (and metadata) make it challenging and costly for the majority of 51 scientists to stay abreast of SARS-CoV-2 mutations in a way that is meaningful to their specific 52 research goals. Currently, each group or organization has to independently expend labor, 53 computing costs, and, most importantly, time to curate and analyze the genomic data from 54 GISAID before they can generate specific hypotheses about SARS-CoV-2 lineages and 55 mutations in their population(s) of interest. 56
Results 58
To address this challenge, we built COVID-19 CoV Genetics (COVID-19 CG, covidcg.org), a 59 found in only 1.05% of the Australian SARS-CoV-2 sequences before June, now constitutes 84 more than 90% of the sequenced June through August isolates ( Figure 2C). This geographical 85 and temporal variation is important to incorporate into the design and testing of therapeutic 86 antibodies (such as those under development as therapeutics by Regeneron that specifically 87 target the SARS-CoV-2 Spike RBD), as well as mRNA or recombinant protein-based vaccines. 88 This will help to assure developers of the efficacy of their therapeutics and vaccines against the 89 SARS-CoV-2 variants that are present in the intended location of implementation. 90 91 In addition, COVID-19 CG can be harnessed to track changes in SARS-CoV-2 evolution post-92 implementation of therapeutics and vaccines. It will be crucial to watch for rare escape variants 93 that could resist drug-or immune-based interventions to eventually become the dominant 94 SARS-CoV-2 variant in the community. This need was particularly emphasized by a Regeneron 95 study that demonstrated that single amino acid variants could evolve rapidly in the SARS-CoV-2 96 Spike to ablate binding to antibodies that had been previously selected for their ability to 97 neutralize all known RBD variants; these amino acid variations were found either inside or 98 outside of the targeted RBD region, and some are already present at low frequency among 99 human isolates globally (Baum et al., 2020). The authors, Baum et al., suggested that these 100 rare escape variants could be selected under the pressure of single antibody treatment, and, 101 therefore, advocated for the application of cocktails of antibodies that bind to different epitopes 102 to minimize SARS-CoV-2 mutational escape. A recent study by Greaney et al. generated high-103 resolution 'escape maps' delineating RBD mutations that could potentially result in virus escape 104 from neutralization by ten different human antibodies (Greaney et al., 2020). Based on lessons 105 learnt from the rise of multidrug resistant bacteria and cancer cells, it will be of the utmost 106 importance to continue tracking SARS-CoV-2 evolution even when multiple vaccines and 107 therapeutics are implemented in a given human population. 108 Diagnostics developers can evaluate their probe, primer, or point-of-care diagnostic according 110 to user-defined regional and temporal SARS-CoV-2 genomic variation. More than 665 111 established primers/probes are built into COVID-19 CG, and new diagnostics will be continually 112 incorporated into the browser. Users can also input custom coordinates or sequences to 113 evaluate their own target sequences and design new diagnostics. 114
115
Case study of SNVs that could impact the sensitivity of diagnostic primers: A recent 116 preprint alerted us to the finding that a common G29140T SNV, found in 22.3% of the study's 117 samples from Madera County, California, was adversely affecting SARS-CoV-2 detection by the 118 NIID_2019-nCoV_N_F2 diagnostic primer used at their sequencing center; the single SNV 119 caused a ~30-fold drop in the quantity of amplicon produced by the NIID_2019-nCov_N_F2/R2 120 primer pair (Vanaerschot et al., 2020). We used COVID-19 CG to detect other SNVs that could 121 impact the use of this primer pair, discovering that there are SARS-CoV-2 variants in several 122 countries with a different C29144T mutation at the very 3' end of the same NIID_2019-123 nCoV_N_F2 primer ( Figure 3A). As the authors of the preprint, Vanaerschot et al., noted, SNVs 124 could impact assay accuracy if diagnostic primers and probes are also being used to quantify 125 viral loads in patients. We found that at least ten other primer pairs could potentially be at risk in 126 different geographical regions due to SNVs that appear proximal to the 3' ends of primers 127 N_Sarbarco_R1; and Institut Pasteur, Paris 12759Rv. We advocate that labs and clinics use 132 COVID-19 CG (https://covidcg.org) to check their most commonly used primers and probes 133 against the SARS-CoV-2 sequences that are prevalent in their geographic regions. 134
Researchers and public health professionals can use COVID-19 CG to gain insights as to 136
how the virus is evolving in a given population over time (e.g., in which genes are mutations 137 occurring, and do these lead to structural or phenotypic changes?). For example, users can 138 track D614G distributions across any region of interest over time. goal that affects all of humanity, we advocate for the increased sequencing of SARS-CoV-2 209 isolates from patients (and infected animals) around the world, and for these data to be shared 210 in as timely a manner as possible.
Data Pipeline 213
Our data processing pipeline is written with the Snakemake scalable bioinformatics workflow 214 engine (Koster and Rahmann, 2012)
Application Compilation 264
The web application is written in Javascript, and primarily uses the libraries React.js, MobX, and 265 Vega. The code is compiled into javascript bundles by webpack. All sequence data is 266 compressed and injected inline as JSON into the javascript bundle -no server is needed to 267 serve data to end users. The compiled application files can then be hosted on any static server. | 1,579.2 | 2020-09-24T00:00:00.000 | [
"Medicine",
"Computer Science",
"Environmental Science"
] |
Fry Counting Models Based on Attention Mechanism and YOLOv4-Tiny
Accurate counting is difficult in the case of large numbers of overlapping and adhering fry. In this study, we propose a lightweight target detection counting method based on deep learning methods that can meet the deployment requirements of edge computing device for automatic fry counting while obtaining a high counting accuracy. We improve the structure of YOLOv4-tiny by embedding different attention mechanisms in the cross stage partial connections blocks of the backbone network to enhance the feature extraction performance. In addition, the low efficiency of feature fusion in the original model is also addressed by adding different attention mechanisms to the neck network structure to promote the effective fusion of deep feature information with shallow feature information and improve the counting accuracy. The experimental results showed that the six models proposed in this study improved the model accuracy and recall to varying degrees compared with the original YOLOv4-tiny model, while retaining the advantages of the YOLOv4-tiny model in terms of its small number of parameters and fast inference rate. It was also shown that the CBAM(n)-YOLOv4-tiny model obtained by adding the CBAM to the neck network showed the most significant improvement, with an mean average precision (mAP) of 94.45% and a recall of 93.93%. Compared with the YOLOv4-tiny model, there were increases of 27.06% in accuracy, 30.66% in recall, 38.27% in mAP, and 28.77% in the F1-score, along with a 67.82% decrease in LAMR.
urgent need for an accurate, automated, fry-friendly, real-time counting method that can easily be deployed in embedded devices for applications such as fry sales.
With the development of electronics and computer technology, methods have emerged that no longer rely on manual counting. Lemarie et al. [11] designed an electronic fry counting device was to count fish larvae and embryos, and achieved an error of less than 10% compared with manual counting, but the accuracy of the method was affected by particles or sediment in the water. Baumgartner et al. [12] and Ferrero et al. [13] applied optical counters based infrared radiation to estimate the biomass of fry, but the method was difficult to adapt to large migration rates and situations where multiple fish overlap or are in close proximity. These fry counting means brought some convenience to different applications, but their respective drawbacks also limited their use on a large scale.
With the rapid development of computer vision technology in recent years, its application to fry counting began to gain the attention of scholars [14], [15], [16]. Images of fish are collected by cameras and then analyzed to count the fry. Compared with the traditional methods, the method using computer vision has the advantages of a high efficiency, a low workload, and less damage to the fry. Many scholars have conducted studies on image analysis methods [6], [16], [17], which can be divided into image processing using traditional machine methods learning and deep learning methods based on neural networks.
Traditional machine learning-based counting methods use techniques such as image segmentation, which requires the human extraction of features, manual setting of thresholds, and setting of regression functions to perform counting. Researchers like Ibrahin et al [18], Albuquerque et al. [19], Zhang et al. [20], and Garcia et al. [21] adopted traditional computer vision processing methods such as speckle detection and edge contour extraction to achieve the automatic counting of fry. Researchers such as Zhang et al. [22] further utilized binarization and expansion erosion to determine the biomass of fry through independently refined connected domains, improving the accuracy of overlapping regions to some extent. For complex backgrounds due to lighting conditions, researchers like Jing et al. [23] used the sober operator, which is effective for low-noise low-gradient images, to detect the contour edges of fish and count the fish in such backgrounds.
In image segmentation-based counting methods, target adhesion is an important factor affecting the counting accuracy, and for the problem of the presence of adhesions among the fish, Labuguen et al. [4] adopted adaptive thresholding segmentation and edge detection to segment and count fry. Duan et al. [24] applied morphological operations to double layer images, along with a watershed algorithm based on the Otsu automatic threshold segmentation algorithm, and achieved good results in segmenting and counting fish eggs.
Deep learning has powerful feature extraction and data representation capabilities, and can achieve high-accuracy detection with sufficient samples. Compared with the counting methods based on image segmentation, a target detection method based on deep learning locates and detects the region of interest in the selected images, which can effectively solve problems such as adhesion and overlapping among the targets, and can accurately identify individual targets even in complex backgrounds. Thus, the method outperforms traditional machine learning image segmentation in the fry counting task. In recent years, target detection methods in deep learning models have mainly been categorized as one-stage and two-stage methods. The target detection algorithm based on region convolutional neural network (R-CNN) [25] is a typical two-stage method, with flow processing that includes two steps: 1) candidate region acquisition and 2) candidate region classification and regression. Tseng et al. [26] used a mask regional-based CNN (mask R-CNN) for the pixel-level detection of fish images and finally obtained an accuracy of 77.31%. Li et al. [27] achieved an 11.2% improvement compared with the deformable parts model (DPM) by using a fast R-CNN algorithm for underwater fish detection in the laboratory-made dataset of fish images. Although two-stage approaches have advantages in terms of accuracy, they suffer from problems like repeated encoding, which leads to redundant model computation and poor real-time performance. Single-stage target detection algorithms such as the you only live once (YOLO) series and single shot multibox detector (SSD) are based on regression analysis. In contrast to twostage approaches, these methods treat target detection as a regression problem and input candidate frames directly into the model for end-to-end training, resulting in better realtime performance. One study [28] used a multi-column convolutional neural network (MCNN) as the front-end network to extract the features of fish images with different sensory fields, and adjusted the size of the convolutional kernel to adapt to the angle, shape, and size changes caused by fish movement, while applying a wider and deeper dilation convolutional neural network (DCNN) as the back-end network to detect fish targets, achieving fish school counting on that basis. In another study [29], the authors used the multi-scale fusion of anchorless YOLO v3 to perform fish counting, achieving an average accuracy of 90.20%. Although the above methods have good accuracy rates in counting, they require a relatively large number of parameters and high computational performance. Thus, the existing deep learning models are difficult to satisfy both counting accuracy and detection speed requirements, making it unlikely to apply them in edge devices.
Therefore, a lightweight target detection fry counting model based on deep learning is urgently needed. In this study, six lightweight fry counting models were obtained by combining three attention mechanisms, namely, spatial attention module (SAM), channel attention module (CAM) and convolutional block attention module (CBAM) with the backbone network and neck network of YOOv4 tiny lightweight model respectively. In the following section II, we discussed the source and preprocessing of the data set for training, and proposed our model based on the discussion of the attention mechanism. In section III, the accuracy and inference performances of proposed six models were measured and analyzed against four other mainstream models. In section IV, a research conclusion is given.
II. MATERIALS AND METHODS
The shooting location of the dataset was the Guangdong International Fisheries High Technology Park, located at No. 4, Dongyong Section, Shinnan Highway, Nansha District, Guangzhou (longitude 113.4211, latitude 22.8897). The experimental equipment mainly consisted of a white-bottomed fish tray (length: 40 cm, width: 28 cm, depth: 8 cm) and camera mounted directly above the fish tank. The camera was 0.8 m away from the horizontal plane to realize overhead photography of the fry, and the camera transmitted the acquired videos of the fry to a computer and the cloud for saving through a network cable. A schematic diagram of the whole experiment is shown in Figure 1. A frame rate of 30 FPS was used for the camera, and 268, 323, 345, 313, and 181 images were obtained for the five species of fry (black carp, crucian carp, grass carp, squaliobarbus curriculus, and variegated carp) respectively, as well as 184 images of the five kinds of fry mixed together (i.e., a total of 1593 images).
2) DATASET LABELING AND PRELIMINARY ANALYSIS
The dataset format used in this thesis was Pascal Voc2007 standard, and the labeling software was Labelimg, a graphical image labeling tool. The process of making a label using this software is shown in the Figure 2. All the fry in the image were framed using a rectangular marker box labeled as fish.
In fry detection and counting, it is often difficult to accurately count the biomass of fry in practice because of their physiological characteristics, the differences between individuals, and the environment of the experimental equipment. For example, the fry individuals shown in Figure 3(a) are very small, and the density distribution of the fry in the fish tank is uneven because they like to gather in the corners of the tray. Different volumes caused by the individual differences in the fry result in large size differences, as can be seen in Figure 3(b), where the volumes of the fish in the larger frames are three to four times larger than those of the fish in the smaller frames. Figures 3(c) and (d) show that because of the large local density caused by aggregation, the fry images are obscure. Hence, aggregation, which is commonly seen in obtained image datasets, makes it difficult to distinguish individual fry by means of traditional image segmentation, causing challenges to accurate fry counting. The white boxes in Figure 3(e) indicate fry excretions, which can produce interference during fry biomass estimation, and lead to problems such as model misjudgment. However, diverse fry images help to enhance the effectiveness of the model during deep learning training, while simultaneously improving the generalization ability of the model.
B. DATA AUGMENTATION
In order to improve robustness and recognition of the model, the image data in this study were enhanced before the images were used for model training. In this study, Mosaic was used [30] to enhance the data, which used CutMix [31] as a reference. The method selected four original photos at random, and performed data enhancement operations on each of them. As shown in Figure 4, the operations included a color gamut change, scaling, flipping, and rotating. Subsequently, for each of the four processed images, an arbitrary rectangular region was selected, and the images from these four obtained rectangular regions were stitched together to form a new image, which contained information from the original images such as the detection frames marked in the original images. Through this data enhancement process, the new image contained richer image features, which enabled the training model to achieve a better result.
is a target recognition and detection algorithm based on a deep neural network that is able to detect and classify objects in images and videos simultaneously. As a single-stage target detection algorithm, the network divides the image into a H×H grid. If the centroid of the target falls inside a cell, the cell is responsible for the prediction of that target, which will predict the relative coordinates of the bounding box locations and their corresponding target confidence scores. Finally, the detection box with the largest score is filtered as the final target detection box using the non-maximum suppression method. Its core concept is to solve the object detection as a regression problem, which makes the network structure simple, and the detection speed much faster. Since the YOLO detection algorithm was first proposed in 2016, related researchers have been improving and enhancing it, developing derived versions and good applications in the manufacturing and industrial sectors.
YOLOv4-tiny is a lightweight model of YOLOv4. As shown in Figure 5, it has a relatively low detection accuracy compared with YOLOv4, but it only has about one-tenth of the parameters of YOLOv4, which makes it ideal for deployment on edge devices. The network structure can be divided into three parts, the backbone network, neck network, and detection head, and it has a reduced version of the CSPDarknet53 backbone network, which is responsible for the extraction of image features. The neck network adopts feature pyramid networks [33], convolving P5 once and then upsampling twice. The sampling result is channel spliced with P4 to realize the fusion of different high-level semantic information features extracted by the backbone network, and finally enriches the semantic information of P4. However, compared with YOLOv4, YOLOv4-tiny uses only two detection heads, and the detection effect is theoretically weaker than that of YOLOv4. In addition, the backbone network of YOLOv4-tiny has a large number of convolutional layers stacked, resulting in low feature diversity [34].
2) ATTENTION MECHANISMS
Attention mechanisms originated from human research on the visual field, where humans selectively focus on a portion of the information that they see while ignoring the rest of the visible information during the process of information attention. In the field of computer vision, attention mechanisms score the various dimensions of the data in the input models, and weight the features according to their scores to highlight the impact of the main features on the downstream models. The attention mechanism in deep learning can be generalized as a vector of importance weights. In order to improve the accuracy of the model and solve problems such as unrecognized or misrecognized fry caused by their uneven distribution, an attention mechanism module needed to be added to the model to make the model accurately focus on the fry themselves rather than other objects to obtain more accurate results in the process of fry recognition and counting. Because the number of parameters in an attention mechanism module is generally small, its addition usually does not have a significant impact on the size and inference performance of a model. Section III will compare the number of parameters and inference performance of the network structure model after the addition of the attention mechanism.
To obtain more comprehensive information about the features in each dimension of the image, as well as more fine-grained image features, three different attention mechanism modules were used in this study: SAM [35], CAM, CBAM [36], which integrated spatial attention and channel attention. As shown in Figure 6, the CAM module first compresses the spatial dimension of the feature map by Max Pooling and Avg Pooling to generate the feature weights of the two channel dimensions. Then, the two one-dimensional feature weight vectors are compressed and expanded by a multilayer perceptron with a compression rate of 16. The two obtained feature weights are added together, and the weights of each channel are calculated by the activation function to obtain the features of the channel dimension. In terms of spatial attention, as shown in Figure 7, the channel dimension is first compressed through Max and Avg Pooling to generate two spatial attention feature maps with one channel. Then, these two feature maps are stacked to generate a feature map with one channel using a 3 × 3 convolution kernel. Through the sigmoid of the feature map, the feature weights in spatial dimensions are obtained. Figure 8 shows that the feature map of the CBAM module is multiplied by the input features of the channel dimensions after channel attention to obtain a new feature map. Then, the new feature map is multiplied again by the input feature map after the spatial attention mechanism to obtain a final feature map with the same shape as the input feature map.
3) IMPROVED MODELS
Taking YOLOv4-tiny as the basic model, this study proposed two network structures that combined different network regions. One was integrated into the backbone network, and the other was integrated into the neck network, as shown in Figure 9. As shown in Figure 9(a), two attention modules were used, embedded between the resBlock in the backbone network to enable it to extract more useful features through the attention module in the feature extraction stage. When the attention mechanism module was SAM, the obtained model was SAM(b)-YOLOv4-tiny, and when the modules were CAM and CBAM, the obtained models were CAM(b)-YOLOv4-tiny and CBAM(b)-YOLOv4tiny, respectively. As seen in Figure 9(b), a total of three attention mechanism modules were used, all three of which were integrated into the neck network to enable the model to pay attention to dense areas of fish when performing target detection, as well as feature fusion, thus improving the accuracy and efficiency of the model's recognition. Three additional models, SAM(n)-YOLOv4-tiny, CAM(n)-YOLOv4-tiny, and CBAM(n)-YOLOv4-tiny, were eventually obtained. The YOLO detection heads used in these six improved models were the same as those used in YOLOv4-tiny.
D. LOSS FUNCTION
The loss function used in the statistical model for fry counting consisted of three components, the bounding box regression loss (Loss CIoU ), confidence loss (Loss confidence ), and classification loss (Loss class ), as seen in (1). (1) The intersection over union (IoU) metric is used to measure the degree of overlapping between the predicted frame and ground-true frame in anchor-based target detection, with positive and negative samples distinguished based on the IoU value. In addition, it plays two other roles, filtering the predicted frames using non-maximum suppression (NMS) and adjusting the loss function to make the model more accurate. However, the direct use of the IoU in the loss function has disadvantages in the process of model training and optimization (i.e., it does not consider the distance between two frames, and cannot accurately reflect the degree of overlapping between them). Because the IoU between two frames is 0 when they do not overlap, if the IoU is directly used in the loss function for optimization, it is possible that no gradient data will be returned because the loss is 0, which will eventually prevent the training from being performed. Therefore, based on the IoU, researchers have proposed the generalized IoU (GIoU), distance-IoU (DIoU), and complete-IoU (CIoU) [37]. The GIoU solves the problem when the target frame does not overlap with the prediction frame in the IoU. The DIoU considers the centroid distance and overlapping area of the two frames. The CIoU is more comprehensive because it considers the aspect ratio on top of the DIoU. The formulas for the CIoU are as follows.
LOSS = Loss CIoU + Loss confidence + Loss class
Here, w gt and h gt are the width and height of the groundtrue frames, while w and h are the width and height of the predicted frames. In addition, ρ(b,b gt ) denotes the Euclidean distance between the center points of the predicted and target frames, where b is the predicted box, and b gt is the target box, while c denotes the diagonal length of the smallest box that covers both boxes, and a is a weight parameter.
The bounding box regression loss (Loss CIoU ), confidence loss (Loss confidence ), and classification loss (Loss class ) are shown in the following.
Here, Loss BCE n, n represents the binary cross-entropy loss, where n andn are the actual and predicted categories of the j-th anchor in the i-th grid; p is the probability of belonging to the fry; S represents the number of grids; B is the number of anchors in each grid, and the anchor value is taken as 3 in this study. In addition, 1 obj ij denotes that if the j-th anchor in the i-th grid contains an object, the value is 1. Otherwise, it is taken as 0, which means that the object is not included.
E. TRAINING PROCESS
The parameters for the experimental platform and environment for this experiment are listed in Table 1, and training parameters are listed in Table 2. The flow of the experimental training is shown in Figure 10, in which the dataset was divided into a training set (60%), validation set (20%), and test set (20%). The training and validation sets were used for model training and the adjustment of the hyperparameters in each epoch, while the test set was used for model testing and the adjustment of the optimized method until the optimal results were obtained.
Because the model was downsampled five times in the backbone network, the input size had to be a multiple of 32. Taking 416 × 416 as the input size for all the models, a batch size of 32, the optimizer Adam, and an initial learning rate of 5e4, the cosine annealing algorithm was used to adjust the learning rate. In addition, label smoothing [38] set at 0.005 was added to prevent the model from overfitting, and to improve the generalization ability of the model on unknown data.
To verify the superiority of the models, this study compared them with advanced target detection algorithms, including SSD, YOLOv4, YOLOv4-tiny, and GhostNet-YOLOv4. Among these, ghostnet-YOLOv4 replaced the CSPDarknet backbone network in the original YOLOv4 with a more advanced backbone network (ghostnet), which had fewer parameters and less computation. The loss of each model was relatively large at the beginning of training. After 100 iterations of network training, the 10 models tended to converge in both the training and test sets, and the loss reached a peak, at which point training stopped, as shown in Figure 11.
F. EVALUATION INDICATORS
The evaluation indicators for the models fell into two categories. One was related to the detection accuracy, where five evaluation metrics were used in the study, including the accuracy, recall, mean average precision (mAP), and LAMR. The goal of the study was to accurately detect the number of individual fry, resulting in high requirements for both the accuracy and recall. Thus, F1 was added as an evaluation indicator to balance the accuracy and recall. Usually, a higher F1 indicates a better model effect. The other category was the evaluation indicators related to the inference speed, number of neural network parameters, and FPS. Their parameters were calculated as shown in the following.
Here, TP denotes positive samples predicted to be positive by the model; FP denotes negative samples predicted to be positive by the model; FN denotes positive samples predicted to be negative; and Miss denotes the rate of missed detections. FPPI is the number of false tests in each test image, while T represents the total number of test images, and FD is the number of false detections.
In this study, the image speed inference was further tested on Jetson Nano, a development kit released by NVIDIA in 2019 that delivers the power of a modern AI in a compact, easy-to-use platform, featuring a complete range of software programmability, a quad-core 64-bit ARM CPU, and 128-core integrated NVIDIA GPUs at an affordable price, making it suitable for deployment in a variety of edge-end environments.
III. RESULTS AND ANALYSIS A. ACCURACY COMPARISON
The six improved models with the added attention mechanism were computed on the fry training dataset, and their accuracy and recall values at different score_threhold values are shown in Figure 12, from which it can be seen that when the score_threhold was 0.5, the accuracy and recall were stable. As the score_threhold increased, the accuracy of most of the models improved slightly, but the recall rates of the models decreased. In order to accurately detect the number of individual fry, both high accuracy and recall are required. Thus, the score_threhold was set at 0.5 in this study.
As can be seen in Figures 12 and 13, models (b), (d), (e), and (f) all reached an inflection point near a recall of 92% (i.e., they reached an equilibrium point), after which the accuracy dropped sharply. At that point, both the accuracy and recall were at their highest, and the average precision (AP) of the fry were all greater than 91%, with the highest AP for CBAM(n)-YOLOv4-tiny reaching 94.45%. Models (a) and (c) reached the inflection point with recall values of approximately 74%, with average precision values of 72.75% and 70.21%, respectively.
The overall performances of the six detection models in the test set are listed in Table 3. Analyses performed from the perspectives of the different attention mechanism modules showed that the CBAM module was more effective than SAM and CAM when applied to both the neck and backbone networks, where the LAMR of the CBAM(n)-YOLOv4-tiny model was the lowest, merely 0.28, and the F1 value was the highest, reaching 0.94. The CAM(b)-YOLOv4tiny and SAM(b)-YOLOv4-tiny models were less effective, with LAMR reaching 0.86 and 0.85, respectively, and F1 scores for both at 0.76. The results indicated that, among these three attention mechanisms, the detection network with the CBAM module performed better for the fry dataset than the SAM and CAM modules. From the perspective of the positions where the attention mechanism was embedded, a comparison of the additions of the modules to the backbone and neck networks showed that the LAMR value of SAM added to the neck network decreased from 0.85 to 0.44, a decrease of 48.2%, and the F1 score increased from 0.76 to 0.90, an increase of 18.4%, relative to the values when it was added to the backbone network. Relative to the addition of CAM to the backbone network, when it was added to the neck network LAMR decreased from 0.86 to 0.40, a drop of 53.5%, and the F1 score improved from 0.76 to 0.91, an increase of 19.7%. Compared with the values when CBAM was added to the backbone network, when it was added to the neck network LAMR decreased from 0.34 to 0.28, a decline of 17.6%, and the F1 score increased from 0.93 to 0.94, an improvement of 1.1%. The results showed that the addition of the attention mechanism to the neck network significantly reduced LAMR and improved the F1 score compared to the addition to the backbone network, suggesting that combining attention in the neck network was more likely to improve the model count accuracy. Based on the data in Table 3 and the above discussion, CBAM(n)-YOLOv4-tiny was the optimal model among the six proposed improved models.
For the practical application of the models, pictures with low and high fry density were selected to compare the actual results. The results of the unimproved original model (YOLOv4-tiny) and six different improved models are presented in Figures 14-20. Figure 14 shows that the original model (YOLOv4-tiny) detected 18 fry in the fry picture with low density (21 fry) and 133 fry in the picture with high density (169 fry). In Figures 16 and 17, both CAM(b)-YOLOv4tiny and SAM(b)-YOLOv4-tiny detected only 19 fry in the low-density picture, with the two undetected fry being located at the edge of the plate. Only 143 fry and 135 fry were detected in the high-density picture. Figures 15 and 18 show that both CBAM(b)-YOLOv4-tiny and CBAM(n)-YOLOv4tiny detected all the fry in the image with low density and 151 fry in the image with high density. As seen in Figures 19 and 20, CAM(n)-YOLOv4-tiny detected all the fry in the low-density image and 152 fry in the high-density image, while SAM(n)-YOLOv4-tiny detected 20 fry in the low-density image and 151 fry in the high-density image. The above results indicated that compared with the original YOLOv4-tiny model, the detection accuracy was improved by the addition of the attention mechanism modules for both low-and high-density fish populations. Whether it was CAM, SAM, or CBAM, when integrated into the neck network, they could achieve relatively high accuracy, which was basically consistent with the calculated results in Table 3. As can be seen from their corresponding plots, the detection of the fry at the edge of the fish plate, as well as in the dense regions, improved the fry counting accuracy. However, because only two images were tested, the results do not lead to the conclusion that CBAM(n)-YOLOv4-tiny was the best of the six models, as shown in Table 3. In order to verify the performance of the CBAM(n)-YOLOv4-tiny model proposed in this paper, four current mainstream target detection algorithms, namely SSD, ghostnet-YOLOv4, YOLOv4, and YOLOv4-tiny, were also tested on the fry dataset in this study. The results of these tests are listed in Table 4. When the score_threhold was 0.5, the accuracies of the mainstream four advanced models ranged from 74.23% to 95.76%, and the recall rates ranged from 65.27% to 76.88%, among which SSD had the highest accuracy rate of 95.76%, but its recall rate was the lowest at only 65.27%. The CBAM(n)-YOLOv4-tiny model proposed in this paper had the highest mAP and F1 values in the fry test set, with values of 94.45% and 0.94, respectively, in addition to having the lowest LAMR of only 0.28. Compared with the original model YOLOv4-tiny, the accuracy rate was improved by 27.06%; the recall rate was increased by 30.66%; the mAP was increased by 38.27%; the F1 score improved by 28.77%; and LAMR decreased by 67.82%.
B. EFFICIENCY COMPARISON
The counting accuracy of a model is an important indicator in the evaluation of its performance. In addition to accuracy, the model size and detection speed are also important, especially for applications such as fry sales, where these models are often applied in embedded devices to improve the ease of use. Based on the PC parameters listed in Table 1, the frames per second (FPS) value was used as the detection speed indicator, where a larger value usually indicated a better result. Because the FPS calculation was related to the performance of the computing device, and different values could be obtained at different times, each model calculated the FPS five times, and the average was used for the final FPS. Table 5 compares the model size and detection speed values of the fry counting models based on the above 10 target detection networks.
From Table 5, it can be seen that both SSD and YOLOv4 had larger numbers of parameters and poorer performances in terms of efficiency, while ghost-YOLOv4 and YOLOv4tiny had smaller numbers of parameters and performed better on embedded devices. SSD had 90.07 MB of parameters and its FPS on the PC was 105.71, while its speed on the edge device Jeston Nano was only 1.70 FPS. YOLOv4 had a parameter count of 245.53 MB and an FPS of 61.63 on the PC, causing the model to fail to load on the Jeston Nano. Because ghost-YOLOv4 and YOLOv4-tiny both used lightweight backbone networks, they had parameter counts of 43.60 MB and 22.41 MB, and FPS values of 5.29 and 12.27 on Jeston Nano, respectively. Ghost-YOLOv4 had a lower FPS value on the PC than SSD, but on Jeston Nano it performed better than SSD. This was because the PC had ample video memory, while embedded devices have limited video memory, revealing that the model size has a significant impact on the inference speed of embedded devices.
In the models proposed in this paper, it can be seen from Table 5 that the number of parameters for the six models increased very little after the integration of the attention mechanism modules. Compared with YOLOv4-tiny, after adding the SAM and CAM modules to the neck and backbone networks, the CAM(n)-YOLOv4-tiny and CAM(b)-YOLOv4-tiny models increased in size by 0.65 MB and 0.15 MB, respectively, and their inference speeds on Jeston Nano decreased by only 0.68 FPS and 0.18 FPS, respectively.
For SAM(n)-YOLOv4-tiny and SAM(b)-YOLOv4-tiny, the parameters of the models hardly increased at all, and their inference speeds on Jeston Nano decreased by only 0.34 FPS and 0.23 FPS, respectively. After combining spatial attention and channel attention, the numbers of parameters for the CBAM(n)-YOLOv4-tiny and CBAM(b)-YOLOv4-tiny models increased by only 0.66 MB and 0.15 MB, respectively, when combining the CBAM module, and their speeds in Jeston Nano were reduced by only 0.91 FPS and 0.67 FPS, respectively.
C. DISCUSSION
The comparison results for mAP and FPS obtained from the six improved models proposed in this paper, together with the four models SSD, ghostnet-YOLOv4, YOLOv4, and YOLOv4-tiny on the PC and Jeston Nano are shown in Figure 21. The results showed that the mAP was higher when using SSD and YOLOv4 for fry counting, but because of their relatively large model sizes, their detection speeds were slow, and they were difficult to deploy in embedded devices to meet the demand of real-time monitoring. In terms of detection speed, YOLOv4-tiny performed the best in the PC and Jeston Nano, but in terms of accuracy, its mAP was below 70%, which was not sufficient to meet the demand of fry counting. When the attention mechanism module was added, compared with the original model, the precision rate had different degrees of improvement without much reduction in recognition speed, where CBAM(n)-YOLOv4-tiny, CBAM(b)-YOLOv4-tiny, SAM(n)-YOLOv4tiny, and CAM(n)-YOLOv4-tiny performed better in terms of accuracy, with a maximum mAP of 94.45%. The accuracies of CAM(b)-YOLOv4-tiny and SAM(b)-YOLOv4-tiny were slightly improved compared with the original model, but their improvements were lower than those for the other four models. The initial estimation for the cause was that SAM and CAM had weak feature extraction capabilities in the backbone network, while the six improved models proposed in this paper, with their number of parameters and operation speed, could effectively adapt to the embedded devices.
To explore the effect of adding attention mechanisms on the recognition capability of the model, the weights of the last layer of the model were extracted to form a heat map. Figure 22 shows that after adding the attention mechanism, the weight of the fry region was significantly larger than that of the original model, and in the detection map of YOLOv4tiny with the heat map, one can see that the weight of the region in the yellow box was lower, which led to the fry in this region not being recognized and reduced the accuracy of recognition. After adding the attention mechanism, one can see that the weight of the edges of the image increased, which meant that the fry at the edges could be successfully recognized. Thus, after adding the attention mechanism, the FIGURE 22. Heatmaps and detection maps with and without attention. VOLUME 10, 2022 model noticed the region that the original model did not notice, which increased the fry recognition accuracy.
Nevertheless, the aggregation was severe because of the tendency of the fry to gather together, especially in the case of external disturbances. Severe rejection could cause two or more fry to be identified as only one fish, possibly because the fry were too densely packed, resulting in missed detection between detection frames as a result of non-maximum suppression.
IV. CONCLUSION
(1) To develop a fry counting method that is suitable for deployment in edge devices, this study investigated six different lightweight fry counting models by adding three different attention mechanisms (SAM, CAM, and CBAM) to different network structures of YOLOv4-tiny. These combinations all showed different degrees of improvement compared with the YOLOv4-tiny model, with CBAM(n)-YOLOv4-tiny realizing the highest mAP of 94.45% and a recall of 93.93%. The accuracy rate improved by 27.06%; the recall rate improved by 30.66%; mAP improved by 38.27%; the F1 score improved by 28.77%; and the LAMR decreased by 67.82% compared with the YOLOv4-tiny model, while the number of model parameters and inference rate did not change significantly compared with the YOLOv4-tiny model. With its lightweight features, the model is suitable for deployment in various edge computing devices.
(2) The three attention mechanism modules were added to the backbone and neck in the models, and the experimental results showed that the combined effect of the model obtained by adding the attention mechanism to the neck network was better than that of the model obtained by adding it to the backbone network, while the model enhancement obtained by adding the CBAM attention mechanism module was the most obvious.
(3) Compared with SSD, ghostnet-YOLOv4, and YOLOv4, the frame rates of the six models proposed in this paper obtained in different operating environments were significantly improved, and they have a small number of parameters.
DACHUN FENG was born in Nanchong, China, in 1973. He received the Ph.D. degree from the South China University of Technology, China, in 2009. He is currently a Professor with the College of Information Science and Technology, Zhongkai University of Agriculture and Engineering. His current research interests include intelligent information systems for agriculture, the Internet of Things, artificial intelligence, and big data.
JIEFENG XIE received the bachelor's degree in computer science and technology from the Zhongkai University of Agriculture and Engineering, in 2021, where he is currently pursuing the master's degree in computer science. His research interests include intelligent information systems for agriculture and artificial intelligence. He is currently a Professor with the College of Information Science and Technology, Zhongkai University of Agriculture and Engineering. His current research interests include the areas of intelligent information systems for agriculture, artificial intelligence, big data, and computational intelligence. VOLUME 10, 2022 | 8,607.2 | 2022-01-01T00:00:00.000 | [
"Computer Science"
] |
New Insights on Leucine-Rich Repeats Receptor-Like Kinase Orthologous Relationships in Angiosperms
Leucine-Rich Repeats Receptor-Like Kinase (LRR-RLK) genes represent a large and complex gene family in plants, mainly involved in development and stress responses. These receptors are composed of an LRR-containing extracellular domain (ECD), a transmembrane domain (TM) and an intracellular kinase domain (KD). To provide new perspectives on functional analyses of these genes in model and non-model plant species, we performed a phylogenetic analysis on 8,360 LRR-RLK receptors in 31 angiosperm genomes (8 monocots and 23 dicots). We identified 101 orthologous groups (OGs) of genes being conserved among almost all monocot and dicot species analyzed. We observed that more than 10% of these OGs are absent in the Brassicaceae species studied. We show that the ECD structural features are not always conserved among orthologs, suggesting that functions may have diverged in some OG sets. Moreover, we looked at targets of positive selection footprints in 12 pairs of OGs and noticed that depending on the subgroups, positive selection occurred more frequently either in the ECDs or in the KDs.
INTRODUCTION
Receptor-like kinases constitute one of the largest gene families in the plant kingdom. They are typically composed of an amino-terminal ECD, a TM, and an intracellular domain (ICD) containing the KD. Several phylogenetic studies of the RLK family were conducted, initially focusing on Arabidopsis and later including other plant species Bleecker, 2001b, 2003;Shiu et al., 2004;Lehti-Shiu et al., 2009;Liu et al., 2009;Sakamoto et al., 2012;Zan et al., 2013). Using a phylogeny inferred from their KD alignment the Arabidopsis RLK genes were classified into 44 SGs or subfamilies (Shiu and Bleecker, 2001a). Fifteen SGs have been described containing common motifs in their ECD. The ECD of the largest SG possesses LRR and this SG has therefore been named LRR-RLK (Kobe and Deisenhofer, 1994;Kajava, 1998;Shiu and Bleecker, 2001a;Shiu et al., 2004;Lehti-Shiu et al., 2009). The first members of this large family were cloned in the 90s and their signaling pathways were extensively studied. Those members are ERECTA (ER), CLAVATA1 (CLV1), BRASSINOSTEROID INSENSITIVE 1 (BRI1), SOMATIC EMBRYOGENESIS RECEPTOR-LIKE KINASE (SERK), HAESA-RLK5, and Xa21 (Horn and Walker, 1994;Song et al., 1995;Torii et al., 1996;Clark et al., 1997;Li and Chory, 1997;Schmidt et al., 1997). To date, functions have been assigned to ∼35% of the ∼230 LRR-RLK members in A. thaliana and -to a lesser extent -other species (Wu et al., 2016). They are important mediators of cell-cell communication to relay developmental cues and environmental stimuli or to activate defense/resistance against pathogens (Mu et al., 1994;Muschietti et al., 1998;Antolin-Llovera et al., 2014a;Belkhadir et al., 2014;Jaouannet et al., 2014).
Functional analyses conducted on LRR-RLK genes over the last twenty years raveled the role of the domains located in the ECD of these receptors. The LRR domains are highly versatile in number allowing a whole range of protein-protein interactions. These include homo-or hetero-dimerization of receptors, in addition to ligand binding. Furthermore, some LRR-RLK receptors possess island domains -devoid of LRRs -located between LRR motifs (Li and Chory, 1997). They were identified in the BRI1 receptor as the binding site for the brassinosteroid (BR) hormone (Kinoshita et al., 2005;Hothorn et al., 2011;She et al., 2011). Few studies have also described the functions of other ECD domains. For example, two Cys-pair have been reported. The first one is located in the N-terminal part of the LRR-RLKs, approximately 60 AA from the start codon between the SP and the first LRRs. The second one -if present -can be found between the last LRR and the TM domain (Dievart and Clark, 2003). Mutations in the Cys-pairs have been shown to affect the function of some LRR-RLKs, e.g., FLAGELLIN SENSING 2 (FLS2), a gene participating in the perception of the bacterial elicitor flagellin. However, there is also an example of a LRR receptor like protein (CLAVATA 2) for which mutations in Cys-pairs had no effect on the function of protein -at least in the meristem and roots (Noguchi et al., 1999;Song et al., 2010;Sun et al., 2012). In BRI1, a mutant harboring a mutation in Cys-pairs appears to be functional but seems to be retained in the endoplasmic reticulum and degraded. This suggests that this mutant protein does not pass the endoplasmic reticulum quality control (Hong et al., 2008). Although no general conclusions can be drawn so far on the importance of this motif, all the variations observed in Cys-pairs likely play a role in the folding, trafficking and/or the binding to other proteins. It was therefore suggested that this motif influences the signaling pathways activated downstream of the LRR-RLKs . Another ECD, the MLD lying in between the SP and the LRRs, has also been described in one LRR-RLK SG (Hok et al., 2011). In legumes and actinorhizal plants, the SYMBIOSIS RECEPTOR LIKE KINASE (SYMRK, also known as NORK or DMI2) receptor, involved in phosphateacquiring arbuscular mycorrhiza and in nitrogen-fixing root nodule symbiosis, possesses a malectin domain but the exact function of this receptor is still unclear (Antolin-Llovera et al., 2014a). It has been recently demonstrated that the SYMRK receptor is likely cleaved at the plasma membrane to release the N-glycosylated MLD (Antolin-Llovera et al., 2014b). Moreover, this cleavage would permit a physical interaction between the SYMRK and the LysM-type RLK NOD FACTOR RECEPTOR 5 and induces a rapid degradation of the SYMRK protein lacking its MLD. Thus, all the domains lying in the ECD with the LRRs play essential and complementary roles for specific LRR-RLK receptor functions.
Their central role in plant development and perception of environmental condition or stresses, their ubiquity in all angiosperms, and the complexity of their relationships make LRR-RLK genes an interesting candidate family to be studied in a phylogenetic framework (Shi et al., 2014). Such an analysis will be helpful to identify groups of orthologous genes and to compare functions between orthologs. However, inferring the phylogeny of such a large family raises several challenges. First, the vast number of sequences to be analyzed poses a problem of computational time and space. Second, the high rate of gene gains and losses during the evolution of the family, species-specific characteristics, and annotation errors result in complex orthologous relationships that are not always identified correctly by automatic gene annotation. For these reasons, large gene families -such as LRR-RLKs -are not well characterized on platforms like GreenphylDB or Phytozome dedicated to automatic clustering (Conte et al., 2008;Rouard et al., 2011;Goodstein et al., 2012) and significant manual expertise is required to produce reliable results.
In the present article, we conducted a phylogenetic analysis of the LRR-RLK genes from 33 plant genomes with the objective to investigate the characteristics of genes belonging to the same OGs, expected to be conserved among most monocot and dicot species analyzed. To do so, we first looked for and identified 101 OGs of genes present in most genomes analyzed defined them as the LRR-RLK angiosperm "core" sets. We observed that ECD structural features were not always conserved in some OGs, suggesting that functions may have diverged among these orthologs. We also looked at selection footprints that led to the differentiations of pairs of OGs. This allowed us to investigate the putative role and function of uncharacterized genes in recently sequenced genomes from experimentally characterized LRR-RLK genes in model organisms.
LRR-RLKs Extraction, Phylogeny, and OGs
On each of the 33 plant proteomes, the hmmsearch program was run to extract peptide sequences containing both LRR(s) and a KD (Eddy, 2009). Sequences containing both LRRs and KD were classified into SGs using a global phylogenetic analysis (Fischer et al., 2016). First, the KD of all these sequences was aligned using MAFFT with a progressive strategy (Katoh et al., 2002). Then the alignment was cleaned with TrimAl configured to remove every sites with more than 20% of gaps or with a similarity score lower than 0.001 (Capella-Gutierrez et al., 2009). A similarity matrix was computed using ProtDist with a JTT model, and then a global distance phylogeny was inferred using FastME configured with default settings and SPR movements to optimize the tree topology (Felsenstein, 1989;Desper and Gascuel, 2002). SGs were defined manually in the global phylogeny using the Arabidopsis genes as reference (Shiu et al., 2004;Lehti-Shiu et al., 2009;Fischer et al., 2016). To extend this dataset to receptor kinases devoid of LRRs in their ECD (sequences annotated "No_LRR"), the BLASTP algorithm (default parameters) has been run SG per SG, using each of the 7,767 KD sequences to search a database composed of the 33 proteomes as query (Altschul et al., 1997). Blast outputs were parsed to keep only homolog sequences sharing more than 90% identity with the query sequence. The new "No_LRR" sequences retrieved by blast were assigned to the same SG as the query sequence. Then, phylogenies were inferred for each of the 20 SGs. Each group of sequences was aligned using MAFFT with an iterative strategy (maximum of 100 iterations) (Katoh et al., 2002). Alignments were cleaned using TrimAl configured this time to remove sites with more than 80% of gaps (Capella-Gutierrez et al., 2009). Then maximum likelihood phylogenies were inferred using PhyML 3.0, configured with LG+gamma model, and the best of NNI and SPR topology optimization (Guindon and Gascuel, 2003). Statistical branch supports were computed using the aLRT/SH-like strategy (Guindon et al., 2010;Anisimova et al., 2011). Each of the 20 phylogenetic trees has been reconciled with the species tree using RAP-Green (Dufayard et al., 2005) 1 . By comparing the gene tree with the species tree, this analysis allows us to root phylogenetic trees (Dufayard et al., 2005). We tested this approach of rooting (by minimizing the number of inferred duplications and losses) and compared it with rooting with outgroups (data not shown). The two methods provided very close root locations that did not change the overall conclusions.
To define the monocots dicots (MD) OGs, monocots/dicots bifurcations (branch support threshold >0.85) have been manually located in each of the 20 SG-specific trees. To be considered as MD OGs, the minimum number of monocots and dicots species represented was 3 and 4, respectively, to avoid keeping groups of misannotated sequences as MD OGs. Thus, 101 MD OGs have been defined. For each of them, the number of sequences in each of the 31 studied angiosperm species was recorded. If no sequence was discovered in one species, it was considered lost in this species.
Structural Features
Number of motifs and positions (LRRs (PF00560.24) and KDs (PF00069.16)) are outputs of the hmmsearch program (default parameters) (Eddy, 2009). Island domains were determined based on predicted LRR positions in sequences. For SP and TM domains, the TMHMM and TOPPRED softwares have been used (Claros and von Heijne, 1994;Krogh et al., 2001). For the malectin domains in SG_I and SG_VIII-2, sequences of the domains were extracted in SMART and aligned to build hmm motifs with the hmmbuild program (Eddy, 2009;Letunic et al., 2009). For Cys-pairs, hmm motifs were built based on subsets of sequences known to possess these motifs (Eddy, 2009).
Test of Positive Selection on Ancestral Branches
Twelve sub-trees were considered: the OGs selected were those with a 'simple' organization, i.e. with a gene topology fitting approximately with the species tree. Three to four sequences among those that are the most closely related to the OG were selected from the whole SG tree as outgroup (Supplementary Figure S4). The sequences were re-aligned and the alignment was cleaned as described previously (Fischer et al., 2014(Fischer et al., , 2016. We ran codeml branch/site models implemented in the PAML4 software (Yang, 2007). For each OG, the following branch partition was defined: all branches but one were tagged as 'background' branches and the branch between the duplication node and the node corresponding to the split between monocot and dicot tagged as 'foreground' branch. Then two models were compared: the null model (A 0 ), in which sites on the foreand background branches evolved under the same selective pressure (purifying or neutral), and a model including positive selection (model A) in which some sites on the foreground branch evolved under positive selection whereas sites on the background branches still evolved under purifying selection or neutrality. The most likely model was inferred by a likelihood ratio test (LRT). To take into account multiple testing, a Bonferroni correction was applied: the significance threshold of 0.05 was divided by the number of tested branches (24). Sites detected to be under positive selection at the codon level were manually curated for alignment quality and reliability. In branches identified to have evolved under positive selection, Bayes empirical Bayes was used to calculate the posterior probabilities at each codon and detect those under positive selection (i.e., those with a posterior probability of ω > 1 strictly above 95%).
More than 200 LRR-RLK Genes on Average Per Angiosperm Species
We conducted a phylogenetic analysis of the LRR-RLK gene family in 33 fully sequenced plant genomes to classify them into SGs and to highlight and describe general characteristics of these LRR-RLK gene sets (Fischer et al., 2016). Briefly, besides 31 angiosperms genomes -represented by eight monocots (including six poaceae) and 23 dicots -one bryophyte genome of Physcomitrella patens (PHYPA, moss) and one lycopodiopsida genome of Selaginella moellendorffii (SELML, spikemoss) were included (Supplementary Table S1, see Section "Materials and Methods" for details and five-digit species code). As it has been done previously, we based our classification of LRR-RLK genes into SGs on the KD phylogeny (Shiu et al., 2004;Lehti-Shiu et al., 2009;Fischer et al., 2016). In our previous study (Fischer et al., 2016), the LRR-RLK dataset contained 7,767 sequences possessing at least one LRR in their ECD. Since we focus on structural features of ECDs and presence/absence of the genes in LRR-RLK OGs in this present study, we included LRR-RLK homologues for which LRRs were completely lost or were degenerated in this new dataset. Thus, 593 sequences (prefixed "No_LRR") were added to the original set of 7,767 sequences which lead to a total of 8,360 sequences (Supplementary Table S2). Within each of the 20 SGs, KDs were aligned and SG-specific trees were obtained using a likelihood-based method (PHYML) (Figure 1A and Supplementary Material). Altogether, these LRR-RLK genes represent on average 0.71 and 0.66% of the monocot and dicot proteomes, respectively. Interestingly, in moss (PHYPA) and spikemoss (SELML), the proportions of LRR-RLKs per genome (0.41 and 0.36%, respectively) are approximately half the ratio observed in angiosperms. Likewise, the average number of LRR-RLK genes in angiosperms is 263.6, with 260.7 LRR-RLK proteins [±20.2 (SE)] in dicots and 268.8 [±18.2 (SE)] in monocots. In PHYPA and SELML, 134 and 81 LRR-RLK genes have been retrieved, respectively. There is no significant difference in the average number of LRR-RLK genes between monocots and dicots but the number almost doubled in most angiosperms compared to PHYPA and SELML. It has to be noted that in some genomes (e.g., CARPA and LOTJA), the number of LRR-RLK genes is particularly low compared to other angiosperm genomes, suggesting that retention rates vary among genomes, and that many losses may have occured in some genomes (Fischer et al., 2016). Nevertheless, our results highlight the fact that, after the first wave of expansion in early land plants (Embryophyta), a second large amplification occured in angiosperm genomes which shaped the current LRR-RLK family size of more than 200 gene copies on average per genome.
Among the 20 SGs, SG_III, SG_XI, and SG_XIIa are the largest as they contain ∼50% of the total number of LRR-RLK genes in the analyzed plant genomes ( Figure 1A). The extensive expansions leading to their size do not follow the same amplification pattern ( Figure 1B). This observation is highlighted in Figure 1B by the color code used for each species in the 20 SG-specific trees (with branches of monocots species in pink and red, and branches of dicots species in yellow, blue and green). Our results reveal that the high numbers of SG_XIIa genes is the consequence of many lineage-specific expansions (LSE) (See Fischer et al., 2016 for details). These LSEs are relatively recent as they can be observed in phyla as well as species-specific lineages. On the contrary, in SG_III and SG_XI, expansions occured mainly before the early divergence of angiosperm lineages, even though LSEs can also be observed at different levels of resolution in the trees. Therefore, these numerous and diverse modes of expansions lead to complex paralogous and orthologous relationships.
OGs of Monocot and Dicot Genes Retained Along Angiosperm Evolution
With the aim of transfering functional annotation from well studied genes from model species to orthologous genes in other genomes, we first analyzed in depth orthologous relationships between monocots and dicots LRR-RLK genes in each SG. This analysis led us to define what we named the "core set" of LRR-RLK genes in angiosperms: i.e., orthologous genes which have not been completely lost in either monocots or dicots throughout the angiosperms evolutionary history. To do so, the 20 SG-specific trees were scaned to locate monocots/dicots bifurcations (Figure 2). Based on this analysis, 101 OGs containing monocots and dicots sequences (named MD OGs) were characterized and defined as the "core" set of LRR-RLK genes in angiosperms.
The SG analysis revealed that these 101 MD OGs are present in 19 of the 20 SGs, with the majority of them in SG_III and SG_XI (Table 1). This highlights again the fact that most of the LRR-RLK genes which underwent expansions before the monocots/dicots split have been retained in these SGs. In order to go further in the description of orthologous relationships, we qualified OGs as either "simple" or "complex". In "simple" OGs, the presence or absence of duplications within the monocot or dicot clades can clearly be inferred from the phylogenetic tree.
On the other hand, if several duplications occurred disorderly with no obvious connection to the species tree we described these OGs as "complex" (OG_c). Interestingly, these OG_c are over-represented in SG_IV, VIII-1, VIII-2, and XI. A total of 6739 genes are contained in the 101 OGs, representing 82.7% of the entire LRR-RLK gene family. However, while 2956 genes are included in the 24 OG_c (average of 123.2 genes per OG), 3783 genes belong to the 77 non-complex OGs (average of 49.2 genes per OG), highlighting differences in expansion/retention rates between these OGs. Moreover, looking at the percentage of genes contained in OGs per SG reveals that, except for SG_VIII-2 and XIIb, more than 70% of the LRR-RLK genes belong to OGs, with genes mainly in complex OGs in SG_I, IV, VIII-1, VIII-2, XI, and XIIa ( Figure 3A). In some SGs, some OGs contain a very large number of genes, such as 405 genes in one of SG_I OG (SG_I-3c), or 307 and 708 genes in two SG_XIIa OGs ( Figure 3B and Supplementary Table S2 for details). In these large OGs, many species-specific duplications occured, and among the 10 OGs containing more than 100 genes, 8 are complex. In SG_I-3c for example, several genes have been studied in Arabidopsis, such as IMPAIRED OOMYCETE SUSCEPTIBILITY 1 (IOS1), FLG22-INDUCED RECEPTOR-LIKE KINASE 1 (FRK1), and light-repressible receptor protein kinase (LRRPK) (Deeken and Kaldenhoff, 1997;Asai et al., 2002). All have been reported to be involved in abiotic and biotic responses in dicots but no gene from the same OG have been described so far in monocots. However, the fact that these genes are classified into the complex mode of expansion suggests that in monocots too, these genes could be involved in stress response. Branches are color coded according to Figure 1 with branches of monocots (M) species in pink and red, branches of dicots (D) species in yellow, blue and green, and branches of moss and spikemoss in light and dark brown, respectively. OGs containing M and D genes are represented as MD OGs and numbered by SG. Within these OGs, the orthologous relationships can be "simple" or "complex" (OG_c). Note that the presence and number of paralogs after monocots and dicots divergence is not taken into account. Numbers at monocots/dicots bifurcations represent nodes with statistical branch supports aLRT/SH-like >0.85.
More than 10% of the LRR-RLK Core Sets Are Absent in Brassicaceae
For each OG, we investigated whether some species were lacking members, focusing particularly on the Brassicales, for which six species are included in our analysis (Figure 4). Moreover, this clade contains the model plant Arabidopsis which is the reference for many studies on LRR-RLK functions (Wu et al., 2016). Interestingly, among the 101 OGs, 14 (13.8%) have been completely lost in the Brassicaceae, and 3 of them are even absent in all the Brassicales. This observation is an incentive to the extensive study of these receptors in other plants than Arabidopsis, adding an argument to the fact that functions or interactions are sometimes phylum-specific. For example, the SG_I-2 OG contains the SYMBIOSIS RECEPTOR LIKE KINASE (SYMRK, also known as NORK or DMI2) receptor which is involved in actinorhizal plants and legumes, respectively, in phosphate-acquiring arbuscular mycorrhiza and nitrogen-fixing root nodule symbiosis (Antolin-Llovera et al., 2014a). For this gene, we noticed that besides being absent in the Brassicaceae, which do not form mycorrhizal associations and root nodule symbiosis with rhizobia, some other characteristics of these receptors have been observed in monocots (see below).
Inference of Functional Information from Experimentally Characterized LRR-RLK Genes to Uncharacterized Genes
The use of orthologous relationships to infer functional annotations relies on the fact that orthologs are expected to carry equivalent functions in different organisms. However, this can only be reliably inferred if, at least, structural characteristics and domain architecture are conserved between othologs. In our analysis, the phylogeny of the LRR-RLK proteins was computed on the well conserved KDs. However, LRR-RLK sequences are composed of several domains and, notably, of LRRs in their ECD. One could wonder whether the domains belonging to genes of the same OG are conserved. First, we took a detailed look at the predicted number of LRRs of all these receptors. The number of LRR motifs per protein is an important feature for homo-and hetero-complex formation between LRR-RLKs (Macho and Zipfel, 2014). Second, we investigated the presence of island domains in between LRRs. These domains have been described to be the binding site for the BR hormone in some receptors (Kinoshita et al., 2005; et al., 2011). Third, we analyzed the presence of the MLD, a carbohydrate-binding domain, and the GDPC, a protein cleavage motif, which were shown to be located before the LRRs in some SG_I receptors. Fourth, we investigated the presence of Cys-pairs surrounding the LRRs in some SGs. The presence and organization of these domains is functionally important and has to be taken into account for transfering functional informations between orthologous genes. The description of structural features localized in the ECDs of these receptors allows a subclassification which, although reflecting the phylogeny of the KD of these receptors, also takes the structural differences of the ECD into account. Therefore, we subdivided the 20 SGs further according to these characteristics in their ECD (Figure 5 and Supplementary Table S3 for details).
Number of LRR Motifs
First, for the moss and/or spikemoss genes which are orthologous to the angiosperm core sets of genes, we investigated if some sets of receptors varied in the number of LRR motifs. All those genes have a common ancestor, predating the divergence between moss and/or spikemoss and angiosperms and their KDs have all evolved in concert for ∼450 MYA. Despite speciation events, the close phylogenetic relationship of all these LRR-RLK genes with moss and spikemoss orthologs suggests that signaling pathways downtream of these receptors could be conserved. In the OG containing the FLS2 receptor (SG_XIIa), we noticed that the number of LRRs in the PHYPA orthologs was lower than in angiosperms ( Figure 6A). This peculiar differences noted in the PHYPA ECDs of the FLS2 orthologs could affect ligand binding or even suggest that ligands are not conserved. This would be in agreement with publications stating that the moss Physcomitrella patens does not carry an FLS2 ortholog and also shows no response to flg22 (Boller and Felix, 2009;Tanigaki et al., 2014). All other core gene sets for which Physcomitrella/Selaginella ECDs are conserved compared to angiosperms, provide interesting cases for which it would be worth to verify if the functions described for monocots and/or dicots members are entirely conserved in bryophytes and lycopsids. Second, we focused on the number of predicted LRRs in the 7,767 LRR-containing sequences. Even if the number of LRRs per sequence is very variable, the distribution of the number of LRR per sequences shows three peaks at 5, 20, or 21 ( Figure 6B). This observation suggests that these numbers of LRRs per sequences may be optimal for the 3D conformation of these receptors and their interactions in homo-or heterocomplexes. To our knowledge, this observation has not been explicitly made in animal LRR-containing proteins but could also be true (Ng et al., 2011). In plants, one hetero-oligomeric protein complex has been described between the BR receptor BRI1 (SG_Xb M4 in Figure 5) and BAK1/SERK3 or SERK1 receptors (SG_II B.3 in Figure 5) (Aker and de Vries, 2008;Chinchilla et al., 2009;Santiago et al., 2013;Sun et al., 2013). The complex crystal structure of SERK1 and BRI1 has revealed that the BRI1 C-terminal LRRs form a docking platform for the LRRs of the SERK1 co-receptor (Hothorn et al., 2011;Santiago et al., 2013). The SERK proteins have also been shown to serve various other BR-independent functions by forming heterocomplexes with SG_XIIa receptors (FLS2 and EFR, structure O in Figure 5) and the PEP1 RECEPTOR proteins (PEPR1, SG_XI N4 in Figure 5) (Chinchilla et al., 2007;Heese et al., 2007;Albrecht et al., 2008;Schulze et al., 2010;Roux et al., 2011;Koller and Bent, 2014). In rice, OsSERK2 (SG_II B.3 in Figure 5) forms a constitutive complex with the LRR-RLK Xa21 (SG_XIIa O in Figure 5) (Chen et al., 2014). Thus, these SERK co-receptors (4-5 LRRs) seem to play a central role in the regulation of multiple LRR-RLKs (>20 LRRs) by interacting directly with them (Aker and de Vries, 2008;Chinchilla et al., 2009;Kim et al., 2013;Santiago et al., 2013;Sun et al., 2013). Interestingly, SERK3/BAK1 has also been found in complex with the BAK1-INTERACTING RECEPTOR KINASE1 and 2 (BIR1 and BIR2) proteins, two receptors belonging to SG_Xa, another SG possessing five LRRs in its ECD (structure L in Figure 5) (Gao et al., 2009;Halter et al., 2014). The association between SERK3/BAK1 and BIR1/2 prevents the FLS2-BAK1 interaction before elicitation of the Indeed, other SGs such as SG_XI, SG_XIIIa, and SG_XIV corresponding to structures N2, Q and S1, respectively (Figure 5), also possess five LRRs in their ECD. In SG_XI-22 (structure N2 in Figure 5), the receptor SUPPRESSOR OF BIR1 1 (SOBIR1) also named EVERSHED (EVR) has been shown to be involved in floral organ shedding and in the regulation of several resistance signaling pathways with the BIR1, BAK1, and FLS2 receptors (Gao et al., 2009;Leslie et al., 2010). Moreover, the SOBIR1 receptor is also described as a coreceptor/adaptor forming complexes with many LRR-Receptor like proteins (LRR receptors devoid of a KD), suggesting that the SERK-type receptors (5-LRRs) could be considered as general adaptors important for functionality in complex with their receptor partners (Liebrand et al., 2013;Gust and Felix, 2014). In SG_XIIIa (structure Q), the FEIs receptors (FEI1 and FEI2, named after the Chinese word for fat), whose single mutants were indistinguishable from the wild type in development, work against a co-receptor function (Xu et al., 2008). However, for other SGs possessing five LRRs in their ECD, the question about the putative coreceptor function will remain unanswered until further molecular characterization is performed. The receptors belonging to SG_II B.3, contrary to SG_II B.1 and B.2, contain also a Pro-rich motif in their ECD. The question on the functionality of the Pro-rich domain in the ECD of the SERK proteins also remains to be answered. This domain of unknown function could provide a flexible hinge to the ECD (Schmidt et al., 1997;Kay et al., 2000;Baudino et al., 2001;Hecht et al., 2001;Chevalier et al., 2005). Interestingly, other SGs possess these kinds of motifs, e.g., SG_VI F3.2, for which no receptor has been studied yet.
Island Domains
We positioned all the predicted LRRs on the 7,767 proteins and searched for islands between them (Supplementary Table S4). These islands are of particular importance as they are the brassinolide hormone binding sites for the BRI1 and BRI1-like (BRL) receptors (structures M2-M4 in Figure 5) belonging to SG_Xb (Hothorn et al., 2011;She et al., 2011). We found that in SG_Xb, all sets of orthologs possess an island encompassing two (Structures M1 in Figure 5) or at least three (structures M2-M5) LRRs. One could ask if the island domain in genes of structure M5, for which no function has been described up to now and very similar to the M2-M4 structures, is also a binding site for the BR hormone. The remaining OGs in SG_Xb (structure M1) contain the three Arabidopsis genes: PSKR1, PSKR2, and PLANT PEPTIDE CONTAINING SULFATED TYROSINE 1 RECEPTOR (PSY1R). These receptors have overlapping functions in promoting cellular proliferation, longevity and expansion (Matsubayashi et al., 2006;Amano et al., 2007;Hartmann et al., 2013). The PSKR subfamily is also required for PSK peptide signaling in sexual reproduction in plants (Stuhrwohldt et al., 2015). Moreover, these receptors play a role in modifying responses to biotic pathogens and wounding (Loivamaki et al., 2010;Mosher et al., 2013). In the three Arabidopsis PSKRs, one island of ∼60 AA was detected in addition to other smaller ones specific to each receptor. These islands could be important for hormone binding like in BRI1 and BRL receptors (Hothorn et al., 2011;She et al., 2011). Indeed, it has been shown that the BR hormone could play a role in the signaling pathways activated downstream of these receptors (Hartmann et al., 2013). In SG_IX (structure K) and SG_XV (structure T1), islands encompassing the size of at least two LRRs are also present. In SG_IX, the crystal structure of the Arabidopsis TRANSMEMBRANE KINASE 1 [TMK1, also known as BLK1 (BARK1-like Kinase 1)] suggests that the islands could be critical for structural integrity. In SG_XV, the crystal structure of RECEPTOR-LIKE PROTEIN KINASE 2 (RPK2, also known as TOAD2) suggests that the islands could be the site for ligand binding as in BRI1 (Liu et al., 2013;Song et al., 2014).
Additional Domains
As mentioned previously, a MLD lying in between the SP and the LRRs has been described in some SG_I receptors (Figure 5) (Hok et al., 2011;Antolin-Llovera et al., 2014a). One of them is the SYMRK receptor (structure A1.1) involved in mycorrhizal associations and rhizobium-legumes symbiosis, but its exact function is still unclear (Antolin-Llovera et al., 2014a). Recently, it has been demonstrated that the SYMRK receptor is cleaved at a GDPC motif placed at the end of the MLD to release the N-glycosylated ectodomain in the absence of symbiotic stimulation (Antolin-Llovera et al., 2014b). Moreover, protein cleavage on this motif would permit a physical interaction with the LysM-type RLK NOD FACTOR RECEPTOR 5 (NFR5) and induces a rapid degradation of the SYMRK protein lacking its MLD. In this form, SYMRK could act as a co-receptor to initiate symbiotic signaling with NFR5 and would mirror the role played by the receptors of the SERK family. After MLD release, the structure of SYMRK could indeed resemble that of BAK1/SERK3. Our analysis of structural features of ECDs reveals that the monocot orthologs of the SYMRK receptor were much shorter than the dicot ones and that the monocot receptors are devoid of the MLD present in dicots (structure A1.2). Thus, the SYMRK activation mechanism will have to be further investigated in monocots to evaluate if it can fit into the dicots model, or if other receptors possessing a MLD (SG_I or others, see below) are involved in this process.
We therefore looked for malectin domains in all LRR-RLK sequences and found that SG_VIII-2 receptors also contain one. However, in SG_VIII-2, this domain is not located just after the SP but between the LRRs and the TM domain (Structure J in Figure 5). We also found the GDPC cleavage motif in most of the SG_IV, V and SG_VIII-1 receptors. However, contrary to SG_I, the GDPC site is located just before the first Cys-pairs in all other SGs. In these SGs, the Cys of the GDPC motif is the first site of the Cys-pair. It is still unknown if these receptors are also cleaved at this site and what the functional consequences would be. In SG_VIII-2, which contains a malectin domain C-terminal of the LRRs, no GDPC sites are present. This does not exclude the possibility that another cleavage site could be used to truncate the protein. Thus, the function of the malectin domains in SG_VIII-2 will have to be explored in the future to decipher their exact functional role and their potential involvement in protein stabilization.
Positive Selection in the Divergence between Ancestrally Duplicated OGs
Twelve pairs of MD OGs present in almost all monocot and dicot species, and harboring a gene topology fitting approximately the species tree, appear to be issued from ancestral duplications predating the monocot/dicot divergence ( Table 2). As these OG pairs had a similar ECD structural organization and were kept in almost all species studied here, we searched for potential positive selection footprints in the divergence leading to their differentiation. Although these genes do not all have a known function, it is expected that they underwent amino-acid changes leading to their sub-and/or neo-functionalization, and one can wonder if and how positive selection could have driven these changes.
To answer this question, we tested whether some sites underwent positive selection on the two branches starting from the ancestral duplication and ending at the monocot/dicot divergence node of each OG. The detailed results of this analysis are presented in Table 2 and Supplementary Table S5. Two pairs showed no signal on either of the two branches (SG_Xa-1/2, SG_XIIa-2/3). For the pairs SG_II-3/4 and SG_XIIIa-1/2, a signal was detected for one branch only but the signal on SG_XIIIa-1/2 may be a false positive or the sign of a lack of power, since no sites appeared to be significant (see Materials and Methods for details). The eight other pairs showed a signal of positive selection on each branch. Although the model indicating positive selection performs significantly better than the null model, two pairs have no significant sites for one of the two tested branches. This again indicates either a false positive or a lack of power. It is also possible that positive selection acted on a large number of sites which results in none of them exceeding the significance threshold. Finally, five pairs (SG_III-8/9, SG_XI-8/9, SG_XI-14/15, SG_XI-17/18, and SG_XIIIb-1/2) have a strong signal with up to 26 sites validated after manual curation. This result shows that in about half of the tested cases, several amino acid changes fixed in the divergence between these genes are compatible with a signal of positive selection.
A total of 141 sites were manually validated as having experienced an episode of positive selection during MD OG genes divergence. For the five pairs with a strong signal, the repartition of these sites across the different domains of the LRR-RLK protein showed that the LRRs and KDs are the most affected (Figure 7). More than half of the sites (78) fall in the ECD, among which 68 are in the LRR domain; 51 sites fall in the ICD, among which 42 are in the KD ( Table 2). Considering that these LRR and KD are the largest domains, we normalized the number of positively selected sites by domain size. Kinase and LRR appeared then to be affected equally (Chi-square test, p = 0.26) by positive selection. This result is very different from positive selection signatures observed in the recent paralogs, for which LRR is the most strongly affected domain (Fischer et al., 2016). The number of sites laying in the LRR domain allowed us to look for any specific distribution across the 24 amino acids composing the motif. The repartition of the sites affected by positive selection within the LRR is not homogeneous and the majority of them (67) fall in the 13 non-canonical positions (Supplementary Table S5). However no notable pattern emerges from their distribution (data not shown). Again, this result contrasts with what is observed in lineage-specific expanded genes for which four positions are predominantly affected (Fischer et al., 2016). The remaining domains are affected by a number of sites varying from 1 to 9. This approach revealed a prevalence of sites targeted by positive selection in the ECD for the three couples of genes belonging to SG_XI as well as SG_XIIIb. On the opposite, a tendency to target ICDs can be observed for SG_III. Two pairs of OGs for which positive selection footprints are detected in the ECD correspond to OGs whose duplication gave rise to the PXY and PXLs clades in SG_XI-17/18 (Fisher and Turner, 2007;Jung et al., 2015); and to the ERECTA (ER) and ERECTA-like (ERL) clades in SG_XIIIb-1/2 (Torii et al., 1996;Sanchez-Rodriguez et al., 2009). Other pairs of OGs concern differentiation of the PEPR1 and 2 clade (SG_XI-9), of the STERILITY-REGULATING KINASE MEMBER 2 (SKM2) gene or of the DspA/E-interacting protein of Malus x domestica Borkh 1 and 3 (DIPM1 and 3) clade (SG_III-9), from their respective sister clades, SG_XI-8, SG_XI-15, and SG_III-8 (Meng et al., 2006;Krol et al., 2010;Kang and Hardtke, 2016). In these clades, no gene has been described yet. The strong signal of positive selection detected for these five groups of genes indicates that the divergence between ancestral copies may have procured a selective advantageous: in the domain involved in ligands or partners binding for ECDs, or in the domain affecting downstream signaling pathways for ICDs. Indeed, during the early expansion of LRR-RLK that took place before angiosperm split, some duplicated LRR-RLK differentiated by fixation of a higher number of non-synonymous than synonymous mutations at some amino acid sites, indicating the emergence of probably new advantageous functions.
CONCLUSION
In this report, we provide a framework to aid in the classification and give new insights to new prospects for functional analysis of some plant LRR-RLKs. We have defined the "core set" of the large LRR-RLK gene family and classified these receptors based on their ECD features. These analyses reveal that even if the KDs of the LRR-RLKs are phylogenetically related, the ECDs may have been subjected to major (e.g., loss of LRRs revealed by the structural features characterization) or minor (e.g., point mutations revealed by the traces of positive selection analysis) modifications during the evolution of orthologs. These alterations could affect ligand recognition sites, dimerization with other receptors, and/or other processes involved in signal transduction. Indeed, the proper signal transduction via receptor kinases is not restricted to the binding of ligands to receptors located at the plasma membrane. Tightly regulated steps for proper folding of the proteins, trafficking from endomembranes to plasma membranes, and finally internalization and recycling of the receptors after ligand binding play essential roles in signal transduction (Shah et al., 2002;Robatzek et al., 2006;Salomon and Robatzek, 2006;Irani and Russinova, 2009;Beck et al., 2012;Di Rubbo et al., 2013;Offringa and Huang, 2013;Martins et al., 2015). Recently, an enthusiastic wave swept over the plant receptor kinases community concerning endoplasmic reticulum quality control since most of these steps take place in this cellular compartment (Saijo, 2010;Su et al., 2011;Huttner and Strasser, 2012;Tintor and Saijo, 2014). Newly synthetised membrane-resident proteins translocate first into the endoplasmic reticulum where they are subjected to folding and modifications like formation of disulfide bridges. It is also the place where nascent polypeptides are glycosylatedthe most common post-traductional modification which is a crucial event during protein folding and quality control processes (Bieberich, 2014). The LRR-RLKs are part of the large family of plant proteins which are N-glycosylated and many N-glycosylation acceptor sequences are present in all ECDs. In some pattern recognition receptors and receptors involved in developmental processes, proteins with mutations at residues which will create misfolded proteins have been shown to be part of endoplasmic reticulum protein complexes and directed to degradation (Hong et al., 2008(Hong et al., , 2009Li et al., 2009;Nekrasov et al., 2009;Lee et al., 2011;Su et al., 2011;Huttner and Strasser, 2012;Sun et al., 2012;Park et al., 2013;Robatzek and Wirthmueller, 2013;Huttner et al., 2014). The significance of all the structural feature modifications which have been mentioned above are still mostly unknown but classic biochemical and cell biological studies (e.g., domain swapping among orthologs and/or targeted point mutations using CRISPR/Cas9) should help to explore their functions in details and will provide many novel insights into the molecular characterization of LRR-RLKs.
AUTHOR CONTRIBUTIONS
NC, CP, EG, and AD designed the study; GD and AD performed the LRR-RLK extraction; J-FD and AD performed the phylogenetic clustering; NC and IF performed the selection footprint analysis; IF, NC, J-FD, MB, and AD analyzed the data; J-FD, AD, NC, and IF wrote the article.
FUNDING
This work was supported by the German Research Foundation (DFG) grant number FI 1984/1-1 to IF; the Agropolis Resource Center for Crop Conservation, Adaptation and Diversity (ARCAD); the Centre de coopération Internationale de Recherche en Agronomie pour le Développement (CIRAD) Ph.D. fellowship to MB; and the Agence Nationale de la Recherche (ANR, France) ANR-08-GENM-021 to AD, CP, and EG.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: http://phylogeny.southgreen.fr/kinase2/ | 9,592.8 | 2017-04-05T00:00:00.000 | [
"Biology"
] |
Dynamic and Static Topic Model for Analyzing Time-Series Document Collections
For extracting meaningful topics from texts, their structures should be considered properly. In this paper, we aim to analyze structured time-series documents such as a collection of news articles and a series of scientific papers, wherein topics evolve along time depending on multiple topics in the past and are also related to each other at each time. To this end, we propose a dynamic and static topic model, which simultaneously considers the dynamic structures of the temporal topic evolution and the static structures of the topic hierarchy at each time. We show the results of experiments on collections of scientific papers, in which the proposed method outperformed conventional models. Moreover, we show an example of extracted topic structures, which we found helpful for analyzing research activities.
Introduction
Probabilistic topic models such as latent Dirichlet allocation (LDA) (Blei et al., 2003) have been utilized for analyzing a wide variety of datasets such as document collections, images, and genes. Although vanilla LDA has been favored partly due to its simplicity, one of its limitations is that the output is not necessarily very understandable because the priors on the topics are independent. Consequently, there has been a lot of research aimed at improving probabilistic topic models by utilizing the inherent structures of datasets in their modeling (see, e.g., ; Li and Mc-Callum (2006); see Section 2 for other models).
In this work, we aimed to leverage the dynamic and static structures of topics for improving the modeling capability and the understandability of topic models. These two types of structures, which we instantiate below, are essential in many types of datasets, and in fact, each of them has been considered separately in several previous studies. In this paper, we propose a topic model that is aware of both of these structures, namely dynamic and static topic model (DSTM).
The underlying motivation of DSTM is twofold. First, a collection of documents often has dynamic structures; i.e., topics evolve along time influencing each other. For example, topics in papers are related to topics in past papers. We may want to extract such dynamic structures of topics from collections of scientific papers for summarizing research activities. Second, there are also static structures of topics such as correlation and hierarchy. For instance, in a collection of news articles, the "sports" topic must have the "baseball" topic and the "football" topic as its subtopic. This kind of static structure of topics helps us understand the relationship among them.
The remainder of this paper is organized as follows. In Section 2, we briefly review related work. In Section 3, the generative model and the inference/learning procedures of DSTM are presented. In Section 4, the results of the experiments are shown. This paper is concluded in Section 5.
Related Work
Researchers have proposed several variants of topic models that consider the dynamic or static structure. Approaches focusing on the dynamic structure include dynamic topic model (DTM) , topic over time (TOT) (Wang and McCallum, 2006), multiscale dynamic topic model (MDTM) (Iwata et al., 2010), dependent Dirichlet processes mixture model (D-DPMM) (Lin et al., 2010), and infinite dynamic topic model (iDTM) (Ahmed and Xing, 2010). multinomial distribution over subtopics for the d-th doc. in s-th supertopic at epoch t φ t k multinomial distribution over words for the kth subtopic at epoch t 2 α t s static structure weight (prior of 2 θ t d,s ) β t dynamic structure weight between topics at time t − 1 and those at epoch t Table 1: Notations in the proposed model.
These methods have been successfully applied to a temporal collection of documents, but none of them take temporal dependencies between multiple topics into account; i.e., in these models, only a single topic contributes to a topic in the future.
For the static structure, several models including correlated topic model (CTM) , pachinko allocation model (PAM) (Li and McCallum, 2006), and segmented topic model (STM) (Du et al., 2010) have been proposed. CTM models the correlation between topics using the normal distribution as the prior, PAM introduces the hierarchical structure to topics, and STM uses paragraphs or sentences as the hierarchical structure. These models can consider the static structure such as correlation and hierarchy between topics. However, most of them lack the dynamic structure in their model; i.e., they do not premise temporal collections of documents.
One of the existing methods that is most related to the proposed model is the hierarchical topic evolution model (HTEM) (Song et al., 2016). HTEM captures the relation between evolving topics using a nested distance-dependent Chinese restaurant process. It has been successfully applied to a temporal collection of documents for extracting structure but does not take multiple topics dependencies into account either.
In this work, we built a new model to overcome the limitation of the existing models, i.e., to examine both the dynamic and static structures simultaneously. We expect that the proposed model can be applied to various applications such as topic trend analysis and text summarization.
Dynamic and Static Topic Model
In this section, we state the generative model of the proposed method, DSTM. Afterward, the procedure for inference and learning is presented. Our notations are summarized in Table 1.
Generative Model
In the proposed model, DSTM, the dynamic and static structures are modeled as follows. Dynamic Structure We model the temporal evolution of topic-word distribution by making it proportional to a weighted sum of topic-word distributions at the previous time (epoch), i.e., where φ t k denotes the word distribution of the k-th topic at the t-th time-epoch, and β t k,k is a weight that determines the dependency between the k-th topic at epoch t and the k -th topic at epoch t − 1. Static Structure We model the static structure as a hierarchy of topics at each epoch. We utilize the supertopic-subtopic structure as in PAM (Li and McCallum, 2006), where the priors of topics (subtopics) are determined by their supertopic.
Note that the above process should be repeated for every epoch t. The corresponding graphical model is presented in Figure 1.
Inference and Learning
Since analytical inference for DSTM is intractable, we resort to a stochastic EM algorithm (Andrieu et al., 2003) with the collapsed Gibbs sampling (Griffiths and Steyvers, 2004). However, such a strategy is still much costly due to the temporal dependencies of φ. Therefore, we introduce a further approximation; we surrogate φ t−1 This compromise enables us to run the EM algorithm for each epoch in sequence from t = 1 to t = T without any backward inference. In fact, such approximation technique is also utilized in the inference of MDTM (Iwata et al., 2010).
Note that the proposed model has a moderate number of hyperparameters to be set manually, and that they can be tuned according to the existing know-how of topic modeling. This feature makes the proposed model appealing in terms of inference and learning.
E-step
In E-step, the supertopic/subtopic assignments are sampled. Given the current state of all variables except y t d,i and z t d,i , new values for them should be sampled according to where n t k,v denotes the number of tokens assigned to topic k for word v at epoch t, n t k = v n t k,v , and n t d,s and n t d,s,k denote the number of tokens in document d assigned to supertopic s and subtopic k (via s), at epoch t respectively. Moreover, n t ·\i denotes the count yielded excluding the i-th token.
M-step
In M-step, 2 α t and β t are updated using the fixed-point iteration (Minka, 2000).
. (4) Here, Ψ is the digamma function, 2 α t s = k 2 α t s,k , and Overall Procedure The EM algorithm is run for each epoch in sequence; at epoch t, after running the EM until convergence,φ t k,v is computed bŷ and then this value is used for the EM at the next epoch t + 1. Moreover, see Supplementary A for the computation of the statistics of the other variables.
Datasets
We used two datasets comprising technical papers: NIPS (Perrone et al., 2016) and Drone (Liew et al., 2017). NIPS is a collection of the papers that appeared in NIPS conferences. Drone is a collection of abstracts of papers on unmanned aerial vehicles (UAVs) and was collected from related conferences and journals for surveying recent developments in UAVs. The characteristics of those datasets are summarized in Table 2. See Supplementary B for the details of data preprocessing. Table 3: Means (and standard deviations) of PPLs averaged over all epochs for each dataset with different values of K and S. The proposed method, DSTM, achieved the smallest PPL. Figure 2: Part of the topic structure extracted from Drone dataset using the proposed method. The solid arrows denote the temporal evolution of "planning" topics. The dotted arrows mean that "planning" topics are related to "hardware", "control", and "mapping" topics via some supertopics (filled circles).
Evaluation by Perplexity
First, we evaluate the performance of the proposed method quantitatively using perplexity (PPL): For each epoch, we used 90% of tokens in each document for training and calculated the PPL using the remaining 10% of tokens. We randomly created 10 train-test pairs and evaluated the means of the PPLs over those random trials. We compared the performance of DSTM to three baselines: LDA (Blei et al., 2003), PAM (Li and Mc-Callum, 2006), and the proposed model without the static structure, which we term DRTM. See Supplementary C on their hyperparameter setting. The means of the PPLs averaged over all epochs for each dataset with different values K are shown in Table 3. In both datasets with every setting of K, the proposed model, DSTM, achieved the smallest PPL, which implies its effectiveness for modeling a collection of technical papers. For clarity, we conducted paired t-tests between the perplexities of the proposed method and those of the baselines. On the differences between DSTM and DRTM, the p-values were 4.2 × 10 −2 (K = 30), 7.9 × 10 −5 (K = 40), and 6.4 × 10 −7 (K = 50) for the NIPS dataset, and 1.3 × 10 −4 (K = 15), 8.8 × 10 −5 (K = 20), and 4.9 × 10 −6 (K = 25) for the Drone dataset, respectively. It is also noteworthy that DRTM shows more significant improvement relative to LDA than PAM does. This suggests that the dynamic structure with multiple-topic dependencies is essential for datasets of this kind.
Analysis of Extracted Structure
We examined the topic structures extracted from the Drone dataset using DSTM. In Figure 2, we show a part of the extracted structure regarding planning of the UAV's path and/or movement. We identified "planning" topics by looking for keywords such as "trajectory" and "motion." In Figure 2, each node is labeled with eight most probable keywords. Moreover, solid arrows (dynamic relations) are drawn if the corresponding β t k,k is larger than 200, and dotted arrows (static relations) are drawn between a supertopic and subtopics with the two or three largest values of 2 α t s,k . Looking at the dynamic structure, we may see how research interest regarding planning has changed.
For example, word "online" first emerges in the "planning" topic in 2016. This is possibly due to the increasing interest in realtime planning problems, which is becoming feasible due to the recent development of on-board computers. In regard to the static structures, for example, the "planning" topic is related to the "hardware" and "control" topics in 2013 and 2014, whereas it is also related to the "mapping" topic in 2015 and 2016. Looking at these static structures, we may anticipate how research areas are related to each other in each year. In this case, we can anticipate that planning problems are combined with mapping problems well in recent years. Note that we cannot obtain these results unless the dynamic and static structures are considered simultaneously.
Conclusion
In this work, we developed a topic model with dynamic and static structures. We confirmed the superiority of the proposed model to the conventional topic models in terms of perplexity and analyzed the topic structures of a collection of papers. Possible future directions of research include automatic inference of the number of topics and application to topic trend analysis in various domains. | 2,898 | 2018-05-06T00:00:00.000 | [
"Computer Science"
] |
DDNB—Doubly Decentralized Network Blockchain Architecture for Application Services †
: Decentralization and immutability characteristics of blockchain technology has attracted numerous blockchain-based systems and applications to be proposed. However, technical shortcomings such as low transaction speed, complexity, scalability, and vulnerability to certain attacks have been identified, making it challenging to use the technology on general consumer applications and services. To address the problem
Introduction
Bitcoin is one of the most representative cryptocurrency, and has contributed to building a reliable and decentralized cryptocurrency environment on a P2P network without the need of central trusted authorities [1]. Blockchain is the underlying data structure technology of Bitcoin, which is a continuously growing chain of blocks each having a set of transactions that occur between participating peers. Each participant maintains a distributed ledger consisting of sequentially chained blocks, and these blocks are propagated and validated by full nodes with the proof-of-work consensus protocol (in Bitcoin) between validator nodes. When an untrusted third party participates in the service operation, blockchain suppresses any possible data manipulation through the consensus process between the peers.
Characteristics such as immutability, irreversibility, and decentralization has attracted increasing interest in blockchain for purposes other than cryptocurrency, and numerous blockchain-based systems and applications have been proposed [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17]. However, the architecture of the early blockchain systems, such as Bitcoin, also had several drawbacks which made it difficult to apply to applications demanding more sophisticated operations than just 'transfer-of-ownership'. That is because most core parts of the system, like data structure and operations, are designed specifically for the purpose of cryptocurrency to avoid various network attacks (e.g., 51% majority attack).
Despite the drawbacks, the potential of blockchain technology shown by Bitcoin has led to a number of studies to improve it as a distributed or decentralized platform applicable to diverse types of application. By applying the 'smart-contract' concept which makes it possible to derive deterministic and immediate output under pre-defined conditions, the blockchain technology seemed effective for developing decentralized applications (DApps). However, a fundamental problem driven by the consensus protocol still remained; the network was far too slow to implement practical applications at a large scale. Although the Proof-of-Work consensus protocol is an excellent way to deter arbitrary attackers in an public environment with ruthless attack potential, it works as an obstacle upon the DApps, restricting it from providing general services to users.
There have been several studies for more decisive but generous consensus protocols working on a permissioned/private network [18][19][20][21], but these attempts still have the limits of scalability as the network governs the access of the nodes to demarcate from outside. That is, the scalability and performance are incompatible trade-off factors in running an application on blockchain based platforms. In addition to the limitations mentioned above, service providers encounter several difficulties when implementing actual applications running on blockchain based platforms. Service providers not only need to handle the business logic specific to their services, but also tasks of designing and maintaining blockchain systems. In Section 2, the problems that have been encountered to apply blockchain for DApps will be discussed in more detail.
This paper presents 'Doubly Decentralized Network Blockchain' (DDNB), a platform architecture that suppresses untrusted participants' malicious behavior by separating the service and blockchain layers, and allows the service providers to conveniently implement general business logic on DApps. In DDNB, blockchain layer and the service layer are architecturally separated so that they depend only on each other's interfaces. While using the 'chaining' and 'distributed' concepts of blockchain, it provides a structure for service developers to develop DApps without the need of deeply understanding the blockchain technology.
Problem & Motivation
Blockchain has received explosive interest following the success of Bitcoin presented in Satoshi Nakamoto's paper [1]. Then, as the first blockchain based platform which supports smart contract, Ethereum [22] has brought the possibility of immutability and decentralization to various fields. Furthermore in Hyperledger Fabric [23], the network consists of permissioned nodes working upon the 'Practical Byzantine Fault Tolerance' algorithm as its consensus protocol, providing faster and cheaper finality for the blockchain system. Many attempts to improve blockchain have allowed the technology to be used to ensure reliability of arbitrary data in a decentralized network, more than just being used for cryptocurrency. For example, blockchain technology has been used in industry fields such as Internet of Things [2][3][4][5][6]24], E-commerce [14][15][16], Health Care [7][8][9][10], and Digital Rights Management [11][12][13] to take advantage of its properties for developing a variety of services.
However, there are several challenges that need to be addressed before applying blockchain technology on industrial applications that are developed for server-client systems in general. First and the most critical is the low transaction processing speed. Bitcoin adopted PoW (Proof-of-Work) mechanism for its block consensus algorithm. This process requires huge amount of computing power from participating full nodes (The nodes that can verify all of the rules of Bitcoin) in blockchain network, and it limits block creation cycle to 10 min on average (for Bitcoin). Although PoW has the great advantage of substantial ability in suppressing arbitrary participants' malicious behavior, the amount of resources wasted in the race for creating blocks is not negligible.
Second is the issue of scalability [25]. Blockchain is literally a chain of blocks sequentially linked by the hash value of previous block which contains a set of transactions. To maintain a consistent chain of blocks, a consensus process between network participants is required. Hence, system performance degrades significantly as the total network size increases due to the increase in amount of data that need to be exchanged. If the block size is increased to contain more transactions per block, the amount of information in a block can expand but the total propagation delay including the block verification time required for synchronizing blocks between nodes increases [26,27]. The longer the time spent for network synchronization, the longer the blockchain states remain inconsistent among the participants. This can make the blockchain network vulnerable to attacks such as double spending. A remedy to solve these defects is to establish a private network with permissioned nodes. By eliminating the possibility of malicious user threats in advance, fastidious consensus protocols are no longer needed. Instead, it is able to use optimized consensus algorithm within trusted environment resulting in increase of throughput.
Third is related to the irreversibility of blockchain. Blocks recorded upon consensus in blockchain network cannot be modified (without a full fork) even if the parties involved in the transaction agreed to change. While the irreversibility guarantees transparency of blockchain, it works as a critical flaw against several applications that need to provide certain reversible services. For example, when a smart contract is automatically executed upon certain preset conditions, there is no way to reverse the changes even if the outcome was an unexpected result due to a (malicious) bug in the smart contract code.
Last one is the complexity of blockchain system. Due to the complex logic of blockchain and issues mentioned above, arbitrary developers who want to launch application services with smart contract cannot concentrate entirely on their own business logic, and it is required for them to have advanced knowledge on the blockchain technology. Thus, it is still challenging to adopt blockchain technology to applications. But still, numerous attempts are being made to take advantage of its benefits. In this respect, DDNB would be a troubleshooter which allows to utilize the existing strong points of blockchain and ensure sufficient service reliability at the same time. Table 1 shows a high-level comparison of four different architectures that can be used for application development; the traditional server-client architecture, permissioned and permissionless blockchain, and our DDNB. Each approach has their own pros and cons, but our DDNB has its strong point in conducting business logic as fully-supported as in traditional development while it takes all advantages of private blockchain.
Related Work
This section explores several existing studies related to various attempts for adapting blockchain to applications. In particular, we surveyed three directions; (1) stability or performance of the network nodes maintaining blockchains, (2) new blockchain platforms for applications, and (3) applications on the basis of existing blockchain platforms such as Ethereum and Hyperledger.
Blockchain Network
At a high-level, a blockchain network is (mostly) a peer-to-peer (P2P) overlay network that changes dynamically and exchanges messages continuously. Nodes can join and leave at any time, and the protocols work in a completely distributed manner. To understand this network, Decker et al. experimentally observed that increasing network delays with PoW leads to increased forks, which means an inconsistent state of the blockchain [26]. In 2014, a list of 872,648 different IP addresses known to run Bitcoin node are revealed by Donet et al. They used a client called 'BTCdoNET', a modified version of Bitcoin P2P Network Sniffer, and presented information on the geographic distribution, network stability, and information propagation latency [28]. Most recently, Park et al. collected information of participants of Bitcoin network for 39 days, and analyzed geographical distribution, protocol/client version, and the type of nodes (full-node vs. lightweight-node) [29].
Blockchain Platform
Among several proposals, HyperLedger presented by the Linux Foundation incubates and promotes a range of business blockchain technologies. Hyperledger includes Fabric; an enterprise grade permissioned distributed ledger framework, Besu; an open source Ethereum client written in Java, Sawtooth; a modular platform for building, deploying, and running distributed ledgers and so on. Changes to the consensus protocol are also significant developments. Delegated Proof-of-Stake literally delegates to the 'major node' determined by the voting result of all nodes in the network. The low number of major nodes reduces consensus time and costs [30][31][32][33].
To take advantage of P2P distributed nature of blockchain, several attempts to apply it on IoT systems [34] have emerged. In particular, as the early blockchain systems require a lot of computing resources and huge storage, most IoT related researches have focused on lightening the system or adopting additional module to fit in the hardware limits of IoT devices. Dorri et al. suggests an a lightweight blockchain based architecture for IoT which maintains most of legacy blockchain's security and privacy benefits while virtually eliminating the overheads [35]. This architecture also implemented a hierarchical structure; smart home, overlay network, and cloud storage; with Cluster Head, a cluster of constituent nodes. In a similar way, Novo et al. proposed a decentralized access management system where access control information is stored and distributed using blockchain technology [4]. They separated wireless sensor networks (WSN) from blockchain network and implemented a Management Hub which is an interface between the two. More recently, Lei et al.
proposed Groupchain [36], a public blockchain of a two-chain structure designed for fog computing of IoT services by taking advantage of the security of blockchain while enhancing scalability. It employs the leader group to collectively commit blocks for higher transaction efficiency and introduces incentive mechanism to supervise behaviors of members in the leader group.
Most of these researches focused on addressing the resource-constraint challenge of IoT devices. However, to decrease blockchain network overhead, they compromised the distributed aspect with an aspect of server-client structural. It is a significant difference to DDNB which has no single-point standing for other things (e.g., devices, nodes). Although DDNB is not designed specifically to be well-suited for IoT devices, it is still able to adopt those attempts while assuring distributed Service Nodes. For example, it can be imagined that the additional nodes for certain IoT networks such as Cluster Head or Management Hub run on our service layer of DDNB. DDNB is possible to scale up, not only scale out.
Distributed Application
There has been several attempts to build distributed applications on top of blockchain. For example, Herbert et al. presented a decentralized P2P software validation scheme using blockchain where a user purchasing the license of a software sends cryptocurrency to the vendor on Bitcoin or Bespoke model [37]. Schaubs et al. suggested a blockchain-based trustless reputation system where every user of the system evaluates each other after a transaction, and all the evaluation data are safely stored in the blockchain to prevent manipulation [38]. Xia et al. proposed an access control platform based on blockchain which utilizes blockchain's immutability to record and keep track of every access to patients' medical information data [10]. For this purpose, the structure for blocks and transactions were re-designed to store medical information. More recently, Zhaofeng et al. proposed BlockTDM [39], a blockchain-based trusted data management scheme for edge computing. BlockTDM is a configurable blockchain architecture that includes mutual authentication protocol, flexible consensus, smart contract, block/transaction data management, and blockchain nodes management to provide trust and security in edge computing environment for the large amount of data gathered from edge terminals or Internet of Things (IoT) devices.
Aforementioned are just a few examples, and there are/were many more attempts. However, most blockchain-based application developers face the common problems: they need to understand how blockchain system works or build a blockchain platform which is specially designed to provide specific services. With our 'DDNB', they can focus on implementing the service logic they want to provide to clients while utilizing all of the blockchain's properties.
Design of DDNB
DDNB is composed of two separated decentralized networks, one is a blockchain network that forms the distributed ledger, and the other is a service network that performs application business logic. This architecture allows service providers to treat service network as a server-side endpoint and alleviate concerns about the blockchain network. Compared to prior distributed applications that implement their services directly in blockchain network using smart contract, DDNB reduces the fundamental difficulties that need to be considered. To do so, however, it is required for the nodes right above blockchain layer to prove their reliability and trustworthiness. To accomplish this in DDNB, service layer nodes self-verify each other by going through a mutual verification process. Overlaying blockchain layer with verified nodes allows the blockchain nodes to remain as permissioned-only without any additional verification. Terminal Node (TN) takes on a client-side user application role. It passes requests from users to Service Nodes (SN) to provide a specific service. Depending on how service providers, which consist of TNs and SNs, are designed, a number of logics can be implemented diversely.
Node Constitution
Service Node (SN) provides and processes actual services. Therefore, SN is where actual service is designed and implemented. SN can act like a server in traditional server-client architecture, and so is the scope of logics which can be implemented in SN is almost unlimited. Thus, various types of services can be provided, more than any smart contracts can do. SN performs a pre-defined function according to the request of TN. It is responsible for delivering the results of the execution to Blockchain Node (BN), and returning them to TN. In this process, no individual data are stored in SN. For this reason, unlike typical web applications, session and sensitive information are managed in TN, not in SN.
Blockchain Node (BN) is the actual database for storing service related data. DDNB is designed to use permissioned blockchain so that BN can provide high transactions per second (TPS) and security while maintaining characteristics of blockchain. Also, by placing BNs behind VPN and firewall, it is possible to access control SNs to only those that are authorized. Therefore, service providers do not need to build their own nodes to participate in the blockchain. Instead, they only need to consider the consortium of the service networks. There can be multiple service networks consisting of individual domains to provide different services on a single blockchain network.
Blockchain Layer
Node Agent
DDCP (Dynamic Decentralized Certification Protocol)
SN provides pre-defined services to TN, and TN generates specific transactions to use those services. Before those transactions can access data in BN, DDNB verifies the integrity of the transaction through DDCP (Dynamic Decentralized Certification Protocol). In DDCP, TN segments a transaction logic into several query stages and sends a first query to a randomly selected SN. The selected SN executes one of the divided processes sequentially as shown in Figure 2. During this process, BN generates a short hashed string, nonce. DDCP uses this nonce to act as an "one-time authentication key" for the transaction. This nonce is removed from the blockchain if the transaction uses the nonce (one-time use), or after a period of time (timeout). From BN's perspective, it is one of the effective ways to verify transactions. When the transaction reaches BN as a final step, BN will not refuse the execution as long as it is a completed form signed by TN. For this reason, the transaction packet can be copied and sent over and over even if it has already been sent and there is no such nonce value. Through the above process, it is possible to determine whether the packet is forged or not, and also the integrity of the transaction can be ensured. In addition, this process can be executed in parallel since multiple SNs with guaranteed integrity would work on concurrently.
DDNC (Dynamic Decentralized Network Consensus)
To implement the service layer as an overlay network on top of a permissioned blockchain layer, the authenticity of all service nodes must be verified first. As anyone can participate in service layer, a verification process to ensure SNs' reliability and trustworthiness is essential. SN should act as intended by the application business logic, and every SN in a single domain acts like a clone of itself. This means that all SNs must always output a consistent value given some input. Based on this fact, a simple execution and comparison is carried out between SNs to determine whether they behave identically. This verification process is named DDNC (Dynamic Decentralized Network Consensus).
In DDNC, if a new node wishes to participate in the service layer, it will be verified by preexisting nodes (verifier nodes) as shown in Figure 3. Candidate node sends a request for the list of SNs to seed node, then asks for 'add node' permission to all SNs in the list. Each verifier node examines whether the candidate node behaves identically or not via the 'Service Node Verification Protocol' in Figure 4, and adds it to the host list if the candidate node behaves equally to a verified node. Otherwise, it will be purged. This 'peer-review' process runs not only when 'add node' is requested, but also periodically to make sure that all nodes continue to act as intended. Therefore, service layer can detect abnormal SNs and remove them from the network automatically and autonomously at runtime. With this verification process, SNs are assured and confirm the share of same domain and mutual trust. Furthermore, it gives service providers the advantage of being able to scale out their SNs flexibly while reducing unexpected risks.
New node
'add new node' process (service node join)
SN-BN Communication Process
One final component of our layered architecture is the node agents between SN and blockchain network. Node agents are mediators between two networks and all accesses from SNs to the blockchain network pass through the node agents. There can be multiple node agents. If a node agent receives a request from an unauthorized SN, the node agent registers the SN into a blacklist and blocks additional requests for a certain period of time. In addition, node agents serve as a load balancer to prevent load concentration in one blockchain node and as a dispatcher of events from BNs.
Finally, using this 3-layer architecture, DDNB implements various functions to support application business logic. For example, TN can connect to a randomly selected SN (with function 'FindHost'), TN can get nonce from BN through SN (with function 'GetNonce'), TN can get an account balance of specific address (with function 'GetBalance'), and TN can send a coin amount (virtual currency) from an account to another one (with function 'SendCoin').
•
FindHost is the function that TN uses to connect to randomly selected SN through seed node. As the function does not reach BN, execution time is expected to be fast and stable. Changes in the number of SNs is what the function is most sensitive to.
•
GetNonce requests a nonce, a hashed string sent from BN, to identify the transactions which are disassembled by several steps. Since any step in any transaction is processed on a randomly selected SN, BN publishes a tiny hashed string for discrimination.
•
GetBalance is a function that returns the balance of an account. It internally executes FindHost twice, GetNonce once, and then finally a GetBalance query. This represents a single step transaction that queries the network several times.
•
SendCoin is the most representative transaction which invokes changes in the blockchain. It consists of InfoTrans query which validates syntactic errors first, and then CreateTrans query which validates semantic errors and executes the function as a final step. Every steps of the process is shown in Figure 5 in detail.
API documentations of other functions will be available on our website at [40].
Performance Evaluation
This section presents the methodology and evaluation results of DDNB. Two properties of DDNB are investigated in this evaluation; performance and stability.
Experiment Setup
The environment on which the experiments are conducted are as follows. Terminal Nodes run on a desktop PC with Intel(R) Core(TM) i5-4670 CPU @ 3.40 GHz and 8 GB RAM, with Windows 10 Pro x64 multi-threading as the operating system. Service Nodes and Blockchain Nodes are established on four different enterprise level servers. Service layer is composed with Docker containers to run a large number of SNs. Blockchain layer is built as a permissioned blockchain based on Hyperledger.
On this setup Figure 6, four most frequently used functions (among a larger set) were chosen to evaluate DDNB as follows: FindHost, GetNonce, GetBalance, and SendCoin. In order to analyze the correlation between execution time and the number of steps a function has, each function was chosen according to the number of query steps they require. Each function is executed total of 1000 times by multiple TNs (end-users) on various topologies having different sets of TNs and SNs. All functions are executed asynchronously, that is, all requests are submitted without waiting for response from BN. The number of TNs are set to 5, 10, 20, 50, 100, 250, 500, and 1000 (emulating the number of end users), and SNs 1, 5, 10, and 20 (emulating service provider servers). The interactions between TN and SN are accomplished with HTTP requests on C++ application, and all queries between SN and BN are transmitted upon the RESTful APIs.
We assess the performance of DDNB in term of the average execution time and transaction per second (TPS). For this purpose, following data are collected for each transaction. Step (p) start time (T p_start ) and Step (p) end time (T p_end ) are the start and end times of every step (p) in a transaction (when the request was responded from other node). Although detailed records have been made to determine the cause of outliers, no step-by-step analysis is conducted for the performance evaluation of the overall architecture.
Then for each topology, execution time is the total amount of time which the system took to execute and confirm all transactions in the data set (T execute = ∑ 1000 i=1 T deploy − T complete ). Since we are more interested in the average execution time for each TN, we divide it by the number of TNs performed in the experiment. Figure 7 plots the execution time of 1000 transactions experimented on various topologies for the four functions. Commonly, the average execution time decrease as the number of TNs increases (thanks to increase in parallel, simultaneous execution of transactions), whereas the change with increasing SN is not striking. This is an interesting result as we were concerned that a large number of SNs might result in slower execution time due to validation overhead of DDNC. It turns out that, thanks to parallel execution, the overhead of DDNC does not scale proportionally with the number of SNs. The outliers for GetNonce and GetBalance are from packet drops between SN and BN due to authentication failure. For example, if they wait for a nonce that has already expired or disappeared, the transaction will wait until timeout before it retransmits. This is an obvious problem observed in the experiments and worth a future work. SendCoin, on the other hand, shows similar aspects overall with no noticeable singularities, although it has the largest average execution time in every experiment due to its complexity (in terms of query steps required).
Transactions Per Second
TPS is the most commonly used indicator for network performance evaluation in blockchain architectures. Table 2 presents the TPS results for the GetBalance and SendCoin functions from the same experiment in Figure 7. It shows that in general, TPS is higher with larger number of TNs thanks to parallelism and randomization of SN selection, and the number of SNs do not have direct correlation with TPS due to the same reasons as the execution time; that is, verification overhead from the increase in the number of SNs is cancelled out thanks to the improvement from parallelism. The combination with the highest in the fixed number of TNs is highlighted in gray.
Comparison with other Platforms
To better understand the performance characteristics of DDNB, we compared it with other blockchain application platforms that employ the concept of smart contract. Nasir et al. measured the performance of two different versions of Hyperledger Fabric: v0.6 and v1.0 [41]. Between the two versions, there was a major re-architecture in the overall platform, including changes in the consensus model and the addition of support for channels. They deployed a simple money transfer application (chaincode) which has both the 'invoke' and 'query' functions. Pongnumkul et al. measured the performance of two permissioned blockchain implementations-Ethereum and Hyperledger Fabric v0.6 [42]. While the main Ethereum platform uses a public blockchain network, the software is open-source and allows developers to configure the network to work as a private network, where only granted nodes can participate in the network.
The two studies evaluated performance using various metrics, and we decided to focus on a common metric for all studies; TPS of executing 1000 transactions. The functions used in the comparison are the two representative functions in all compared schemes: invoke which changes the state of the blockchain, and query, for simple queries. As the performance result data for Hyperledger Fabric v0.6 differ in the two studies, we used median values for comparison. Table 3 presents the TPS of the transactions executed on the respective platforms. Since the Blockchain Layer of DDNB is based on Hyperledger, the difference in performance between Hyperledger is not remarkable. However, DDNB's query function (GetBalance) outperforms Ethereum by 10.5 times, and 6.05 times for the invoke function. The benefits of separating the business logic in DDNB can be maintained while there is no performance penalty.
Stability Assessment
We now evaluate the stability of DDNC. Since DDNC's verification protocol is based on filtering out abnormalities among 'normal clones', the entire SN system can be subverted if the abnormal nodes occupy a 51% majority simultaneously. Therefore, the shorter the time it takes to execute the verification protocol, the more overhead it may have, but the more robust it is. In order to conduct experiments to evaluate the stability of DDNC, we measure how much time it took to restore after applying an intentional forging for arbitrary SNs in a single domain of the Service layer. Each experiment was conducted to obtain the time taken to blacklist all forged SNs with a DDNC set to 10-minute periods. Then we randomly and deliberately forged SNs among normal SNs. Table 4 shows the average time taken for the experiments which were performed 10 times respectively. Intuitively, the more nodes are forged, the longer it takes to cleanse the entire system. Not only it has more forgeries to find, but it takes more time for each node because larger ratio of forgery makes more difficult to determine whether it is normal or forged. As can be seen from Figure 8, the case of (a) 9 forgeries in 20 SNs is mostly blacklisted in the first period except the last or second node. On the other hand, in case of (b) 3 forgeries in 20 SNs which forged less number of SNs, DDNC blacklists all forged nodes within first period in every experiments. This result is consistent with when comparing (b) and (c). Also in (c) 3 forgeries in 10 SNs, several experiments were observed that took more than one period to find the last forgery.
Application Case Study
We now present an application case study using DDNB, implementations of sample application services that benefit from the use of DDNB architecture.
Nodehome is a development environment for blockchain-based applications and also an operating platform for running developed services on blockchain. Even without a complex understanding of blockchain, developers are able to implement their own services according to the API rules (Terminal Layer ↔ Service Layer) provided by the Nodehome platform [40]. Developers need not worry about the terminals on which they run their services, nor how to prepare the hardware and network infrastructure for their blockchain. If you can provide hardware that can run pre-developed services, you can share the profits generated by services on Nodehome platform. Nodehome platform is equipped with various blockchain based means to ensure trust, such as proof of assets, preservation of goods, movement of goods, registration of property, evidence of actions, and protection of information. Many blockchain-based services are currently being implemented and executed on the Nodehome platform, and the information and assets are exchangeable between different services. Nodehome platform is serviced currently in the form of a testnet, and web pages (https://nodehome.io/) provide block explorers that provides detailed information about blocks, addresses, and transactions. In addition, a simple guide for architects, developers, and users is provided as well.
As shown in Figure 9, a few client applications are also available. NH Token Wallet is a wallet service that manages tokens issued from various applications in the DDNB's Service Layer (of course, some applications may not provide token-related services intentionally). fMusic-music streaming service is one of the practical applications built on DDNB. Since it is provided as a distributed service contrasting to existing services, it is able to connect singer-to-listener without sound production or distribution company. Not only this simple difference, but in intellectual property or derived user data perspective, it can also seek several new attempts that have not been possible before.
Conclusions
This work presented DDNB, a blockchain-based application service platform that allows application developers to take advantage of the integrity and reliability of blockchain technology while tackling the disadvantages of existing blockchain systems. DDNB consists of three layers: Application clients run in the public Terminal layer on top of the Service network, and accesses the permissioned Blockchain network only through the service network. DDNB enhances the reliability of the system through our proposed self-regulating mutual node verification and distributed query-chain process while reducing the consensus delay of blockchain with the aid of the unique service layer that separates the applications from the permissioned blockchain layer. Moreover, developers are not required to have deep understandings of the blockchain technology to develop blockchain based services on the Terminal layer, and end-users do not need to maintain blockchain nodes by themselves for using the services. With all of these benefits, the performance comparisons with other blockchain platforms showed that DDNB has no performance penalty, and the robustness of DDNC against random malicious attack was also evaluated. We also presented real-world application case studies that are currently being developed and operated. We anticipate various invigorating blockchain based services to emerge via DDNB. | 7,371 | 2020-07-29T00:00:00.000 | [
"Computer Science"
] |
Climate sensitivity, agricultural productivity and the social cost of carbon in FUND
We explore the implications of recent empirical findings about CO2 fertilization and climate sensitivity on the social cost of carbon (SCC) in the FUND model. New compilations of satellite and experimental evidence suggest larger agricultural productivity gains due to CO2 growth are being experienced than are reflected in FUND parameterization. We also discuss recent studies applying empirical constraints to the probability distribution of equilibrium climate sensitivity and we argue that previous Monte Carlo analyses in IAMs have not adequately reflected the findings of this literature. Updating the distributions of these parameters under varying discount rates is influential on SCC estimates. The lower bound of the social cost of carbon is likely negative and the upper bound is much lower than previously claimed, at least through the mid-twenty-first century. Also the choice of discount rate becomes much less important under the updated parameter distributions.
Introduction
The marginal social cost of carbon dioxide emissions, usually shortened to the social cost of carbon (SCC), is typically derived using integrated assessment models (IAMs). While over 20 global-scale IAMs have been developed, three are specifically tailored to aggregate benefit-cost analysis and thus are most widely used in SCC estimation (Weyant 2017). These are called Dynamic Integrated Climate-Economy (DICE, Nordhaus 1993), Framework for Uncertainty, Negotiation, and Distribution (FUND, Tol 1997;Anthoff and Tol 2013) and Policy Analysis of the Greenhouse Effect (PAGE, Hope 2006). Modified versions of these three models were used by the US Interagency Working Group (IWG 2010(IWG , 2013 for regulatory SCC estimates that have been particularly influential on climate and energy regulations in the US and elsewhere.
While sharing many similarities, these IAMs also have some key differences. FUND, for instance, allows the agricultural sector in some regions to benefit from increased atmospheric carbon dioxide fertilization, while the others set such benefits to zero. PAGE incorporates the possibility of catastrophic damages due to abrupt and extreme "tipping point" events 1 in the form of a long upper tail of costs at positive probabilities that the other models assume to be zero. These differences imply SCC estimates with a predictable ranking, from lowest to highest: FUND, DICE and PAGE.
Common to all models is the fact that, because of the long time horizons of the computations (over 200 years), the choice of discount rate is very influential on the results (Anthoff and Tol 2013). The assumed structure of the damage function is also critical (Marten 2011), as is the choice of equilibrium climate sensitivity (ECS), which represents the long-term temperature change from doubling atmospheric CO 2 , after allowing sufficient time for the deep ocean to respond to surface warming. Choices of these parameters dominate SCC estimates (Webster et al 2008;Wouter et al. 2012). 2 Anthoff and Tol (2013) further report that agricultural productivity and air conditioning energy demand are critical parameterizations for determining the SCC in FUND.
Monte Carlo analysis of plausible ranges of a small number of parameters has yielded such a wide dispersion of SCC results that IAMs have been criticized as arbitrary and potentially meaningless (Tol 2017;Pindyck 2013). But as Weyant (2017) points out, the models are still useful when understood as "if-then" statements. Rather than seeking a single canonical SCC estimate, the models allow traceability from assumptions to implications. Thus increased precision of SCC estimates will not primarily come from increasing the complexity of IAMs themselves but from debating structural assumptions and reducing the uncertainty over key parameter values (Gillingham et al. 2018). Consequently, there is a need to bridge efficiently between empirical research on climate-related parameters and Monte Carlo IAM analysis.
Here we focus on agricultural productivity and climate sensitivity. We do not explore the role of future extreme or abrupt events because of the lack of empirical basis for Monte Carlo simulations. The distributions in question are influential on simulated damages but difficult to specify with sufficient precision. Weyant (2017) notes that exclusion of just the top 1% of the damage estimates in the PAGE model 1 3 Environmental Economics and Policy Studies (2020) 22:433-448 causes the standard deviation of its SCC estimates to fall from $266 per ton to $56 per ton, but there is no observational basis for parameterizing either the probabilities of the events or the economic costs. In lieu of an observed distribution PAGE makes use of expert elicitations, the value of which are debatable, while FUND and DICE have been used to examine extreme events under special parameterizations (Link and Tol 2011;Nicholls et al. 2008;Cai and Lontzek 2019). For our purposes herein, we simply note that allowing for the possibility of low-probability catastrophic events will imply a higher SCC estimate, with the amounts entirely dependent on the scenario and the assumed probabilities.
Because of the structural differences, it is not valid simply to average across FUND, DICE and PAGE in the hope of obtaining an unbiased mean. It is also factually incorrect to set CO 2 fertilization to zero as is done in DICE and PAGE. We confine our analysis to the FUND model (version 3.8.1 as used by the US IWG 2013), since it has the appropriate structure to allow CO 2 fertilization benefits, and we examine changes in the SCC based on new parameter distributions. We review research published long after the original calibration of FUND regarding agricultural productivity changes under higher CO 2 levels to update the Monte Carlo simulation range.
We also consider uncertainty regarding climate sensitivity, as did the IWG (2010,2013), Dayaratna et al. (2017), and Gillingham et al. (2018). While IAMs have been run under a customary ECS range derived from climate models, little attention has been paid to the many empirically based estimates published in the last half decade which have tended to be lower than the model-based range. Dayaratna et al. (2017) show that use of an empirically constrained ECS parameter distribution substantially reduces the SCC estimate from DICE and FUND, which we also find herein.
We will examine results under a range of discount rates from 2.5 to 7.0%. The high end rate is above that typically used in long-term climate studies, but remains part of US Office of Management and Budget Guidelines so is commonly used for cost-benefit analysis (US Office of Management and Budget 2003). We primarily focus our discussion on the results under a 3.0% discount rate. Under the modelbased ECS range, the choice of discount rate matters acutely, with a change from 2.5 to 5.0% sufficient to reduce the SCC by about 90% through 2050. However, under an empirically based ECS, the choice of discount rate ceases to matter. Such distributions still induce proportionately large changes in SCC and also causes sign changes, but the absolute changes are very small because the SCC itself collapses to a very small amount.
Agricultural productivity change
It has been known for decades that increasing the atmospheric concentration of carbon dioxide enhances plant growth (Idso and Idso 1994;Cuniff et al. 2008) both by raising the rate of net photosynthesis and increasing water use efficiency within the plant. For numerous crop types around the world, CO 2 fertilization more than offsets 1 3 negative effects of climate change on crop water productivity, with some of the largest gains likely in arid and tropical regions (Derying et al. 2016). An additional benefit of climate warming arises from lengthening the growing season-the time between the last killing frost of the spring and the first one in the fall. Studies of US maize that take into account farmer adaptation to changing growing conditions confirm the potential for net yield gains under climate change (Butler et al. 2018).
FUND attempts to capture these changes in a simple form. The FUND model estimates agricultural output as a fraction of total output, where the fraction declines over time at a rate consistent with historical data. Specifically, the output share of agriculture at year t is the product of the 1990 agricultural output share and y 1990,r ∕y t,r 0.31 where y t,r is GDP per capita in year t for region r. From 1990 to 2050, this expression declined steadily from 1.0 to about 0.7, so if 5.0% of an economy's output is agricultural as of 1990, by 2050 that would decline to about 3.5%. This equation determines the potential welfare associated with regional agricultural output, and actual welfare is then determined by a parameterized function that depends on the temperature level, speed of climate change, and atmospheric CO 2 . Changes in each of these affect welfare by changing regional agricultural yields, which in turn affect prices and trade patterns. Consequently, the function parameters vary among regions. A reduction in the yield of a particular crop, for example, will tend to harm import-dependent regions but might benefit exporting regions. The temperature-level effect is represented by a quadratic equation with an implicit peak that a region may either be approaching or diverging from as it warms. The temperature change effect penalizes productivity in intervals when temperature changes rapidly from one year to the next. We take the parameter distributions associated with these effects as given.
The CO 2 fertilization effect A r for region r is determined by a logarithmic function: where CO 2 (t) is the current atmospheric CO 2 concentration with 275 parts per million assumed to be the preindustrial level and r is a region-specific constant derived by calibration to the results of a number of studies done using computable general equilibrium models, chiefly Tsigas et al. (1997) who separated out the CO 2 fertilization effect. An earlier global general equilibrium study, Kane et al. (1992), reported potential yield decline due to temperature increase based on simulations that did not include CO 2 fertilization, but added that based on the limited amount of information then available, doubling the amount of CO 2 in the atmosphere could increase yield by about 15%. By the time of Tsigas et al. (1997) more information was available and they incorporated global yield gains averaging between 20 and 30% for CO 2 doubling. The effects were large enough effectively to negate the losses from moderate climate changes and generate some regional net gains. The authors thus emphasized in their conclusions the importance of including CO 2 fertilization effects in future studies so as not to overstate the net damages of climate change in agriculture. The parameterizations in FUND are consistent with this early evidence. Of particular note, while the categories wheat and other crops experience net gains from the combination of warming and CO 2 fertilization, rice does not, based on the limited studies then available that suggested CO 2 fertilization would insufficiently offset damages due to warming. Because of the importance of rice in China and Asia, this assumption is influential on overall climate damages (Tsigas et al. 1997, Table 11.2).
Three forms of evidence gained since then indicates that the CO 2 fertilization effects in FUND may be too low. First, rice yields have been shown to exhibit strong positive responses to enhanced ambient CO 2 levels. Kimball (2016) surveyed results from free-air CO 2 enrichment (FACE) experiments, and drew particular attention to the large yield responses (about 34%) of hybrid rice in CO 2 doubling experiments, describing these as "the most exciting and important advances" in the field. FACE experiments in both Japan and China showed that available cultivars respond very favorably to elevated ambient CO 2 . Furthermore, Challinor et al (2014), Zhu et al. (2015) and Wu et al. (2018) all report evidence that hybrid rice varietals exist that are more heat-tolerant and therefore able to take advantage of CO 2 enrichment even under warming conditions (2013). Collectively, this research thus indicates that the rice parameterization in FUND is overly pessimistic.
Second, satellite-based studies have yielded compelling evidence of stronger general growth effects than were anticipated in the 1990s. Zhu et al (2016) published a comprehensive study on greening and human activity from 1982 to 2009. The ratio of land areas that became greener, as opposed to browner, was approximately 9 to 1. The increase in atmospheric CO 2 was just under 15% over the interval but was found to be responsible for approximately 70% of the observed greening, followed by the deposition of airborne nitrogen compounds (9%) from the combustion of coal and deflation of nitrate-containing agricultural fertilizers, lengthening growing seasons (8%) and land cover changes (4%), mainly reforestation of regions such as southeastern North America.
Zhu et al. used satellite-sensed leaf area index (LAI), which does not directly translate into grain yields-rather it is a measure of direct fertilization and the production of dry matter. However, for grassland, the most common agricultural land use, LAI in fact does relate directly to yield since grassland vegetation is consumed by grazing animals, and it is harvested for hay to feed livestock in the nongrowing season as well as to feed livestock removed from pasture. Also, in a new analysis of satellite LAI data, Gao et al. (2018) reported that agriculture-related trends were more than double those of natural vegetation, indicating that trends in LAI are likely indicators of increased agricultural productivity. Munier et al. (2018) likewise found a remarkable increase in the yield of grasslands. In a 17-year (1999-2015) analysis of satellite-sensed LAI, during which time the atmospheric CO 2 level rose by about 10%, there was an average LAI increase of 85%. A full 31% of earth's continental land outside of Antarctica is covered by grassland, the largest of the three agricultural land types they classified. Also, for summer crops, such as maize (corn) and soybeans, greening increased by an average of 52%, while for winter crops, whose area is relatively small compared to those for summer, the increase was 31%. If 70% of the yield gain is attributable to increased CO 2 , the results from Zhu et al (2016) imply gains of 60%, 36% and 22% over the 17-year period for, respectively, grasslands, summer crops and winter crops, associated with only a 10% increase in CO 2 , compared to parameterized yield gains in the range of 20-30% for CO 2 doubling in FUND.
Third, there has been an extensive amount of research since Tsingas et al. (1997) on adaptive agricultural practices under simultaneous warming and CO 2 enrichment. Challinor et al. (2014) surveyed a large number of studies that examined responses to combinations of increased temperature, CO 2 and precipitation, with and without adaptation. In their metanalysis, average yield gains increased 0.06% per ppm increase in CO 2 and 0.5% per percentage point increase in precipitation, and adaptation added a further 7.2% yield gain, but warming decreased it by 4.9% per °C. In FUND, 3 °C warming negates the yield gains due to CO 2 enrichment, but this is not what the Challinor et al. results imply. Suppose that over the next 100 years, CO 2 doubles from 400 to 800 pm while temperatures rise by 3 °C and precipitation increases on average by 2%, Challinor et al.'s regression coefficients would imply an average yield increase of 2.2% in the tropics without adaptation versus 9.3% with; and 5.0% outside the tropics without adaptation versus 12.1% with, indicating the productivity increase in FUND is likely too small. Figure 1 provides further evidence based on the recent historical record. It shows total global output of maize, rice, soybeans and wheat per year from 1980 to 2017. Over this interval, the global average land surface was estimated to have warmed by 1.0 °C, 3 the CO 2 concentration rose by 68 ppm 4 and crop output doubled. Hence, the record since 1980 provides prima facie evidence that the combined effects of warming, CO 2 fertilization and adaptation can have positive net growth results at the global level, and the meta-analysis results indicate the direction of this balance is likely to persist.
In light of these issues, we examine the effects of increasing the r parameters in FUND by 15% and 30%, namely multiplying them by 1.15 and 1.30. These changes are conservative in view of the evidence on CO 2 -driven growth enhancement; however, they provide guidance on the sensitivity of the SCC to the emergent information on agricultural productivity.
Climate sensitivity
ECS is the most basic measure within an IAM of CO 2 impacts on climate. Secondary impacts, such as sea-level rise, changes in storm intensity, depth and frequency of droughts and floods, etc., all depend on a reliable ECS.
The mean ECS of the climate models used in the most recent Assessment Report of the United Nations' Intergovernmental Panel on Climate Change (IPCC 2013) was 3.2 °C by the IPCC and 3.4 °C in the peer-reviewed literature describing those models (Andrews et al. 2012). The IWG applied Monte Carlo analysis to the ECS parameter using a distribution published in Roe and Baker (2007), based on climate models, which has a median value of 3.0 °C, a 90% confidence interval range from 1.91 to 5.86 °C, and is truncated at an upper limit of 10 degrees Celsius (IWG 2010). Roe and Bauman (2013) criticized the application of the Roe and Baker (2007) distribution in IAMs, because the higher climate sensitivities imply time spans to equilibrium which are inconsistent with the assumed speed of adjustment (via ocean heat uptake efficiency) in IAMs. The time to equilibrium in simple climate models goes up with the square of ECS, and the fat upper tail of ECS values implies such long adjustment times that realization of such warming takes over a 1000 years (Roe and Bauman 2013, p. 653). IAMs apply these high-sensitivity estimates on much shorter time scales, which Roe and Bauman argue involves physically impossible outcomes.
More fundamentally, the climate model-based ECS distributions have been challenged within the climate literature as potentially being arbitrary. There are numerous tunable parameters in climate models (Hourdin et al. 2017), and a range of sensitivity values can be made to fit the historical record equally well as long as tunings that increase climate sensitivity are accompanied by compensating adjustments elsewhere, which appears to be the case (Kiehl 2007).
A valid ECS estimate for use in IAMs must therefore be based on empirical constraints. Use of climate model-based metrics to construct Bayesian models may not get around the problem of arbitrariness. Lewis (2013) criticized the use of informative priors in Bayesian ECS derivations similar to that used in Olsen et al. (2012), in which likelihoods are derived from diagnostics of model-observational discrepancies, which are in turn functions of the model parameters. Because of parameter interdependence in the models, the diagnostics do not strongly constrain the ECS distribution and the posterior density is typically very close to the subjective prior. As an example, he reproduced an earlier Bayesian ECS estimation that had yielded a distribution similar to that in Roe and Baker. He found that under an informative prior, large sections of the posterior ECS distribution were unresponsive to the observations. Application of an objective Bayesian method on the same data set, however, yielded a lower and more tightly constrained distribution with a mode of 1.6 °C and a 90% credible interval of 1.2-2.2 °C. Lewis (2013) noted that this mode was identical to that found in two contemporaneous empirical studies (Aldrin et al. 2012;Ring et al. 2012) that had estimated relatively simple energy balance models on observational data. This latter approach has subsequently been widely applied yielding modal ECS values consistently below 2.0 °C and much narrower confidence or credible intervals (Otto et al. 2013;Masters 2014;Lewis and Curry 2015;Skeie et al 2014 Lewis andCurry 2018). Of particular interest is the distribution in Lewis and Curry (2018) since it is conditioned on a joint estimation with ocean heat uptake efficiency, uses up-to-date estimates of aerosol forcing from the IPCC and explicitly addresses concerns about spatial variation in effective forcing and other potential deficiencies of empirical energy balance model methods. Based on the post-1850 Hadley Centre surface temperature data set their best estimate of ECS is 1.50 with a 5-95% range of 1.05-2.45 °C. 5 By conditioning the estimate on ocean heat uptake efficiency the method yields an ECS distribution consistent with the main observed constraint on time to equilibrium, addressing the concern in Roe and Baumann (2013).
Beyond energy balance models, there are other even more strictly empirical methods. One approach is to estimate transient climate sensitivity (TCS, the estimated warming from doubling greenhouse gas levels over a 70-year span without allowing the oceans fully to adjust) then scaling it up to an ECS estimate based on an estimated ratio of the two. Christy and McNider (2017) used satellite bulk atmospheric temperature data from 1979-2016 and estimated a TCR of 1.1 ± 0.26 °C which is similar to the Lewis and Curry (2018) estimate of 1.2 °C (5-95% 0.9-1.7 °C). Using the estimated ECS/TCR ratio of 1.3 in Lewis and Curry (2018) implies a corresponding ECS mode of 1.4 °C in Christy and McNider (2017).
These are very different ECS ranges from the ones used by the IWG and unsurprisingly they yield much lower SCC estimates. We will review arguments in the final section why the lower estimates are relevant for IAM studies.
In our implementation herein, the Christy and McNider (2017) and Lewis and Curry (2018) distributions were sampled using inverse transform sampling. We had the full ECS distribution from Lewis and Curry (2018) from which to sample. For the Christy and McNider (2017) distribution, we fit a generalized gamma distribution to 5th, 50th, and 95th percentiles of the associated distributions via the method of least squares. Figure 2 shows plots of these probability density functions, as well as the Roe-Baker (2007) distribution used in this study.
Discount rate
The long-running debate about appropriate discount rates for climate change policy analysis will not be reviewed here. Considerations of uncertainty and the Environmental Economics and Policy Studies (2020) 22:433-448 ethical argument against time preference lead to a preference for a low discount rate, whereas viewing the discount rate as an opportunity cost of capital leads to a preference for a higher discount rate. We will present results using 2.5%, 3%, 5% and 7%.
Economic intuition
Simulations of the economic impacts of CO 2 emissions differ from those of conventional pollution in a few important ways, which taken together give rise to the possibility that the SCC can be negative as well as positive. First, whereas air contaminants like particulates and nitrogen oxides are directly injurious to human health, CO 2 is not. Because exhaled breath has a very high CO 2 concentration, when people travel in cars or spend time in crowded buildings (such as office towers) they routinely experience CO 2 exposure at levels far higher than outdoors, without noticeable effects. Second, CO 2 is a principal food source for plants, and if the only environmental effect of CO 2 were its aerial fertilization of plant life then emissions would almost certainly be a net benefit. But (third) its other main environmental effect is its infrared absorption property, which gives rise to projected atmospheric warming as outdoor CO 2 levels increase. Here again the effects on plants, animals and people are complex and may involve gains as well as losses. A longer growing season and less harsh winters may be a net benefit in some regions, whereas more drought and heat stress would reduce agricultural productivity. Also, if changing temperatures increase (decrease) the risk of extreme weather events, economic damages will in consequence increase (decrease).
IAM simulations attempt to represent these offsetting factors to give an idea of circumstances under which the net effects would be positive or negative. If the overall economic effects of increased CO 2 are positive, this implies a negative SCC, and vice versa. By assuming away CO 2 fertilization effects, DICE and PAGE leave out a potential beneficial side effect and their resulting SCC estimates are, as a consequence, higher. Since DICE and PAGE assume CO 2 only causes damages their SCC estimates can never go negative. The damage function in FUND is not a single function in temperature and sea-level rise (as is the case with DICE), instead it is an accumulation of sector-and region-specific effects that depend on temperature and other climatic parameters. A fitted line through global net economic damages from warming has, in the case of FUND, a segment that goes below zero for low levels of warming, implying a negative SCC. Some previous analyses have shown this outcome (see IWG 2010, Fig. 1a, also see Dayaratna et al. 2017). The fact that DICE and PAGE cannot generate a negative SCC does not make FUND an "outlier", the restriction is imposed on DICE and PAGE by assumption. If they allowed for CO 2 fertilization effects comparable to those in FUND, then they would likely generate negative SCC estimates over moderate warming intervals as well. Table 1 reports on results using the Roe and Baker (2007) distribution (corresponding to the IWG 2013) allowing for a 15% and 30% increase in the agricultural productivity parameters. We only show the 3.0% discount rate case since the relative changes are similar for the other discount rates. Decadally from 2020 to 2050, the mean SCC grows from $19.33 to $27.06 per ton of CO 2 under the base case. 6 A 15% increase in r reduces this estimate only modestly, becoming $18.14-$26.30 per ton, a change of − 6.2% in 2020 but only − 2.8% by 2050. But a 30% productivity coefficient increase yields a proportionately larger effect, with the SCC ramp becoming $14.75-$20.38, a consistent reduction of about 24% compared to the base case values in all years. This percentage change indicates that the increased CO 2 productivity component has a nonlinear impact on the SCC, though the gains do level out. In an unreported sensitivity analysis, we increased the r parameters by 75%, and the 2020-2050 SCC ramp became, respectively, $10.72, $12.76, $15.04 and $17.56, Table 1 Mean social cost of carbon in FUND model using Roe-Baker (2007) ECS distribution and original agricultural CO 2 fertilization parameter (Ag + 0% column), then with CO 2 fertilization parameter increased by 15% and 30% (Ag + 15%, Ag + 30%, respectively)
Results
Percentage changes in brackets are relative to the base case in column 1 Year a drop of 45% as of 2020 shrinking to 35% as of 2050 relative to the base case. Consequently, we conclude that updating the agricultural effects of CO 2 fertilization yields a modest but important change in the SCC estimates. If the productivity gains are only 15% higher the effect is relatively minor, but they apparently grow quickly thereafter and a 30% gain causes the SCC estimate to fall by nearly one quarter. Table 2 shows the effect of changing to the Lewis and Curry (2018) ECS distribution (column labeled LC18), then introducing higher agricultural productivity parameters (columns labeled LC18 + 15 and LC18 + 30, respectively). Results for discount rates of 2.5-7% are shown in different panels. As has also been illustrated in prior research (Dayaratna et al. 2017), it is clear that the ECS parameter choice is highly influential on the SCC estimates. Under a 3.0% discount rate, changing to the LC18 case causes the mean SCC estimate to drop from $19.33 in 2020 to only $1.61. Adding 15% and 30% to the agricultural productivity coefficient drops this further to − $0.82 and − $2.74, respectively, implying that, after taking into account enhanced agricultural productivity and recent empirical evidence on climate sensitivity, CO 2 is not a negative externality in FUND as of 2020. Even as far forward as 2050, and even under a 2.5% discount rate, the SCC remains negative using the Lewis and Curry ECS estimate under a 30% gain in agricultural productivity. Figure 3 compares the results for the 2.5% discount rate case, clearly showing that the choice of ECS parameter is very influential even at a low discount rate. Table 2 Mean social cost of carbon in FUND model at discount rates of 2.5%, 3%, 5% and 7%, using Roe and Baker (2007) ECS distribution and Lewis and Curry (2018) ECS distribution ("LC18"), under the base case (second column), and with 15% and 30% increases in the CO 2 fertilization parameters (LC18 + 15%, LC18 + 30%, respectively) In the last three columns the entry shows the SCC estimate and the associated probability of a negative SCC
Roe-Baker LC18
LC18 + 15% LC18 + 30% Associated with each entry in the last three columns of Table 2 is the probability of a negative SCC. The SCC shown is the mean of a distribution and the probability measures the fraction of the distribution lying below zero. Under the LC18 + 15 case, even at a discount rate of 2.5% the probability of a negative SCC exceeds 0.45 out to 2050. At a 5% discount rate and a 30% gain in agricultural productivity, there is at least a 0.65 probability that SCC is negative out to 2050. If instead we use a 2.5% discount rate, the probability nonetheless remains over 0.5.
It is also noteworthy that once the ECS distribution is changed to that in Lewis and Curry (2018), the SCC estimates are, whether positive or negative, very small. As a public policy matter, after downscaling these estimates by the marginal cost of public funds (Sandmo 1975), the model's implication would be that the optimal (2018) ECS distribution (LC18); LC18 and 15% increase in CO 2 fertilization parameter (LC18 + 15%); LC18 and 30% increase in CO 2 fertilization parameter (LC18 + 30%) Table 3 Mean social cost of carbon in FUND model using Christy and McNider (2017) ECS distribution and original agricultural CO 2 fertilization parameter (Ag + 0% column), then with CO 2 fertilization parameter increased by 15% and 30% (Ag + 15%, Ag + 30%, respectively) Each entry shows the SCC estimate and the associated probability of a negative SCC emission tax would be so small as to be practically equivalent to business as usual, or even negative. Finally, in Table 3 we present the results for the 3% discount rate case using the ECS estimate derived from Christy and McNider (2017). It is notable that this ECS estimate is based on a different temperature data set than Lewis and Curry (2018), and focuses only on the last 40 years and on the lower troposphere where models project a somewhat stronger warming response than at the surface. The results are similar to those based on Lewis and Curry (2018) but are three or four dollars lower in each configuration.
Discussion: IAMs as if-then statements
IAMs cannot provide a single, canonical social cost of carbon. As Weyant (2017) notes, they are best thought of as elaborate "if-then" statements. Researchers must decide on their preferred premises, and the IAMs provide the implied SCC range. As shown herein, user judgment is unavoidable, and a researcher prescribing an SCC for policy purposes must be able to defend the "if" statements that give rise to it.
It is already well known that if the appropriate discount rate is 5% or higher, then the SCC will be relatively small compared to 2.5% or 3% cases. We do not propose to resolve herein the ethical arguments over time preference; instead, we note that once climate sensitivity is changed to an empirically constrained distribution, the choice of discount rate matters a lot less.
While some studies have considered ranges of ECS values, the IAM literature as a whole has been wedded to climate model-based distributions with modal values around 3 °C and thick upper tails extending above 6 °C. However, there is now a substantial climatological literature showing that distributions with modal values below 2 °C and small upper tails match historical (post-1850) data better. The debate over which distribution best describes the real climate system must ultimately be resolved within the climatology literature, but economists need to be aware that it exists and the outcome has significant ramifications for SCC estimates. If ECS values like those estimated in Lewis and Curry (2018) turn out to be approximately correct, then the FUND model indicates that CO 2 is for all practical purposes not a negative global externality through mid-century. Even if we consider possible catastrophic tipping points, the possibility of reaching such a threshold any time in the next 1000 years diminishes substantially.
IAM practitioners should therefore study the empirically constrained ECS estimates rather than relying exclusively on model-derived distributions. Kiehl (2007) noted the puzzle that climate models can differ in their implied ECS by a factor of 3 yet all fit the historical surface temperature record equally well. One of the compensating parameterizations emphasized by Lewis and Curry (2018) is aerosol cooling: a model with high ECS paired with strong aerosol cooling fits the surface trend as well as one with low ECS and weak aerosol cooling. The Lewis and Curry (2018) empirical ECS distribution is conditioned on the IPCC's updated estimates of observed historical aerosol forcing, lending it increased credibility. Specifically, the IPCC's preferred estimate of aerosol forcing (cooling) has declined over time, which leads to a lower preferred ECS estimate in empirical energy balance models. The methodology of Christy and McNider (2017) provides an independent and model-free check on this approach. Also, while climate models with high ECS values can be made to fit the surface warming trend, they have shown demonstrably excess warming elsewhere, especially in the troposphere over the tropics (Fu et al. 2011;McKitrick and Christy 2018). We therefore believe that the LC18 results in Table 2 are more credible than the ones conditioned on the Roe-Baker distribution.
Another if-then statement concerns CO 2 fertilization of agriculture. If adding CO 2 to the air has no effect on plant growth, then the assumption in DICE and PAGE that the effect is non-existent is appropriate. However, there is overwhelming evidence that CO 2 increases do have a beneficial effect on plant growth, so models that fail to take these benefits into account overstate the SCC. Indeed, the initial studies on which the FUND parameterizations were based cautioned against ignoring this line of benefit (Kane et al. 1992;Tsigas et al. 1997). The recent literature on global greening and the response of agricultural crops to enhanced CO 2 availability suggests that the productivity boost is likely stronger than that parameterized in FUND. If the effect is 30% stronger, and if the Lewis and Curry ECS distribution is valid, then the mean social cost of carbon is negative even at discount rates as low as 2.5% at least through mid-century. | 8,050 | 2020-01-18T00:00:00.000 | [
"Economics"
] |
Conditions DataHandling in the Multithreaded ATLAS Framework
In preparation for Run 3 of the LHC, the ATLAS experiment is migrating its offline software to use a multithreaded framework, which will allow multiple events to be processed simultaneously. This implies that the handling of non-event, time-dependent (conditions) data, such as calibrations and geometry, must also be extended to allow for multiple versions of such data to exist simultaneously. This has now been implemented as part of the new ATLAS framework. The detector geometry is included in this scheme by having sets of time-dependent displacements on top of a static base geometry.
Introduction
During Run 1 and Run 2 at the LHC, ATLAS [1] utilized a serial event processing framework called Athena [2] [3] and its multiprocess capable variant AthenaMP [4]. However it was determined that neither Athena nor AthenaMP would scale to the projected computational requirements or Run 3, and a new data flow driven, multithreaded implementation which enables concurrent processing of multiple independent events, was required. This framework has been called AthenaMT [5] [6].
Asynchronous or time varying data, also commonly referred to as conditions data, are data whose lifetime can be longer than one event. Some data may remain the same for multiple runs, while some may sometimes change as often as every event. Managing this sort of data in a concurrent framework poses many challenges beyond those necessary for a serial event processing environment.
Multiple versions of the data that are referenced by different time points may be in use at the same time when the framework processes concurrent events, and the framework must be able to correctly return the appropriate version to a client in an efficient manner when requested. As a processing job progresses, multiple versions of the conditions data are loaded, necessitating some form of garbage collection to reduce memory consumption. Adding the complexities of multithreading to the equation only increases the difficulty of the task. In this paper we present the solution that ATLAS has implemented to manage its time varying data as implemented in the AthenaMT framework.
Conditions Store
In Athena, the serial ATLAS framework, there is one instance of the event store, called StoreGate [7], and one instance of the conditions store, which is implemented as a special version of StoreGate. Algorithms, the main processing unit of the framework, access event data by means of smart references called DataHandles, where individual data objects are referenced via unique keys.
At the end of each event the event store is flushed, to prepare for the next event to be read in. It is unnecessary to do the same for the conditions store, as the data therein changes at points defined by its associated interval of validity (IOV), so the data only needs to be updated, usually being read from a database, when a new IOV is entered. Clients can register callback functions with a particular condition data object, which are triggered when a condition object is read in as it enters a new IOV. This is done to generate "derived" conditions data, which may in fact be dependent on multiple raw conditions objects. This derived data is also written to the conditions store.
This workflow for accessing conditions data fails when multiple events are processed concurrently. Since only a single instance of the conditions data can be held at any one time in the conditions store, if two events are processed concurrently, with associated conditions data from different IOVs, one will overwrite the other.
In AthenaMT, the concurrent, multithreaded ATLAS framework, there is one instance of StoreGate created for each concurrent event. When an Algorithm or other client needs access to an object in the event store, it does so via the use of a DataHandle, and an EventContext object, which contains information about the current event, such as an event and run number. The object identifier key encoded in the DataHandle, along with the EventContext object, is sufficient to identify the appropriate object in the correct instance of the event store associated with the current event that the Algorithm is processing.
While the same mechanism could be used for conditions data, i.e. creating separate instances of the conditions store for each concurrent event, it would be grossly inefficient, from both a memory and processing point of view, as much of the data would be identical between all stores, and the callback functions that generate derived data would have to be executed multiple times.
After investigating a number of different designs, with two key requirements of minimizing changes to client code, and minimizing memory usage, the implementation that ATLAS chose was a single instance of multi-cache condition store, shared amongst all concurrent events (see Figure 1). Instead of holding individual condition objects, the store holds containers of them, where the elements in each container correspond to individual IOVs. This is implemented as a ConcurrentRangeMap templated in the type of the contained condition object, and indexed by the IOV, which allows for efficient lookup with no locking, and locked writing with concurrent reading.
Clients access condition objects via smart references, with a similar idiom to DataHandles, called CondHandles, which implement logic to determine which element in any condition container is appropriate for a given event. The callback functions from serial Athena which are used to populate the derived conditions objects, are migrated to fully-fledged Condition Algorithms, that are managed by the framework like any other Algorithm, but only executed on demand when the conditions objects they create need to be updated. The Algorithm Scheduler, which executes Algorithms in an order determined by their data dependencies, is aware of the IOV associated with each condition object, and will only trigger the execution of the associated Condition Algorithm when a new IOV is entered.
While some conditions data are derived, and created by Conditions Algorithms which can perform extensive post-processing, some are merely read from a conditions database and placed directly into the conditions store. In order to facilitate this process, a special Algorithm called the CondInputLoader is configured with the list of database folders and keys from which the data is to be read. During initialization, this Algorithm uses special factory macros to automatically create the appropriate conditions containers in the conditions store. These containers are then automatically populated via appropriate database accesses on IOV boundaries when the CondInputLoader is executed by the Scheduler.
Condition Handles
One of the fundamental requirements for the client code needed for the migration to AthenaMT is that all access to event data must be done via DataHandles. DataHandles are declared as member variables of Algorithms, and provide two functions: to perform the recording (WriteHandles) and retrieval (ReadHandles) of event data, and to automatically declare the data dependencies of the Algorithms to the framework, so that the Algorithms can be executed by the Scheduler only after the data they require has become available. We capitalized on the migration to DataHandles by requiring that all access to conditions data be done via related CondHandles. By using CondHandles in the Condition Algorithms to write data to the conditions store, the framework solves the problem of Algorithm ordering for us, ensuring that the Condition Algorithm is executed, and the updated Condition Objects are written to the store before any downstream Algorithm which needs to use them (via a declared ReadCondHandle) are executed.
Upon initialization, Condition Algorithms register themselves and the WriteCondHandles that they will create conditions data in the conditions store with a special Conditions Service (see Figure 2). This makes an association between the WriteCondHandle and the Condition Algorithm that creates it, which the Scheduler needs to know in order to trigger the execution of the Algorithm at the appropriate time.
When a CondHandle is initialized during the initialization phase of its parent Algorithm, it will look in the conditions store for its associated container, identified by a unique key, and creating it if necessary. This container holds a set of objects of the same type and their associated IOVs.
At the start of the event, the Scheduler queries the Condition Service to analyze the subset of the objects held in the condition store that have been registered with it at the start of the job by the Condition Algorithms, and determines which are valid or invalid for the current event. If an object is found to be invalid, the Condition Algorithm that produces that object will be scheduled for execution. If an object is found to be valid, then the Scheduler knows it can be ignored. If all conditions objects associated with a Conditions Algorithm are valid for an event, then the Scheduler will not execute it.
When a Condition Algorithm is executed, it queries the conditions database for data corresponding to the current event, as well as its associated IOV, creates the new object for which it is responsible, and adds a new entry in the conditions container that is associated with the WriteCondHandle. By the time a downstream Algorithm that needs to access conditions data via a ReadCondHandle is executed by the Scheduler, the data is guaranteed to be present. The CondHandle uses the information in the current EventContext (such as event and run numbers, lumi-block number or nanosecond time stamp) to identify which element in the container is the appropriate one, and returns its value.
Detector Description and Geometry
The detector geometry model used in ATLAS (GeoModel), is a hierarchical tree that is built from several components (see Figure 3): a Physical Volume (PV) which are the basic building blocks; a Transform (TF) that is fixed at construction; and an Alignable Transform (ATF), which accounts for the movement of the detector component as a function of time, reading Deltas (D) from a database. When a client requests the position of a Detector Element, the Full Physical Volume (FPV) is assembled, and the position is cached (C). As the detector alignment changes, new Deltas are read in by the ATF, and the cache held by the FPV is invalidated, until the position of the element is again requested, recomputed, and cached.
When multiple concurrent events are processed, this design will fail, as there is only a single shared instance of the GeoModel tree, and the ATF and FPV can only keep track of single Delta or cache at any one time. We can solve this problem in the same way as for the conditions. The time dependent information (i.e. the Deltas and cache) held by the GeoModel is decoupled from the static entries, and held in a new AlignmentObject located inside the conditions store. The ATF and FPV use ConditionHandles to access this data, and they are updated by a new GeoAlignAlg which is scheduled on demand by the framework. Clients of the DetectorElements are entirely blind to this change, and the only code that needs to be modified are base classes inside the GeoModel structure.
Garbage Collection
Since the concurrent implementation of the conditions store holds containers of conditions data, and new elements are added to the containers as new IOVs are entered, the size of the store will grow as the job progresses. Depending on how long the job runs, the number of conditions data needed, and how many IOV boundaries are crossed, this can result in significantly more memory consumption than was used in the serial implementation. However, not all container elements are actually needed at any one time -only the ones that are referenced by events that are currently being processed. This means that significant memory savings can be had through judicious use of garbage collection.
One complication is that events are not necessarily guaranteed to be processed in the same order that they were taken. This means that if conditions data is aggressively pruned during a reprocessing run as soon as they are no longer in use, a subsequent event which is in fact from a previous instant in time, may require reloading the just deleted data, and triggering a sequence of execution of Condition Algorithms. This is unwelcome as it results in unnecessary database access, as well as extra processing. Instead a certain delay in the removal of old objects can be beneficial. This is implemented as follows (see Figure 4): whenever a new conditions object is created, the framework notes that conditions container should be examined for old conditions N delay events later. The actual garbage collection is performed from the event loop, at the start of each event. First, the IOV keys for the current event are saved in a ring buffer with N event entries. Each conditions container that was earlier scheduled for cleaning at this time is then examined. The earliest conditions objects in these containers that do not match any event currently being processed or the keys for any of the past N event saved in the ring buffer are then deleted. The parameters N delay and N event are preliminarily set to 100, but will be tuned based on further experience.
Migration Status
The framework and infrastructure components of the concurrent condition store handling in AthenaMT is feature complete, and is in the process of undergoing optimization. It supports both serial and concurrent processing environments.
The migration of clients to this environment has proved to be more challenging than initially anticipated. This is largely due to the construction of the callback functions which were used to update derived conditions in serial Athena. These were often implemented as components called AlgTools, which act as callable functions that can be shared between multiple parent Algorithms. They tended to do significant caching of event related data. Both of these implementation methodologies are not allowed in AthenaMT, as they result in thread and concurrent event unsafe behavior. Converting these into Conditions Algorithms has required re-writing significant amounts of the code, and redesigning interfaces.
In general, rewriting client code to read conditions data from the conditions store has been much more straightforward, as it usually only requires replacing direct access to the store with an equivalent ReadCondHandle. This aspect of the migration is largely complete.
Significant effort is now being directed to the client migration, and we hope to have the majority completed by the end of Q4 2018.
Conclusion
Designing a conditions handling mechanism that can efficiently manage multiple conditions data belonging to different concurrent events has been challenging. There is a constant trade off between processing and memory efficiency, and complexity. However, given the computational requirement and available resources for Run 3 at the LHC, it is essential to implement a design that minimizes the use of resources. We have done so with a shared multi-cache implementation that has a adjustable memory profile, where the aggressiveness of the garbage collection can be tuned to meet the specific job requirements.
The migration of client code has proved to be more time consuming than originally anticipated. This is due to the structure of the code which updates derived conditions, which was both thread and concurrent event processing hostile, and needed significant changes to operate in AthenaMT. The migration of clients is underway, and effort to do so has increased as the scale of the challenge has become apparent. | 3,521 | 2019-01-01T00:00:00.000 | [
"Physics",
"Computer Science"
] |
Flavor, CP and Metaplectic Modular Symmetries in Type IIB Chiral Flux Vacua
We examine symmetries of chiral four-dimensional vacua of Type IIB flux compactifications with vanishing superpotential $W=0$. We find that the ${\cal N}=1$ supersymmetric MSSM-like and Pati-Salam vacua possess enhanced discrete symmetries in the effective action below the mass scale of stabilized complex structure moduli and dilaton. Furthermore, a generation number of quarks/leptons is small on these vacua where the flavor, CP and metaplectic modular symmetries are described in the framework of eclectic flavor symmetry.
Introduction
The string theory predicts a huge number of low-energy effective field theories, the so-called string theory landscape.In particular, background fluxes in extra-dimensional spaces lead to the rich and attractive vacuum structure of the string landscape, which will be quantified by a statistical study [1][2][3] as well as the swampland program [4][5][6] 1 .It is known that the statistical study of Type IIB flux vacua is a powerful approach to address the vacuum distribution and selection rules on the moduli spaces.
In Type IIB flux compactifications on T 6 /(Z 2 × Z ′ 2 ) orientifolds, the distribution of complex structure moduli fields was known to be clustered at fixed points of SL(2, Z) modular symmetry of the torus [8,9], where the fixed points in the SL(2, Z) moduli space correspond to τ = i, ω, i∞ with ω = −1+ √ 3i 2 , each with enhanced symmetries.Remarkably, probabilities of moduli values are peaked at a Z 3 fixed point τ = ω, indicating that discrete Z 3 symmetry remains in the low-energy effective action of moduli fields [9].Such a novel feature about the distribution of flux vacua was explored in this simple toroidal orientifold but it is expected to appear in a more generic Calabi-Yau moduli space with symplectic modular symmetry.
In this paper, we further examine semi-realistic four-dimensional (4D) vacua with Standard Model (SM) spectra.Since a generation number of fermions is determined by background fluxes on magnetized D-branes, the generation number and three-form fluxes stabilizing moduli fields will be correlated through tadpole cancellation conditions of D-branes.It is interesting to reveal how much 3-generation models are distributed in the flux landscape.Furthermore, the flavor symmetries of quarks and leptons will also be related to the modular symmetry of the torus, because moduli-dependent Yukawa couplings transform under the moduli symmetry [10].For illustrative purpose, we deal with the simple T 6 /(Z 2 × Z ′ 2 ) orientifolds.By analyzing physically-distinct configurations of background fluxes leading to vanishing superpotential W = 0, we find that the generation number of quarks and leptons is restricted to be small due to the tadpole cancellation condition.Furthermore, flavor symmetry, CP, and modular symmetry in semi-realistic 4D vacua are uniformly described in the context of eclectic flavor symmetry [11,12] as developed in both the top-down and bottom-up approaches [11][12][13][14][15][16].
This paper is organized as follows.In Sec. 2, we first review the flux compactifications on T 6 /(Z 2 × Z ′ 2 ).Next, we incorporate specific magnetized D-brane models without and with a discrete B field in Secs.2.2 and 2.3, respectively.It turned out that the string landscape leads to the small generation number of quarks and leptons.In Sec. 3, we begin with the metaplectic modular symmetry in Sec.3.1, which can be realized in T 2 and T 2 /Z 2 orbifold with magnetic fluxes as discussed in Secs.3.2 and 3.3, respectively.The CP transformation will be unified in the context of generalized modular symmetry in Sec.3.4.Finally, we discuss the unification of flavor, CP, and modular symmetries in Type IIB chiral 4D flux vacua in Sec.3.5.Sec. 4 is devoted to the conclusion.
Moduli distributions in Type IIB flux vacua with SM spectra
In Sec.2.1, we first review the vacuum structure of Type IIB flux compactification on T 6 /(Z 2 × Z ′ 2 ) orientifolds.Next, we introduce semi-realistic magnetized D-brane models in Type IIB flux vacua, taking into account the tadpole cancellation conditions in Secs.2.2 and 2.3.It is found that the generation number of quarks and leptons is restricted to be small due to the tadpole cancellation condition.
It was known that the 4D kinetic terms of the closed string moduli, i.e., three complex structure moduli τ i , the axio-dilaton S and three Kähler moduli T i are derived from the following Kähler potential in units of the reduced Planck mass M Pl = 1: where V denotes the torus volume in units of the string length l s = 2π √ α ′ .The moduli superpotential is induced by background three-form fluxes in Type IIB string theory.Throughout this paper, we focused on the stabilization of complex structure moduli and axio-dilaton.Let us introduce the background Ramond-Ramond (RR) F 3 and Neveu-Schwarz three forms H 3 as follows: where {a 0,1,2,3 , b 0,1,2,3 , c 0,1,2,3 , d 0,1,2,3 } correspond to the integral flux quanta.They lead to the flux-induced superpotential in the 4D effective action [19]: In Ref. [20], the moduli stabilization was performed in the isotropic regime, namely with overall flux quanta: (2.13) The moduli vacuum expectation values (VEVs) are given by for the axio-dilaton and for the overall complex structure modulus, respectively.Here, we redefine the flux quanta Here, we focus on supersymmetric W = 0 minimum.To stabilize Kähler moduli, we will assume non-perturbative dilaton-dependent superpotential W ∼ e −aS to realize constant superpotential below the mass scale of axio-dilaton and complex structure moduli.For more details, see, Ref. [21].
Since the effective action is invariant under the SL(2, Z) τ ≡ SL(2, Z) 1 = SL(2, Z) 2 = SL(2, Z) 3 and SL(2, Z) S modular symmetries2 , one can count finite number of physicallydistinct flux vacua. 3Note that we have to be careful about the tadpole cancellation condition of D-brane charges because we deal with a compact manifold.In particular, we focus on the cancellation condition of the D3-brane charge, and other conditions will be analyzed in the next subsections.Specifically, the flux-induced D3-brane charge In general, it is difficult to stabilize all the moduli fields including twisted moduli localized at orbifold fixed points in addition to untwisted moduli we focused.If Type IIB orientifolds are uplifted to the F-theory in the strong coupling regime, N max flux = O(10 5 ) will be a largest value as discussed in Refs.[23,24].In our analysis, we adopt a phenomenological approach such that we simply ignore the concrete tadpole bound and explore the interplay between moduli stabilization and model building.This approach allows us to understand the vacuum structure of the string landscape more specifically, as will be shown later.Furthermore, each flux quantum is in multiple of 8, that is, {a 0 , a, b, b 0 , c 0 , c, d, d 0 } ∈ 8 Z, and correspondingly N flux ∈ 192 Z. Since the effective action, as well as the tadpole charge, are invariant under the modular symmetry, one can map the moduli VEVs into the fundamental domains.The number of stable vacua is shown in Figure 1, from which there is huge degeneracy at the fixed points in the SL(2, Z) τ moduli space.In particular, the τ = ω vacuum is realized by a high probability such as 62.3 % for N max flux = 192 × 10 and 40.3 % for N max flux = 192 × 1000 [9].It can also be justified in a statistical argument.By taking the flux quanta as the continuous one, the number of supersymmetric W = 0 vacua is analytically estimated as [8] Here, gcd(l, m, n) = 1 is adopted in the analysis of Ref. [8], but the results are the same with our results as pointed out in Ref. [9].Remarkably, τ = ω corresponding to (l, m, n) = (1, −1, 1) is invariant under the discrete Z 3 symmetry, generated by where S and T are generators of SL(2, Z) τ : with (ST ) 3 = 1.Thus, the effective action in Type IIB flux landscape enjoys the discrete Z 3 symmetry.However, it is unclear whether such a Z 3 symmetry still remains in the effective action with the SM spectra.In the next section, we will engineer the semi-realistic SM-like models on magnetized D-branes and discuss the role of discrete symmetry.
Distribution of g-generation models without discrete B field
In addition to O3-and O7-planes located at fixed loci, we construct semi-realistic models on N a stacks of magnetized D(3 + 2n)-branes wrapping 2n-cycles on T 6 /(Z 2 × Z ′ 2 ) orientifolds.
We turn on the background U (1) a gauge field strength F a on (T 2 ) i , where wrapping numbers of N a D(3 + 2n)-branes on (T 2 ) i are represented by integers m i a with non-vanishing 0, 1, 2 and, 3 values on D3-, D5-, D7-, D9-branes, respectively.Note that {n i a , m i a } for each a and i are assumed to be coprime numbers, and only the wrapping number m i a transforms as ΩR : m i a → −m i a under ΩR.For practical purposes, let us introduce the homology classes of each (T 2 ) i , that is, [0] i and [T 2 ] i for the class of point and of the two-torus with Then, the stack a of D-branes has an associated homology class: Similarly, the 64 O3-and 4 O7 i planes are expressed by RR charges -32 times the following homology classes: Remarkably, these gauge fluxes will lead to semi-realistic D-brane models, that is, gauge groups G SM × G ′ with chiral spectra.In particular, the index theorem tells us that the number of chiral zero-modes between two stacks a and b of D-branes on T 6 = (T 2 ) 1 × (T 2 ) 2 × (T 2 ) 3 is counted by However, some of the couplings of zero-modes are projected out by Z 2 × Z ′ 2 projection (2.1).Indeed, internal fermionic wavefunctions transform as where s i = sign(I i ab ) corresponds to the chirality on each torus and its product (s 1 s 2 s 3 ) corresponds to the 4D chirality.Thus, there exist Z 2 -even and -odd modes on each torus whose explicit form of zero-mode wavefunctions is shown later.Note that the two conditions should be consistent with each other; that is, from which allowed zero-modes are given by a specific combination of Z 2 -even modes (ψ i even ) and Z 2 -odd zero-modes (ψ i odd ) on (T 2 ) i : with i ̸ = j ̸ = k.Since the number of these Z 2 -even and -odd zero-modes is counted by [25] with f i = 1 for odd I i ab and f i = 2 for even I i ab , the total number of zero-modes is still described by Here, we assume I i ab ̸ = 0.If one of the indices is 0, e.g., I 3 ab = 0, the spectrum is not chiral, and the index is counted by [25] On N a stack of D-branes that does not lie on one of the O-planes, the mass spectra consist of U (N a /2) vector multiplets and three adjoint chiral multiplets (called a aa sector) 4 .On the other hand, when 2N a stack of D-branes lies on one of the O-planes, the mass spectra consist of U Sp(N a ) vector multiplets and three antisymmetric chiral multiplets, which we also call a aa sector.In addition, there are chiral multiplets that arise from intersections of two different stacks a and b of D-branes or the stack a and its orientifold image a ′ , as summarized in Table 1.
Sectors
Representations Multiplicities ab + ba ( a , b ) Table 1.Multiplicities of chiral zero-modes in each sector.
Since the magnetic fluxes induce the D3-and D7-brane charges, we have to be careful about their tadpole cancellation conditions: If there exists D9-branes with constant magnetic fluxes, they are mapped to anti D9-branes with the opposite magnetic fluxes under the orientifold involution.Thus, D9-brane tadpole charges are canceled.Similar things happen for D5-branes with constant magnetic fluxes as well.These conditions play a role of the cancellation of 4D chiral anomalies, but K-theory conditions require extra constraints.Indeed, probe D3 and D7-branes with U Sp(2) ≃ SU (2) gauge group suffer from a global gauge anomaly if the number of 4D fermions charged in the fundamental representation of SU (2) is odd [26].It imposes the following K-theory constraints [27]: Since magnetized D9-branes with negative n 1,2,3 will carry anti D3-and D7-brane charges, it will be possible to construct semi-realistic 3-generation models on the flux background (see, e.g., [28,29]).In these analyses, we have not introduced anti-D3 branes satisfying tadpole cancellation condition, but it would be possible to construct realistic models, taking into account the effect of anti-D3 brane annihilations with flux [30].Note that the N = 1 supersymmetry on the orientifold background will be preserved when the following condition is satisfied [31]: with where A i denote the area of the torus (T 2 ) i .
For concreteness, let us consider the local brane configurations with SM spectra as shown in Table 2 [28], leading to g generation of quarks and leptons Table 2. D-brane configurations leading to left-right symmetric Minimal Supersymmetric Standard Model (MSSM).The magnetic flux g determines the generations of quark and lepton chiral multiplets in the visible sector.
The supersymmetry condition (2.34) is satisfied when Furthermore, some of U (1)s become massive by absorbing axions associated with Ramond-Ramond fields through the Green-Schwarz mechanism.Indeed, the dimensional reduction of the Chern-Simons couplings in the D-brane action induces the corresponding 4D couplings: with To satisfy the tadpole cancellation conditions, we have also supposed the existence of magnetized D9-branes to satisfy the tadpole cancellation conditions.It means that the D3-brane charge induced by the magnetic flux on D9-branes Since there are several possibilities for the choice of magnetized D9-brane sectors,we freely change the value of Q hid D3 to reveal the mutual relation between the generation number g and the flux quanta N flux . 6In Fig. 2, we change the maximum value of Q hid D3 as |Q hid D3 | = 400, 1200, 2000, each which we analyze the distribution of flux vacua at fixed τ with respect to g.It turns out that the number of flux vacua increases when g is smaller.Thus, the small generation number is favored in the string landscape.Furthermore, when we restrict ourselves to three-generation models, that is, g = 3, left-right MSSM-like models are still peaked at the Z 3 fixed point τ = ω as shown in Fig. 3.This phenomenon is similar to the analysis of Sec.2.1, but the percentage of three-generation clustered regions differs from before.One can further study the Yukawa couplings derived in Type IIB magnetized D-brane models [10].In the current brane configuration, the Yukawa couplings of quarks and leptons are rank one, and the flavor structure is trivial due to the fact that the flavor structure is realized from two different tori.Thus, we move on to the other magnetized D-brane model, inducing the non-trivial flavor structure of quarks and leptons.
Distribution of g-generation models with discrete B field
In this section, we add a discrete value of Kalb-Ramond B-field along one of the two-tori [32], in particular, (T 2 ) 3 , corresponding to the twisted torus in the T-dual IIA string theory.Since B-field induces the half-integer flux, the magnetic flux on the third torus is modified as ñ3 a = n 3 a + 1 2 m 3 a .According to it, the tadpole cancellation conditions are given by [33] 7 D3 : (2.40) The cancellations of D5-and D9-brane charges are realized as mentioned below Eq. (2.32).
The other SUSY condition (2.34) and K-theory condition (2.33) are also written in terms of ñ3 a .For concreteness, let us consider the local brane configurations with SM spectra as shown in Table 3, leading to g generation of quarks and leptons I ab = I ca = g. 8The supersymmetry condition (2.34) is satisfied when In this model, there U (1)s in the gauge symmetry Table 3. D-brane configurations leading to Pati-Salam-like model.The magnetic flux g determines the generations of quark and chiral chiral multiplets in the visible sector, where ñ = n + m/2.
For the same reason as the analysis of the previous section, we allow several values of 7 When we consider a different tilted direction, the effective flux is given by m3 a = m 3 a + 1 2 n 3 a as discussed in the T-dual IIA side [34]. 8Similar brane configurations are discussed in T-dual Type IIA string theory, e.g., [35].
to reveal the mutual relation between the generation number g and the flux quanta N flux .In Fig. 4, we change the maximum value of Q hid D3 as |Q hid D3 | = 200, 400, 800, each which we analyze the distribution of flux vacua at fixed τ with respect to g.It turns out that the number of flux vacua also increases when g is smaller, although the behavior is different from the previous analysis.Thus, the string landscape leads to the small generation number.Furthermore, when we restrict ourselves to three-generation model, that is, g = 3, Pati-Salam models are still peaked at the Z 3 fixed point τ = ω in a similar to the analysis of Sec.2.1.In contrast to the previous models in Sec.2.2, the Yukawa couplings of quarks and leptons are rank 3 and the flavor structure is non-trivial due to the fact that the flavor structure is originated from one of tori.We will discuss the relation between flavor symmetries and modular symmetries in the next section.
Eclectic Flavor Symmetry in Type IIB flux vacua
So far, we have studied distribution of moduli fields and remaining modular symmetry in the low-energy effective action.In this section, we discuss a flavor and CP symmetries of degenerate chiral zero-modes on D-branes and its relation to modular symmetry.In Secs.3.2 and 3.3, we show that the metaplectic modular symmetry introduced in Sec.3.1 is useful to describe the matter wavefunctions and Yukawa couplings on T 2 and T 2 /Z 2 with magnetic fluxes in an uniform way.Remarkably, the CP symmetry can be regarded as an outer automorphism of the modular symmetry as discussed in Sec.3.4.These 6D bottom-up models can be embedded in 10D Type IIB magnetized D-brane models with stabilized moduli.In Sec.3.5, we discuss the metaplectic modular flavor symmetries together with traditional flavor and CP symmetries in the framework of eclectic symmetry.
Metaplectic modular symmetry
Since the Yukawa couplings of quarks and leptons are described by a half-integer modular form, they are formulated in the context of metaplectic group M p(2, Z).Following Ref. [36], let us briefly review the notion of M p(2, Z) which is a twofold covering group of SL(2, Z).
Let us rewrite SL(2, Z), its quotient group and metaplectic group by respectively.Note that the complex structure moduli space of the torus τ is governed by Γ due to the fact that τ is invariant under S 2 .By introducing the principal congruence subgroups: with v(γ) = c d being the Kronecker symbol, one can define the finite modular groups: where Γ 2,3,4,5 correspond to S 3 , A 4 , S 4 , A 5 discrete groups, respectively. 9In addition, the finite metaplectic modular groups are given by where the generators satisfy10 and additional relations are required to ensure the finiteness for N > 1, e.g., for Γ4N=8 of order 768 ([768, 1085324] in GAP system [37]), for Γ4N=12 of order 2304, respectively.Under the finite modular groups, modular forms of the modular weight k/2 and level 4N transform as where ρ r (γ) α β denotes an irreducible representation matrix in Γ4N .
T 2 with magnetic fluxes
As discussed in Secs.2.2 and 2.3, the magnetic fluxes generate the semi-realistic MSSMlike models with 3 generations of quarks and leptons.In the following, we address the flavor structure of chiral zero modes with an emphasis on the transformations of these wavefunctions under the modular symmetry.It was known that the magnetic fluxes on extra-dimensional spaces induce the degenerate chiral zero-modes, which are counted by the index theorem.
For concreteness, let us begin with the six-dimensional (6D) Super Yang-Mills theory on T 2 .The Kaluza-Klein reduction of 6D Majorana-Weyl spinor λ is given by with ψ n (z) denotes the n-th excited mode of two-dimensional (2D) Weyl spinors on T 2 .In particular, we focus on zero-mode wavefunctions ψ(z):11 where ψ + and ψ − denote the positive and negative chirality modes on T 2 .The U (1) magnetic flux is given by obtained by the corresponding vector potential with ζ being a Wilson line phase.Note that the boundary conditions of the gauge field as well as the 2D Weyl spinors are respectively chosen as with Here, the normalization of the wavefunctions is fixed as12 The Yukawa couplings of chiral zero-modes are obtained by integrals of three wavefunctions: Remarkably, these wavefunctions show non-trivial transformations under the modular symmetry [10,[38][39][40][41][42][43][44].Indeed, when |M | = even, under S and T transformations of the modular symmetry: the zero-mode wavefunctions respectively transform indicating the wavefunctions with the modular weight 1/2. 13Note that the Wilson line with α, β = 0, 1, ..., 2|M ′ | − 1 and Indeed, the representation matrix ρ( γ) is unitary and satisfies It was known that the boundary conditions of the fermions in Eq. (3.20) and the T transformation are consistent with each other only if M is even.The S transformation is consistent with the boundary conditions.However, the existence of Wilson line modifies the boundary condition as well as the modular transformation [43].Taking into account the modular transformation of the Wilson line ζ in the case of M = odd14 , the wavefunction transforms under the T transformation: with Note that the authors of Ref. [44] proposed that this expression holds for vanishing Wilson lines even in the case of odd units of magnetic flux M .Recall that the exponential factor can be canceled in the Yukawa coupling due to the U (1) gauge invariance as argued in Ref. [44].Thus, T -transformed wavefunction with odd M cannot be expanded in terms of the original wavefunction, but it will be possible to be written in the different coordinate z + 1/2.Since this statement is also true for even units of M , we adopt the T transformation of wavefunction is described by Eq. (3.33) for a general M .The S transformation is still given by Eq. (3.29) with odd units of M .
The modular transformations also act on the 4D fields.When the 4D N = 1 SUSY is preserved, the 4D Lagrangian is written in terms of Kähler potential and superpotential.The Kähler potential and the superpotential of matter fields are derived from the dimensional reduction of 6D Super Yang-Mills theory: where {I ab , I ca , I cb } denote the generation number counted by the index theorem, corresponding to one of the torus in Eq. (2.25).It satisfies I ab + I bc + I ca = 0 to preserve the U (1) gauge symmetry.Then, the modular transformations of matter superfield are given by where explicit forms of ρ(γ) are given in Eqs.(3.29) and (3.34) by replacing M with I ab .In addition, it was known that the holomorphic Yukawa couplings (3.25) are also described by Jacobi theta function [10]: with σ abc = sign(I ab I bc I ca ), where ζa = n a ζ a /m a denotes the redefined Wilson lines and we omit the 6D gauge coupling in the above expression.Since the Yukawa couplings are described by the half-integer modular form, the Yukawa couplings belong to r representation of Γ4N whose transformation is of the form15 : Recalling the condition I ab + I bc + I ca = 0, the Yukawa terms are invariant under the following U (1) symmetry: Φ α,I ab → e iqαI ab Φ α,I ab , (3.40) with q being the U (1) charge of Φ α,I ab .Thus, we redefine the S transformation of matter fields following Ref.[44]: Although we add e 3iπI ab /4 in the S transformation, it is still the unitary representation matrix.Such a redefinition will be convenient to discuss the metaplectic modular symmetry as will be shown later.In this way, the T 2 compactifications with magnetic background fluxes lead to the metaplectic modular flavor symmetries.Before going into details about the relation between the metaplectic modular flavor symmetries, the traditional flavor and CP symmetries, we will discuss the metaplectic modular symmetry on T 2 /Z 2 background.
T 2 /Z 2 with magnetic fluxes
On the T 2 /Z 2 orbifold, the wavefunctions of Z 2 -even and -odd modes are given by the linear combination of these on T 2 as mentioned in Sec.2.2.The explicit forms are given by [47] with . (3.43) The modular transformations are extracted from the matter wavefunctions on T 2 (3.34) and (3.41): for Z 2 -even mode with α, β = 0, 1, ..., I even and for Z 2 -odd mode with α, β = 0, 1, ..., I odd .Here, I even and I odd are defined in Eq. (2.29).In contrast to the analysis of Ref. [41], we added extra phase factors as argued in the previous section.Since Yukawa couplings on T 2 /Z 2 are described by those on T 2 [48]: with where m = 0 and m = 1 respectively correspond to Z 2 -even and -odd modes, they transform under the metaplectic modular symmetry as in the matter fields.
We find that the unitary matrices (3.45) and (3.47) obey the required relations in the metaplectic modular symmetry (3.12).Specifically, for even |M | and odd |M |, modular transformations of both the Z 2 -even and -odd modes are described by Γ2|M| and Γ4|M| , respectively, We checked the additional relations (3.13) and (3.14) for Γ8 and Γ12 , respectively.However, we have not checked these additional relations with Γ4N (N ≥ 4) which will be reported elsewhere.In the next section, we derive such 6D bottom-up models from 10D Type IIB magnetized D-brane models with stabilized moduli.In particular, we discuss the metaplectic modular flavor symmetries together with traditional flavor and CP symmetries in the framework of eclectic symmetry.
Generalized CP
We first discuss the unification of the metaplectic modular symmetry and 4D CP symmetry, which was discussed in T 2 background [41].It was known that the 4D CP and 6D orientation reversing are embedded into the 10D proper Lorentz symmetry [49,50]. 16Since the 6D orientation reversing is realized by z i → −z i for the coordinates of (T 2 ) i , the torus modulus transform under the CP symmetry τ i → −τ i .Note that such a transformation leads to the negative determinant in the transformation of 6D space.In the following, we focus on the CP transformation on T 2 and T 2 /Z 2 .
The multiplication law in the context of metaplectic modular symmetry is defined as When we redefine φ(γ * , τ ) = ±(cτ + d) 1/2 =: ϵJ 1/2 (γ * , τ ) with ϵ = ±1, the above law is written by where the two-cocyle Here, we define and introduce the Hilbert symbol: Since the CP transformation of matter wavefunctions on T 2 is given by corresponding to the basis of canonical CP transformation 18 , it allows us to define the CP transformation in the framework of metaplectic modular symmetry: Thus, Eq. (3.59) is rewritten as Remarkably, the CP transformation does not commute with the metaplectic modular transformations.When we consider the following chain: From the results of Sec.2.3, it turned out that the string landscape leads to a few number of generation of quarks and lepton.In the following analysis, we thus focus on matter wavefunctions on (T 2 ) 1 with vanishing Wilson lines whose explicit forms are given in Eq.
(3.42) with with M = I ab , I bc , I ca .Note that the orbifold projections split the wavefunctions to Z 2 -even and Z 2 -odd modes.For illustrative purposes, we focus on traditional flavor symmetries on three Z 2 -even modes with g = 4.19 • g = 4 The traditional flavor symmetries are described by whose generators {Z ′ , P, C, Z} are of the form [55]: for Z 2 -even mode and for Z 2 -odd mode.Note that such flavor symmetries do not change Yukawa couplings and only act on three generations of quarks and leptons.
In this case, the wavefunctions on T 2 /Z 2 enjoy the modular flavor group Γ 8 and the explicit representations are given in Eq. (3.68).In particular, the flavor generators do not commute with those of modular flavor symmetries G modular = Γ 8 .Indeed, we find that S even C even S −1 even = Z even , S even Z even S −1 even = C even , T even C even T −1 even = C even Z even (Z ′ even ) 2 , T even Z even T −1 even = Z even ,
Conclusions
In this paper, we have examined the vacuum structure of Type IIB flux vacua with SM spectra.The background fluxes play an important role in stabilizing moduli fields and determining the generation number of chiral zero-modes.Since the background fluxes are constrained by the tadpole cancellation conditions, the moduli distribution and the generation number are mutually related with each other.By studying the T 6 /(Z 2 ×Z ′ 2 ) orientifolds with magnetized D-brane models in Secs.2.2 and 2.3, it is found that the string landscape leads to the small generation number of quarks and leptons.Furthermore, the moduli values are peaked at Z 3 fixed point in the complex structure moduli space.It motivates us to study whether such a discrete symmetry is related to the flavor and/or CP symmetries in the low-energy effective action.
To investigate the relation between the modular symmetry of the torus and the flavor symmetries of quarks and leptons, we have focused on the concrete magnetized D-brane model of Sec.2.3.Since the wavefunctions of chiral zero-modes and the corresponding Yukawa couplings are written by Jacobi theta function with the modular weight 1/2, they are described in the framework of metaplectic modular flavor symmetry.Note that the flavor structure of quarks and leptons is originated from one of tori.We found that the modular transformations of both the Z 2 -even and -odd modes are described by Γ2|M| and Γ4|M| for the magnetic flux with even |M | and odd |M |, respectively, Furthermore, the CP symmetry can be regarded as the outer automorphism of the metaplectic modular group.For illustrative purposes, we focus on the M = 4 case, where three Z 2 -even modes transform under a certain traditional flavor symmetry.We found that the traditional flavor, modular flavor and CP symmetries in Type IIB chiral flux vacua are uniformly described in the context of eclectic flavor symmetry: (G flavor ⋊ G modular ) ⋊ G CP as discussed in the heterotic orbifolds [11,12].It would be interesting to explore the realization of eclectic flavor symmetry on other corners of string models.Furthermore, we have stabilized the moduli fields in the framework of flux compactifications.Although the moduli vacuum expectation values are distributed around Z 3 fixed point 20 , a part of eclectic flavor symmetry (G flavor ⋊ G modular ) ⋊ G CP still remains in the low-energy effective action.Since the coefficient of 4D higher-dimensional operators will be described by the product of modular forms with half-integer modular weights, the eclectic flavor symmetry would control the flavor structure of higher-dimensional operators.We leave a pursue of these interesting topics for future work.
Figure 1 .
Figure 1.The numbers of stable flux vacua on the fundamental domain of τ for N max flux = 192 × 10 in the left panel and for N max flux = 192 × 1000 in the right panel, respectively [9].
Figure 2 .Figure 3 .
Figure2.The numbers of models as a function of the generation number g at τ = i and τ = ω, respectively.Note that there exists Z 2 symmetry at τ = i generated by {1, S}.Here, the vertical axis represents the ratio of the number of models to the total number of models.There are three plots in each panel, and each of them corresponds to the maximum value of the D3-brane charge |Q hid D3 | = 400, 1200, 2000.
absorb axions through the Green-Schwarz couplings (2.37).The remaining gauge symmetry is described by SU (4) C × SU (2) L × SU (2) R .Furthermore, the Pati-Salam gauge symmetry can be broken to MSSM gauge group by the splitting of a and c stack of D-branes, but we leave the detailed study of open string moduli for future work.
Figure 4 .Figure 5 .
Figure 4.The numbers of models as a function of the generation number g at τ = i and τ = ω, respectively.Here, the vertical axis represents the ratio of the number of models to the total number of models.There are three plots in each panel, and each of them corresponds to the maximum value of the D3-brane charge |Q hid D3 | = 200, 400, 800.
. 21 )
By solving the Dirac equation for the massless mode with U (1) charge q = 1, we find |M | degenerate zero-mode solutions; ψ + (z) for M > 0 and ψ − (z) for M < 0. Specifically, |M | degenerate zero-mode wavefunctions are written in terms of Jacobi theta function ϑ and the torus area A [10]: τ ) := ℓ∈Z e πi(a+ℓ) 2 τ e 2πi(a+ℓ)(ν+b) .(3.23) ) and the other generators of G flavor commute with G modular .It means that the modular transformation is regarded as an automorphism of the traditional flavor group.Furthermore, we can construct the outer automorphism u CP : G CP → Aut(G flavor ⋊ G modular ).Indeed, the following relations can be verified in the semi-direct product group: modular flavor and CP symmetries are treated in a uniform manner, the traditional flavor, modular flavor and CP symmetries are described by(G flavor ⋊ G modular ) ⋊ G CP ,(3.78)as discussed in heterotic orbifold models.From the analysis of Sec. 2, the moduli fields can be stabilized in flux compactifications, in particular, Z 3 fixed point.It leads to Z 3 modular symmetry generated by {1, ST, (ST ) 2 }.It turned out that such a Z 3 symmetry still enhances the flavor symmetry due to the relation:( S T )C even ( S T ) −1 = C even Z even (Z ′ even ) 2 , ( S T )Z even ( S T ) −1 = Z even .(3.79)Thus, the discrete non-abelian symmetry (G flavor ⋊ Z 3 ) ⋊ G CP remains in the lowenergy action.So far, we have focused on specific magnetized D-brane models with stabilized moduli, but it is quite interesting to explore other flavor models, which left for future work. | 8,169 | 2023-05-30T00:00:00.000 | [
"Physics"
] |
Spectroscopic Line Modeling of the Fastest Rotating O-type Stars
We present a spectroscopic analysis of the most rapidly rotating stars currently known, VFTS 102 ($v_{e} \sin i = 649 \pm 52$ km s$^{-1}$; O9: Vnnne+) and VFTS 285 ($v_{e} \sin i = 610 \pm 41$ km s$^{-1}$; O7.5: Vnnn), both members of the 30 Dor complex in the Large Magellanic Cloud. This study is based on high resolution ultraviolet spectra from HST/COS and optical spectra from VLT X-shooter plus archival VLT GIRAFFE spectra. We utilize numerical simulations of their photospheres, rotationally distorted shape, and gravity darkening to calculate model spectral line profiles and predicted monochromatic absolute fluxes. We use a guided grid search to investigate parameters that yield best fits for the observed features and fluxes. These fits produce estimates of the physical parameters for these stars (plus a Galactic counterpart, $\zeta$ Oph) including the equatorial rotational velocity, inclination, radius, mass, gravity, temperature, and reddening. We find that both stars appear to be radial velocity constant. VFTS 102 is rotating at critical velocity, has a modest He enrichment, and appears to share the motion of the nearby OB association LH 99. These properties suggest that the star was spun up through a close binary merger. VFTS 285 is rotating at $95\%$ of critical velocity, has a strong He enrichment, and is moving away from the R136 cluster at the center of 30 Dor. It is mostly likely a runaway star ejected by a supernova explosion that released the components of the natal binary system.
INTRODUCTION
We now understand that the lives of massive stars depend critically on both stellar mass and rotation, and the evolutionary paths and surface abundances of rapidly rotating stars are radically different from those of slow rotators.Ekström et al. (2008), Brott et al. (2011), Georgy et al. (2013), Groh et al. (2019), Murphy et al. (2021), Eggenberger et al. (2021), and others have presented grids of evolutionary tracks for massive stars of varying mass, rotation rate, and abundance.They generally find that massive rapidly rotating stars (equatorial velocities greater than ≈ 500 km s −1 ) become brighter and hotter through their H-core burning lifetime, rather than the usual stellar cooling associated with evolution towards the red supergiant branch.This behavior is due to the extreme rotationally-induced mixing that occurs in the interiors of rapidly rotating stars which transports hydrogen fuel into the core and brings processed helium towards the surface.The result of this homogeneous evolution through mixing is that the star will continue to move up the main sequence until the entirety of the internal hydrogen supply is depleted.Evidence for this form of evolution is an observed enhancement of the helium and nitrogen abundances, which are indicators of the CNO nuclear burning process actively occurring in the core of the star (Roy et al. 2020).
How massive stars attain such fast rotation rates is a subject of considerable debate.Stars might be born with an inherently large angular momentum while others may experience a spin up through interactions in close binary systems, a common occurrence among the massive star population.de Mink et al. (2013Mink et al. ( , 2014) ) argue that many main sequence stars were spun up through processes that transform binary orbital angular momentum into the spin angular momentum of the components.Very close binary systems may begin interacting during the components' core H-burning stage, and depending on the circumstances, may ultimately merge during a common envelope event.The merger product may appear as a rejuvenated, rapidly spinning single star.Binary systems with larger separations may instead interact at a later evolutionary stage in which steady mass transfer can lead to the stripping of the mass donor and the spin up of the mass gainer (Wellstein et al. 2001).
Investigating the origins of massive rapid rotators requires careful analysis of their spectra including accounting for the physical changes in stellar properties with rotation.The primary measurement from the Doppler broadening of the spectral lines is the projected equatorial velocity, v e sin i.The inclination i can be directly measured for nearby stars through long baseline interferometry (Che et al. 2011), but otherwise we must rely on subtle changes in the predicted spectral line shapes with inclination in order to determine the equatorial velocity v e from v e sin i.This requires the use of a spectrum synthesis code that performs a numerical integration of the predicted flux emanating from the visible hemisphere of a rotationally distorted star.If we divide the surface of the star into a grid, each surface element contributes a spectral flux increment that is the product of its projected area and the Doppler-shifted specific intensity I λ which is a function of the local effective temperature, surface gravity, atmospheric abundance, and the cosine of the angle between the line of sight and the surface normal.The summation of all the flux spectra increments yields a model line profile that can be directly compared with observations.At high rotation rates, the equatorial radius of the star grows and the polar radius decreases.The result is a systematic temperature variation from the hotter pole to the cooler equator that is known as gravity darkening.The apparent brightness of a star experiencing gravity darkening will depend on the orientation of the star relative to the observer's line of sight.If the star is oriented closer to pole-on, i = 0 • , then the star will be brighter overall and the measured v e sin i will be small.If the star is oriented more equator-on, i = 90 • , the overall brightness will be darker and the measured projected rotational velocity larger.However, because the equatorial zone contributes to the largest Doppler shifts but with relatively less flux, the true rotational velocity can be underestimated unless the gravity darkening is modeled accurately (Townsend et al. 2004).The traditional approach relies on the von Zeipel law (von Zeipel 1924) in which the local temperature varies with colatitude θ as a power law of the local effective gravity, T eff (θ) ∝ g β eff (θ), where β = 0.25 for stars with radiative envelopes.More recent work by Espinosa Lara & Rieutord (2011Rieutord ( , 2013) ) demonstrates the importance of dealing with the interior structure of rotating stars in defining the surface temperature variation.They present an ω-model as an analytical approximation of the results from detailed numerical models.This ω-model predicts a smaller difference between the polar and equatorial temperatures than does the von Zeipel law.
A spectrum synthesis analysis based upon the von Zeipel law for gravity darkening was made by Howarth & Smith (2001), who investigated three stars within our galaxy that, at the time, were the most rapidly rotating stars known with v e ≈ 430 km s −1 and Ω/Ω c ≈ 0.9: HD 93521 (O9.5:V), HD 149757 (ζ Oph, O9.5: V), and HD 191423 (ON9: III n).
Here Ω is the angular velocity at the stellar equator and Ω c is the critical angular velocity, or the Keplerian angular velocity in the Roche model (with the equatorial radius equal to 1.5× the polar radius; Rieutord 2016).They utilized a grid of hydrostatic, plane-parallel, H and He, non-LTE model atmospheres generated by the code TLUSTY to create specific intensity spectra for flux integration.Howarth & Smith (2001) found that all three stars have an atmospheric He abundance that is about twice the solar value, providing strong evidence of rotationally-induced internal mixing.More recently, the record holder for the fastest rotating massive star in the Galaxy was passed to the star LAMOST J040643.69+542347.8 (O6.5:Vnnn(f)p).Li (2020) discovered that this is a runaway star with a projected rotational velocity of v e sin i = 540 km s −1 .This will be a key object for future high resolution spectroscopy and spectrum synthesis analysis to determine its true equatorial velocity.
The fastest rotating stars known today were discovered in the massive star forming region of 30 Doradus in the Large Magellanic Cloud (LMC).The VLT-FLAMES Tarantula Survey (VFTS; Evans et al. 2011) is a large spectroscopic survey of over 800 massive stars in this region that has led to numerous investigations of stellar properties.Ramírez-Agudelo et al. (2013) published a plot showing the distribution of projected rotational velocities among O-stars in the 30 Dor region (see their Fig.11).Their histogram of v e sin i shows a general decline with increasing rotational velocity that reaches zero near v e sin i = 520 km s −1 .However, in several velocity bins beyond this, they find two extremely rapid rotators, the stars VFTS 102 and VFTS 285, with estimated projected velocities of 610 and 609 km s −1 , respectively.These two stars are the subject of this paper.Dufton et al. (2011) were the first to point out the extraordinary nature of VFTS 102 (O9: Vnnne+; Walborn et al. 2014) and to discuss its possible origin.They measured the widths and radial velocities of its very broadened and shallow He absorption lines (see their Fig. 1) and estimated its physical properties.Their derived radial velocity is lower than that of other neighboring massive stars, and as a result they suggested that VFTS 102 is a runaway star.Furthermore, they showed that a nearby pulsar PSR J0537-6910 displays an X-ray emitting bow-shock that points back to the general direction of VFTS 102.This led them to suggest that VFTS 102 is the survivor of a supernova explosion in a binary system that led to the ejection of the pulsar.
The spectrum of the second rapid rotator, VFTS 285 (O7.5:Vnnn), was first described by Walborn et al. (2012Walborn et al. ( , 2014)).Walborn et al. (2014) show (in their Fig. 6) the blue portion of the spectrum of VFTS 285 in relation to other rapidly rotating O-stars from the VFTS survey.The He lines of VFTS 285 are so astoundingly wide and shallow compared to those of the other spectra, that the authors appended an exclamation mark to its labelled spectral classification to highlight its remarkable nature.The star's basic properties were estimated by Sabín-Sanjulián et al. (2017), and its astrometric motion suggests that it is a runaway from the central R136 cluster (Platais et al. 2018).Shepard et al. (2020) described the first ultraviolet spectra of VFTS 102 and VFTS 285 that were obtained with the Cosmic Origins Spectrograph on the Hubble Space Telescope (discussed further in this work).They found that the hotter star, VFTS 285, has a two component stellar wind.The N V λλ1238, 1242 doublet shows a fast, sparse outflow associated with the hotter polar regions, while the Si IV λλ1393, 1402 lines show a slower, but dense outflow associated with the cooler, equatorial zone.They found no P Cygni wind features in the UV spectrum of VFTS 102, but they confirmed the existence of a circumstellar disk which is indicated by the double-peaked emission of the H Balmer lines (especially Hα) and the Paschen series.
Here we investigate the UV and optical spectra ( §2) of VFTS 102 and VFTS 285 to determine their rotational properties and other physical parameters.We first present new radial velocity measurements ( §3) that indicate that both stars are radial velocity constant and probably single.We then describe our spectrum synthesis code ( §4) that we use to calculate model spectra for sixteen lines of interest.These models are compared to observed spectral profiles ( §5) in order to derive the rotational velocities and other parameters.We compare our results to models of single star and binary star evolution ( §6) to explore the possible origins of these extreme stars.Our conclusions are summarized in §7.
OBSERVATIONS
Our sample of observations consists of both far-ultraviolet (FUV) and optical spectra for VFTS 102, VFTS 285, and a Galactic counterpart, ζ Oph.We include an analysis of the spectra of ζ Oph as a check on our methods in comparison to the corresponding work by Howarth & Smith (2001) and as Galactic comparison benchmark for considering the results for the two LMC stars.
FUV
We obtained high resolution spectra of VFTS 102 and VFTS 285 with the Cosmic Origins Spectrograph (COS) on board the Hubble Space Telescope (HST).Comparable spectra of ζ Oph were collected from the archive of the International Ultraviolet Explorer (IUE).
HST/COS is a high dispersion spectrograph designed to record the FUV spectra of faint point sources (Green et al. 2012;Fischer 2019).The observations reported here were obtained during Cycle 23 as a part of the program GO-14246.The observations of VFTS 102 were made over a series of three orbits on 2017 January 1, while the observations of VFTS 285 were obtained during one orbit on 2016 April 10.These FUV spectra were all obtained using the G130M grating in order to record the spectrum over the range from 1150 to 1450 Å with a spectral resolving power of R = λ/ λ = 18000.The two detectors on COS are separated by a small gap, therefore the central wavelength was varied slightly between observations (1300, 1309, and 1318 Å) for VFTS 102 in order to fill in the missing flux.In each of these settings, four sub-exposures were obtained at four FP-POS, or focal plane offset positions, in order to avoid fixed-pattern problems.The VFTS 285 spectra were made using the same method except only two central wavelength positions were selected, 1300 and 1318 Å, due to orbital time restrictions.The spectrograph parameters are summarized in Table 1.The HST/COS observations were processed using the standard COS pipeline, merged onto a single barycentric wavelength grid, and transformed onto a uniform wavelength grid.The resulting spectra have a signal-to-noise ratio of S/N = 5 per pixel in the central, best exposed regions.For additional information on this procedure see Shepard et al. (2020).The FUV spectra for all three stars are illustrated in Figure 1.The primary components are strong Lyα absorption (interstellar), the N V and Si IV wind lines, numerous sharp interstellar lines, and shallow blends of photospheric lines.
The IUE instrumentation suite consisted of two UV spectrographs, two apertures, two dispersion modes, and four cameras (Boggess et al. 1978).Our spectra were obtained using the Short Wavelength Prime camera in the high resolution dispersion mode (Table 1).For the purposes of this work, the spectra were flux normalized to unity in the relatively line-free regions, transformed to a log λ wavelength grid, and co-added to form one spectrum with a high S/N ratio (Fig. 1).
Optical
Our sample of optical spectra of VFTS 102 and VFTS 285 consists of new medium resolution spectra from the Xshooter spectrograph on the Very Large Telescope (VLT) plus archival spectra from the VLT Fibre Large Array Multi Element Spectrograph (FLAMES) instrument used with the GIRAFFE spectrograph.The optical spectra of ζ Oph were collected from the archive of the ESPaDOnS spectrograph mounted on the Canada-France-Hawaii Telescope.The spectrograph properties are given in Table 1, and the averaged spectra appear in Figure 2.
The X-shooter instrument was designed to record the spectra of a wide variety of astronomical objects, ranging from nearby faint point sources to bright extragalactic sources (Vernet et al. 2011).The observations for VFTS 102 were acquired under program 092.D-0108(A) (Przybilla), and additional spectra were obtained for both VFTS 102 and VFTS 285 under program 098.D-0375(A) (Gies).The X-shooter instrument records spectra in three arms corresponding to increasing wavelength bands labelled as UVB, VIS, and NIR (see Table 1).Collectively, these spectra cover a range from 3350 to 20800 Å.The spectra were reduced by the standard pipeline, normalized to unity at the continuum, and then co-added with weighting factors determined by the S/N ratio.
The GIRAFFE spectrograph is a medium-to-high resolution spectrograph that was designed to record the optical spectra of high spatial density galactic and extragalactic objects (Pasquini et al. 2002).Spectral observations for both VFTS 102 and VFTS 285 were obtained as part of the VLT-FLAMES Tarantula Survey of massive stars in the 30 Doradus region in the LMC under programs 182.D-0222(A), (B), and (C).The observations were made using the Medusa fibers on the GIRAFFE spectrograph, which allow for up to 132 objects to be observed at once.The fibers have an entrance aperture equal to 1.2 arcsec on the sky.The spectra utilized in this work cover a range from 3958 to 6820 Å and are labelled as UV, VIS, and NIR (corresponding to different bands than those with similar names for X-shooter; see Table 1).The reduced spectra were collected from the ESO Science Archive1 , rectified to unity at the continuum, and co-added to form one high S/N ratio spectrum.In the final step, the X-shooter and GIRAFFE spectra were co-added where possible on a uniform log λ grid for optimum S/N .These coadded spectra were the focus of this study, and samples of the line profiles are presented in Figures 10 and 11 below.We did not reduce the effective resolving power of the X-shooter spectra to match that of the GIRAFFE spectra because the rotational broadening of the stellar features far exceeds that of the instrumental broadening, and as a result the line profiles appeared identical in both the X-shooter and GIRAFFE average spectra.
ESPaDOnS is a high resolution spectrograph and spectropolarimeter that was designed to record the optical spectrum with a resolving power of R = λ/ λ = 68000 (Donati 2003).The ESPaDOnS spectra of ζ Oph were obtained from the archive at the PolarBase website2 (Petit et al. 2014) as a part of the Magnetism in Massive Stars (MiMeS) survey that collected data from 2005 to 2013 for 560 O-and B-type stars (Wade et al. 2015(Wade et al. , 2016)).The reduced spectra were rectified, and co-added on a uniform log λ wavelength grid.
Spectral Energy Distributions
The rotation code described in §4 calculates both the line profiles and the absolute monochromatic flux in the nearby continuum regions.The absolute flux predictions taken together with the distance and interstellar extinction can be compared to the observed spectral energy distribution (SED) to determine the stellar radius.Figures 3, 4, and 5 show the observed SEDs for ζ Oph, VFTS 102, and VFTS 285, respectively, together with the model estimates for 16 wavelengths ( §4).The observed ultraviolet fluxes for comparison with the models were collected at the two FUV wavelengths from archival, high dispersion IUE spectra for ζ Oph and from the HST/COS spectra for VFTS 102 and Burnashev (1985), and those for VFTS 102 and VFTS 285 are collected from various sources of broad-band photometry including VFTS (Evans et al. 2011), HTTP (Sabbi et al. 2016), SkyMapper (Wolf et al. 2018), and Gaia EDR3 (Gaia Collaboration et al. 2016, 2021).The infrared fluxes are collected from SAGE (Meixner et al. 2006), 2MASS (Skrutskie et al. 2006), and WISE (Wright et al. 2010).
Figures 3, 4, and 5 also show simple flux models for non-rotating stars from the TLUSTY grid (Lanz & Hubeny 2003) for the average temperature, gravity, abundance, distance, and extinction described in §4.These model spectra have a low resolving power similar to the observed broad-band fluxes, and they serve to show the general SED trends with wavelength.The SED of VFTS 102 displays an infrared excess from its circumstellar disk, and Figure 4 shows both the stellar component (dotted line) and combined stellar plus disk flux (solid line) for a simple power law expression for the disk flux (see §4.2).We will adopt wavelength-interpolated estimates of the optical monochromatic fluxes below ( §4) directly from the observed values for the cases of ζ Oph and VFTS 285, and from the star plus disk model fit for the case of VFTS 102.
RADIAL VELOCITIES OF VFTS 102 AND VFTS 285
The absorption lines in the spectra of both targets are extremely broad and shallow.Consequently, we need the best S/N ratio possible in order to examine their rotationally broadened profiles.This can be accomplished for the ground-based, optical spectra through co-addition of the individual spectra.However, we need to first check for any evidence of radial velocity variability to insure that the spectra are properly wavelength registered before co-addition and to explore the possibility that these stars are spectroscopic binaries with faint companions (the possible remnant donors of past mass transfer).We describe below measurements of both the absorption and emission line features.Our results are summarized in The observed spectral energy distribution (SED) of ζ Oph.The small crosses show the low resolving power observed fluxes, and the diamonds indicate the high resolving power fluxes calculated at 16 specific wavelengths using the rotation code ( §4.2).The solid line shows a low resolving power model SED from the TLUSTY code for a non-rotating star with the hemisphere-average temperature and gravity of the star (Table 4).The model fluxes are attenuated for interstellar extinction using the reddening E(B − V ) given in Table 4.
There are two main methods that we use to make our measurements, the line bisector method for emission lines and the cross-correlation method for absorption lines.The bisector method is useful for the spectra of VFTS 102 which displays numerous emission lines formed in its circumstellar disk and in the surrounding nebula and SN remnant (known variously as 30 Dor B, Chu et al. 1992;N157B, Chen et al. 2006;and B0538-691, Micelotta et al. 2009).The bisector method determines the line center position by a Gaussian sampling of the line wings (Shafter et al. 1986).This has the advantage of measuring emission wings from the disk while ignoring the nebular emission that appears in the line core.We form a template using two oppositely signed Gaussians at offset positions from line center, and then the cross-correlation function (CCF) is made using this template and the emission line feature.The zero crossing of the resulting CCF yields the velocity corresponding to the wing bisector position.The wings of emission features form in the circumstellar disk close in to the star where the rotating disk has the highest Keplerian orbital motion.Since the disk is assumed to be centered on and tied to the star, by measuring the disk's radial velocity we are also measuring the star's radial velocity.In general we set the offset positions of the sampling Gaussians at velocities where the emission declines to 25% of the peak value.However, if the feature has extraneous emission or absorption in the center due to residual problems from nebular sky subtraction or disk emission, we lower this threshold to around 5% of the peak value in order to avoid the central region.The second method was applied to the broad absorption lines in the spectra of both stars.We generated a model spectrum which was rotationally broadened to a projected rotational velocity of v e sin i = 600 km s −1 using the TLUSTY/SYNSPEC model flux spectra from the OSTAR2003 grid of Lanz & Hubeny (2003).We then formed the CCF of the observed and model spectra over a wavelength range encompassing specific absorption lines.The exact regions that were used were limited to only the line profiles and did not include the continuum between features.The peak position of the CCF yielded an estimate of radial velocity.
The results for VFTS 102 appear in Table 2 which lists the heliocentric Julian date of mid-exposure, the spectrograph of origin, and radial velocities from the emission and absorption lines.The radial velocity measurements for the Xshooter spectra are averages from the emission features (column 3) of Hδ, Hβ, Hα, H I λλ8502, 8545, 8598, 8665, and He I λλ5875, 6678, 7065 and from the absorption features (column 4) of Hγ and He II λ4686.Hγ consists of a broadened absorption feature with a double-peaked emission line in the center.Despite the central emission, we were able to measure the wings of the absorption line, so this radial velocity is included among the absorption line average.
We attempted to measure the radial velocities of the absorption lines reported by Dufton et al. (2011), which consist of He I λλ4026, 4143, 4387 and He II λλ4200, 4541, 4686.However, we found that all but He II λ4686 were too broad and shallow for reliable measurements in individual spectra.Dufton et al. (2011) used co-added spectra to enable their measurements.The measurements of the GIRAFFE spectra include the emission features Hγ, Hδ, and Hβ.Several of the GIRAFFE spectra have very low S/N making the measurements difficult, and these are excluded from consideration.All of the emission line measurements were made using the line bisector method.The only absorption line measured was He II λ4686.In this case, we first cross-correlated the observed He II λ4686 profile with a broadened model profile.We then measured the radial velocity using the line bisector on the CCF.This combination of methods allowed us to obtain a measurement from the He II λ4686 feature despite its extremely shallow profile.The measurement from the HST/COS spectrum was aquired by cross-correlating the observed spectrum with a rotationally broadened model spectrum.We used a short wavelength region that includes N IV λ1168.6,C III λ1174.93 (plus a blend of six other features), and Si III λ1178.012 in addition to a mid-range region that includes Fe V λλ1370.303,1370.947,1371.217 and Si IV λλ1394, 1403.The result is listed in column 4 of Table 2.
The final rows of Table 2 list the error weighted averages of all the measurements made from the X-shooter and GIRAFFE spectra.The uncertainties are the standard deviations of the individual measurements.There is satisfactory agreement between the absorption and emission line velocities, and this confirms that we are indeed measuring the radial velocity of the star itself through measurements of the disk gas emission.Additional verification comes from detailed fits made of the absorption lines presented in Section 5.2 below.Each model line was shifted in wavelength space until it matched the observed profile.This yields a radial velocity for each absorption line in the mean spectrum, and the average for the nine lines used in the analysis yields a radial velocity of 262 ± 8 (standard deviation) km s −1 , in agreement with the results presented in Table 2. Furthermore, there is reasonable consistency in the results from X-shooter, GIRAFFE, and HST/COS.The weighted mean of the averages from both emission and absorption lines and all three instruments for VFTS 102 is V r = 267 ± 3 km s −1 .The velocity results from the absorption lines in the spectra of VFTS 285 are given in Table 3.The radial velocities for the X-shooter spectra were obtained by cross-correlating a series of features with a rotationally broadened model.The features include Hβ, Hγ, Hδ, and He II λλ4199, 4541, 4686, 5411.The GIRAFFE measurements were obtained from cross-correlation functions based upon Hγ, Hδ, He I λλ3964, 4026, 4471 and He II λλ4199, 4541.The measurement obtained for the HST/COS spectrum was obtained from the cross-correlation function of a short wavelength region (including C III λ1174.93 and Si III λ1178.0)and a mid-wavelength region (Fe V λλ1370.303, 1370.947, 1371.217).The bottom rows of Table 3 show that there is good agreement between the averages from X-shooter and GIRAFFE.The mean of the X-shooter, GIRAFFE, and HST/COS results is V r = 250 ± 6 km s −1 .The standard deviation between observations (external error) is approximately the same as the mean of the individual error estimates (internal error) for both stars, so they appear to be radial velocity constant.Thus, we can reasonably perform a simple co-addition of the spectra to improve the S/N without needing to consider shifting individual spectra to account for any binary orbital motion.The final mean velocities are 267 ± 3 and 250 ± 6 km s −1 for VFTS 102 and VFTS 285, respectively, which are the error weighted mean and uncertainty from the sample averages at the bottom of Tables 2 and 3.These radial velocities are comparable to the average for the single, B-type stars in the 30 Dor region, 272 ± 12 km s −1 , found by Evans et al. (2015).In both cases, our results are slightly higher than found in earlier work: 228 ± 6 km s −1 (Dufton et al. 2011) and 225 ± 11 km s −1 (Sana et al. 2013) for VFTS 102 and 230 ± 4 km s −1 (Sana et al. 2013) for VFTS 285.However, given the difficulty of measuring such broad and shallow absorption lines and differences in our methods, we doubt that the resulting differences in velocity are significant.
SPECTRAL SYNTHESIS MODELS
Our primary goal is to compare models of the flux emitted by very rapidly rotating stars with observed spectral line profiles and the associated spectral energy distribution.Each model is based on parameters for its equatorial rotational velocity, physical parameters, and axial orientation to our line of sight.Fits of the model spectra to the observed spectra help inform final estimates of all these parameters.In this section, we describe the elements of the model and the parameter fitting methods.The results of the model fitting are discussed in §5.
Method
We utilize a numerical code that simulates the distorted shape and latitude-dependent photospheric properties of a rapidly rotating star that is viewed at an inclination angle i between the axis of rotation and the line of sight.The spectral line synthesis code is written in IDL, and the original version was presented by Huang & Gies (2006).The shape of the star is defined by Roche geometry which assumes that most of the mass is concentrated towards the stellar core.The stellar photosphere is represented by a grid of 40,000 surface elements with approximately equal area that are distributed in co-latitude θ and azimuth φ.An integration is made of the flux from each surface element by first calculating the angle between the surface normal and the line of sight in order to determine if the element is situated on the visible hemisphere of the star.For each element, the code determines a local effective temperature (dependent on the gravity darkening model; §4.4), effective gravity (gravitational plus centrifugal), area of the surface element projected in the sky (dependent on µ = cosine of the angle between the surface normal and line of sight), and the rotational radial velocity (assuming solid body rotation).These parameters are used to interpolate in a pre-computed grid of spectral specific intensities as a function of wavelength, µ, T eff , and log g associated with an assumed chemical abundance.The product of the projected area and the specific intensity yields the flux increment from the element, and the sum of all such increments gives the total flux to be compared with the observations.The outputs are a wavelength dependent line profile and a monochromatic flux estimate for the immediate vicinity of a particular spectral feature or line blend.This type of spectrum synthesis is common among many past investigations of rotational shape and spectral line broadening (Collins 1963;Stoeckley 1968;Howarth & Smith 2001;Townsend et al. 2004;Aufdenberg et al. 2006;Abdul-Masih et al. 2020).
The model parameters are listed in Table 4 together with the derived values for the three target stars of this study.Column 2 identifies those parameters that are fit (F), set in advance (S), and derived in the model from the previous parameters (D).The set of fitting parameters includes the projected rotational velocity v e sin i, rotational axis inclination i, polar radius R p , stellar mass M , polar effective temperature T p , He abundance y by number relative to H, and interstellar reddening E(B − V ).The set parameters were determined by independent constraints, and they include the adopted gravity darkening law ( §4.4), distance d, and ratio of total-to-selective extinction R V ( §4.2).The remaining derived parameters describe the physical characteristics of the star and the goodness-of-fit of the model.The derived rotation parameters are the equatorial rotational velocity v e , the critical velocity in the Roche model v c (where the equatorial radius is 1.5× the polar radius and centrifugal acceleration balances gravity at the equator), and the ratio of the angular velocity to the critical angular velocity Ω/Ω c (Rieutord 2016).The other physical parameters for the star include the equatorial radius R e , the logarithm of the effective gravity at the pole log g p , at the equator log g e , averaged over the visible hemisphere log g(avg), the effective temperature at the equator T e , and the flux-weighted average temperature < T >(avg).We also calculate the area-integrated average temperature < T >(all) and logarithm of the total luminosity log L/L by integrating over the entire surface.Those parameters listed with solar units were obtained by adopting the IAU recommended nominal values for the Sun (Prša et al. 2016).
There are several simplifications in the model that are justifiable assumptions.The shape of a star is subject to differential rotation, but detailed calculations suggest that differential rotation is modest in rapidly rotating massive stars (varying by only a few percent with colatitude; Espinosa Lara & Rieutord 2013).Consequently we expect that our neglect of differential rotation and the use of the Roche model for the stellar surface are good approximations (Zahn et al. 2010;Rieutord 2016).The spectral specific intensity calculations are based upon plane-parallel stellar atmospheres ( §4.3), so effects due to extended atmospheres and stellar winds are not treated.However, we expect any such effects to be minimal in the case of the three targets discussed here.We showed in an earlier paper (Shepard et al. 2020) that wind features are present in some spectral lines in the ultraviolet, and we discuss the influence of the circumstellar disk of VFTS 102 below ( §4.2).Finally, we are neglecting any processes related to macroturbulence in this analysis.Simón-Díaz & Herrero (2007) used a Fourier transform method to analyze the broadened spectral line profiles of O-type stars to extract both the projected rotational velocity and the macroturbulent velocity, and they argue that macroturbulence generally becomes a significant contributor to line broadening among the more luminous supergiant stars (Simón-Díaz & Herrero 2014).The net line broadening varies approximately as the quadratic sum of the rotational and macroturbulent velocities, and because the rotational component is so dominant in the stars discussed here, we can safely neglect any macroturbulent broadening terms.
Integrated Flux
The model calculates the integrated monochromatic flux produced by the star, and this can be compared to the observed flux to help estimate the stellar radius.For a spherical star, the ratio of the observed to emitted flux is where θ is the angular diameter and A λ is the wavelength-dependent extinction.Thus, this ratio becomes an important criterion to establish the stellar polar radius of a rotating star R p once the distance and extinction are known.Here we fix the known distances of the targets and solve for a reddening E(B − V ) that sets the amount of interstellar extinction according to an adopted value of the ratio of total-to-selective extinction R V .Our adopted distances are listed in Table 4.The distance of ζ Oph is taken from the parallax measurement from Gaia EDR3 (Bailer-Jones et al. 2021), and this value is in good agreement with other independent estimates (Gordon et al. 2018).The distance of VFTS 102 and VFTS 285 is set to the accurate LMC distance from Pietrzyński et al. (2019) based upon eclipsing binaries and other standard candles in the LMC.
The wavelength dependent extinction curve is taken from the model described by Fitzpatrick (1999) that is determined from the reddening E(B −V ) and ratio of total-to-selective extinction R V .We solve for E(B −V ) by comparing the observed and model SED using an adopted value for R V (Table 4).This is set to the result from Zuo et al. (2021) for ζ Oph.Unfortunately, there are no published results on R V for the two LMC targets.We adopted the value of R V = 2.76 ± 0.09 for the nearby LMC2 supershell region determined by Gordon et al. (2003) (and we used their special extinction prescription for VFTS 102 and VFTS 285), although we caution that the actual value may vary considerably among the stars of the 30 Dor region.For example, both Maíz Apellániz et al. ( 2014) and De Marchi & Panagia (2019) find a larger value of the ratio, R V ≈ 4.5.
Special care is required in the flux analysis of VFTS 102, because this star has an extensive circumstellar disk that also contributes to the observed flux.The SED of VFTS 102 (Fig. 4) shows a strong infrared excess that is a common signature of circumstellar disks around Be stars (Vieira et al. 2015;Klement et al. 2019).The disk emission is often represented approximately as a power law (Waters 1986), where F tot λ is the observed star plus disk flux, F λ is the stellar flux rescaled by distance, λ is wavelength in reference to a standard wavelength λ 0 = 1 µm, A λ is the extinction, and c d and x are the power law parameters describing the infrared excess.A simple fit of the SED using a TLUSTY model for the stellar flux was obtained with c d = 0.78 and x = 1.6, and is shown with the observed SED in Figure 4.
This disk flux excess in the spectrum of VFTS 102 has two important consequences: (1) the model stellar flux must be increased by the amount of the flux excess before comparison with the observed flux, and (2) the model line depths need to be reduced to account for the excess continuum flux (sometimes referred to as "line veiling").We did this by calculating a line depth factor l at the central wavelength of each line, The model stellar fluxes were divided by l to rescale them to the total star plus disk flux, and the continuum normalized line depths were multiplied by l to account for the added disk continuum, i.e., s(disk corrected) = l × s(model) + 1 − l where s is the continuum normalized spectrum ( §4.5).This approach corrects for the added continuum flux of the disk, but not for any stellar flux that may be obscured by the disk if seen edge on.The calculations necessary to account for disk obscuration are outside the scope of this paper, and we therefore leave this complication to future work.
Specific Intensity Profiles
The core of the simulation is the radiative specific intensity that we derive from two TLUSTY grids of line-blanketed, non-LTE, model atmospheres, OSTAR2002 (Lanz & Hubeny 2003) and BSTAR2006 (Lanz & Hubeny 2007).High resolving power model spectra are calculated from these model atmospheres using the SYNSPEC radiative transfer code (Hubeny & Lanz 2017).These assume the solar abundance pattern from Grevesse & Sauval (1998) for Galactic "G" models (with a He abundance by number relative to H of y = 0.10) that we use in the line synthesis for ζ Oph.We adopt their LMC "L" models that have the same H and He abundance with all other elements reduced by half (Rolleston et al. 2002) for the analyses of VFTS 102 and VFTS 285.We found it necessary to explore models with greater than solar He abundance ( §4.7), and this was done by a numerical up-scaling of the He to H number ratio by 2×, 3×, and 4× the solar value with the SYNSPEC code.We caution that this is an approximation that is not fully self-consistent with the abundances assumed in the TLUSTY atmospheres.
The BSTAR2006 grid covers the temperature range from 15 to 30 kK for an assumed microturbulent velocity of 2 km s −1 , while the OSTAR2003 grid ranges from 27.7 to 55 kK using a microturbulent velocity of 10 km s −1 .In order to create a smooth transition between these differing cases, we formed a temperature grid with a step size of 1 kK (like that in the BSTAR2006 grid), and then interpolated in the temperature overlap region scaling by 75/25%, 50/50%, and 25/75% between the BSTAR2006/OSTAR2003 grids at 28, 29, and 30 kK, respectively.
For each spectral region, the SYNSPEC results were transformed to a common and equally spaced wavelength grid to create a specific intensity matrix, I(λ, µ, T eff , log g) for 10 equal steps in µ, the cosine of the angle between the normal and line of sight, from 0.1 to 1.0, 41 steps in T eff from 15 to 55 kK, and 12 steps in log g from 2.0 to 4.75.These matrices were calculated for two regions in the far-ultraviolet and 14 regions in the optical spectrum for comparison with the observed spectra ( §5).We show an example of these specific intensity profiles in Figure 6.These show that the central depths of strong lines are almost the same at all µ angles (formed high in the atmosphere where the temperature and line source function are approximately constant) while the continuum levels decline from µ = 1 to 0.1 (limb darkening associated with the drop in temperature and source function higher in the atmosphere).Thus, in a continuum normalized representation, the spectral line depths look relatively weaker at the limb (µ = 0) compared to the center of the stellar disk (µ = 1).This is a reminder that the observed rotational broadening is not strictly a convolution of a fixed depth photospheric profile and a rotational broadening function.
Our analysis of the spectral lines is based upon this framework of specific intensities from the published TLUSTY grids, so the results are dependent on the approximations in the code in addition to those described above.For example, TLUSTY includes the turbulent pressure associated with microturbulence as a component of the total pressure.Thus, the pressure treatment is different between the BSTAR2006 and OSTAR2003 grids because of the different assumed values of microturbulence (2 and 10 km s −1 , respectively), and this might influence the model Stark broadening of the H Balmer lines (important in the derivation of stellar mass; Section 4.6).However, the BSTAR2006 grid does include the larger 10 km s −1 microturbulence value in a subset of models appropriate for lower gravity giants and supergiants.A comparison of model Hγ lines for the same (T eff , log g) parameters but with microturbulent velocities of 2 and 10 km s −1 shows only very small differences in the profiles.Thus, the hybrid treatment of microturbulence in this application should have no significant impact on our results.
Gravity Darkening
von Zeipel (1924) found that the local energy radiated (and hence the local temperature) of a rotating star varies as T ∝ g 1/4 eff , so that the higher gravity pole is hotter than the equator.However, this can only be strictly correct in cases where the gas is barotropic (pressure dependent on density only), which is generally not the case for stars (Rieutord 2016).The full solution to the problem requires consideration of the interior state and motions (ESTER code; Espinosa Lara & Rieutord 2013).Fortunately Espinosa Lara & Rieutord (2011) found an analytical representation of the surface temperature variation with colatitude θ that matches that from the detailed models quite well.We used this ω-model in our code by solving equations 18 and 23 in the development presented by Rieutord (2016) to determine the ratio of T (θ)/T p .
Generally we adopt the ω-model for our models, however, we show an example of how the line profiles differ between the predictions of the von Zeipel law and the ω-model.Figure 7 shows the results for a model star with R p = 6.7R , M = 20M , T p = 40 kK, v e = 598 km s −1 , and v c = 616 km s −1 , i.e., a case close to critical rotation.The images in the first column show the surface brightness (specific intensity) in the far-ultraviolet (1181 Å) where the contrast between the pole and equator is especially striking.The top image shows the limb darkened disk for the corresponding non-rotating model, while the lower three images show the same stellar model at near critical rotation described by the ω-model with inclination angles of i = 90 • , 70 • , and 50 • .The next columns illustrate the appearance of several surface-integrated flux profiles for these orientations and the two gravity darkening laws considered.The top row shows the non-rotating case for the spectrum in the vicinity of C III λ1175, He I λ4471, and He II λ5411.The next three rows give the rotationally broadened model profiles plotted as a function of Doppler shift relative to v e sin i.If plotted versus actual Doppler shift the profiles would appear narrower at lower inclination (v e sin i = 598, 562, and 458 km s −1 for rows 2,3, and 4, respectively) due to the inclination shift from equator-on, where the extreme rotation is best observed, to pole-on, which shows no rotational broadening.However, by plotting the profiles relative to v e sin i, it is easier to discern overall changes in the shape of the line profile with inclination and between the predictions for the ω-model (solid lines) and von Zeipel law (dotted lines) for gravity darkening.
These models help to demonstrate how the differences between the gravity laws manifest in line profiles.The ωmodel predicts a pole to equator temperature variation that is less extreme than that from the von Zeipel prescription.This difference shows up best in the He II λ5411 profiles for i = 90 • (row 2, column 4).In the von Zeipel case, the darker equator is more extensive and hotter pole more limited in colatitude, and because the He II line is preferentially formed in hotter environments, it appears weaker in the von Zeipel case where hot conditions are confined to a smaller area.The opposite is true of the other lines that grow in strength at relatively cooler temperatures.
A comparison of the relative changes in line shape and strength with inclination angle shows that some lines change significantly (He II) while others are approximately constant (C III).This demonstrates that an analysis of the rotational broadening among a sample of different line species can potentially help to constrain the value of the rotational inclination.
Transformation to the Observer's Frame
The derived model spectrum is created in units of physical flux in the rest wavelength frame of the star, and there are several steps required in order to compare the model directly with the observed line profile.The first step is to shift the spectrum to the observed Doppler shift of the star.Initially we assumed a radial velocity of +15 km s −1 for ζ Oph (Reid et al. 1993), however we made a number of small revisions to this estimate for the different lines in the sample.We adopted the mean of the radial velocities of VFTS 102 and VFTS 285, but again we introduced small changes (< 20 km s −1 ) on a line-by-line basis in order to align the model and observed line profiles.The model profiles were then transformed to the observed log λ wavelength grid by an integration scheme.
The spectra were re-normalized to a unit continuum by selecting wavelength regions immediately to the blue and red of the main absorption feature, where we formed the ratio of observed-to-model fluxes in these regions.The model spectrum was then multiplied by a linear fit of the flux ratios in the rectification regions, so that the local continua of the observed and model spectra are in agreement.The resulting model spectrum was then convolved with a Gaussian function to account for the minor amount of instrumental broadening associated with each spectrograph.The models of spectra for VFTS 102 were subject to a small reduction in line depth to account for the wavelength-dependent contribution of extra continuum flux from the circumstellar disk ( §4.2).Finally the calculated portion of the model spectrum was inserted into an otherwise flat continuum spectrum outside of the wavelength range in the simulation, and these boundaries appear in some of the spectral plots below where there appears to be a sudden jump to unity.
Parameter Fits
Our goal is to optimize the seven fitting parameters given in Table 4 in order to best match the observed and model fluxes and rectified line profiles.We found that we could converge to a unique solution through a guided grid search method that relies primarily on the continuum fluxes to set E(B − V ) and R p , and then uses fits of the line profiles to help determine M , T p , v e sin i, and i.The final parameter is the He abundance y which we discuss separately in the next subsection.
The procedure begins using assumed values for the parameters that are reasonably well known at the outset: i, v e sin i, T p , and y.The first step is to compare the observed and model fluxes that span the full ultraviolet to optical range (including the circumstellar disk flux contribution in the case of VFTS 102).We perform preliminary model simulations and check if there are any systematic trends in the observed-to-model flux ratio as a function of wavelength.If so, then the reddening parameter E(B − V ) is revised in order to find consistent observed-to-model flux ratios across the spectrum.Next, we consider the mean value of the observed-to-model flux ratio, and we revise the polar radius R p in order to make this ratio unity for the given values of distance and interstellar extinction (set through the derived E(B − V ) and fixed R V values).
The procedure next considers the fits of the spectral line profiles.A goodness-of-fit estimate is found by comparing the observed and model line profiles over a limited wavelength range that spans the full absorption profile while excluding any problem regions that are marred by background nebular or disk emission.This is particularly important in the case of VFTS 102, where fits of the H and He I lines were restricted to the extreme line wings to avoid emission components in the central parts of these profiles.The scatter between the observed and model profiles is compared to that in nearby continuum regions to find a reduced chi-squared statistic χ 2 ν for each spectral feature.The code also measures the ratio of (Observed -Calculated) / Observed equivalent width to determine the sense of remaining discrepancies in the model, and these are used to estimate the He abundance ( §4.7).
The observed and model line comparison begins with the H-Balmer lines that are sensitive to both temperature and gravity (through Stark broadening) in the O-and B-type stars.With the polar temperature and radius set at this stage, the gravity dimension is explored through the calculation of the Balmer lines for a test grid of model masses.
For each test value of mass, we create model Balmer lines from an integration of the surface with the local gravity set for each surface element based upon its radial distance from the center of the star and its rotational velocity, and then the corresponding specific intensity profile (with the associated Stark broadening) is derived from the pre-computed set ( §4.3).Then we compare the model and observed flux profiles to determine the goodness-of-fit for each Balmer line.A spline fit is made of the variation of the mean χ 2 ν as a function of assumed mass, and the minimum of this fit yields the estimate of mass M .
The basic procedure outlined above is repeated over a grid of test values for polar temperature T p , and the variation with assumed T p of the mean χ 2 ν derived from all the lines in the sample is used to find the best fit polar temperature.The final step is to conduct this complete analysis over a grid of assumed inclination i and projected rotational velocity v e sin i, determine the global minimum of the mean χ 2 ν , and find estimates for i and v e sin i.We found that there were significant mismatches between the observed and model profiles for certain line profiles that occurred in the analysis of all three target stars.These are systematic problems related to incomplete lists of possible line blends with other features, non-standard abundances, and/or problems with the physical properties assigned to the atomic transitions in TLUSTY/SYNSPEC.After some experimentation, we limited the line sample to a set that gave mutually consistent results, therefore any systematic errors that remain in the analysis are treated consistently for all three stars.The line set adopted for the parameter estimation includes Hζ λ3889, Hγ λ4340, Hβ λ4861, He I λλ3819, 4026, 4387, and He II λλ4541, 4686, 5411.We show in §5 the fits for these nine features and for the remaining seven excluded line profiles.
Helium Abundance
We noticed at the outset of the analysis that the He line model profiles calculated using specific intensity matrices based upon the solar He abundance were often much weaker than the observed profiles.Consequently we computed additional model specific intensity matrices using SYNSPEC for assumed He abundances of 2×, 3×, and 4× the adopted solar value.The same parameter fitting procedure was conducted for these different He abundances and the He line trends were examined in each case to determine how the models predicted He line strengths that were systematically too weak and too strong.
We show an example of the trends in Figure 8 for the case of ζ Oph.The corresponding trends found for VFTS 102 and VFTS 285 are qualitatively similar.Figure 8 shows the fractional differences in (Observed -Calculated) / Observed line equivalent width as function of assumed polar temperature T p for three He I and three He II lines.The He II equivalent width ratios show a net decline from underestimating the strength to overestimating the strength with increasing T p .The He I λ4387 equivalent width ratio shows the opposite trend as expected, while the He I λλ3819, 4026 ratios are approximately constant.We suspect that the latter two features are actually line blends that change with temperature in differing ways so that the composite profile is relatively constant (for example, the blend of He I λ4026 and He II λ4025).The large plus sign near the center marks the average position of all the He I and He II trend crossings.This occurs at < W λ (O − C)/O >= +0.069 for the 2× solar model (left panel; He too weak) and at < W λ (O − C)/O >= −0.062 for the 3× solar model (right panel; He too strong).Thus, the best match of the He line strengths occurs for an intermediate He abundance between these cases.
We made similar plots for all four test cases of He abundance y, and we used a spline fit of (y, < W λ (O − C)/O > to find the zero-crossing position that corresponds to the best fit of the helium abundance based upon these six He lines.A plot of the fractional differences between the observed and model calculated equivalent widths for a He abundance that is twice solar.The model results for each test polar temperature Tp are plotted as small plus signs that are connected by spline fits.The He I λλ3819, 4026, 4387 features are plotted as solid, dashed, and long dashed lines, respectively, and the He II λλ4541, 4686, 5411 features are plotted as dotted, dot-dashed, and triple dot-dashed lines, respectively.The large plus sign near the center marks the mean of the He I and He II intersection points, which is positive in this case (He too weak).Right: The same trends plotted for an assumed He abundance that is three times the solar value that leads to a negative mean (He too strong).
The results are listed with the other parameters in Table 4 with uncertainties based upon the scatter in the He I and He II intersection points in these plots.
Parameter Uncertainties
The predominant source of uncertainty in the parameter estimations comes from the spread in derived parameter values for fits of the individual lines in the default set of three H, three He I, and three He II features.These are systematic errors related to the model itself, so in most cases we have estimated the parameter uncertainties from the line-to-line standard deviation of the results found using the individual line fits.There are two other sources of significant uncertainty that must also be considered.The fractional uncertainty in distance is largest for the case of ζ Oph (12%), therefore this is the most important factor in assessing the uncertainty in polar radius R P which is linearly dependent on the assumed distance.The other key element in the uncertainty is the interstellar extinction that depends on the assumed value of the ratio of total-to-selective extinction R V .The reddening E(B − V ) is modest for ζ Oph and VFTS 285, so the underlying uncertainty in R V has only a small affect on the results.However, the reddening of VFTS 102 is much larger, so uncertainties in R V are important.We tested the sensitivity of the results by making simple SED fits (like those in Fig. 4) for both the adopted value of R V = 2.76 and the nominal value of R V = 3.1, and we found that the derived angular diameter increased by 28% using the latter value.The polar radius R p varies directly with angular size, so we included this factor in the final uncertainty estimate for R p for VFTS 102.
The bottom two rows of Table 4 list the statistics associated with the fits.The reduced chi-square χ 2 ν is the average of the individual chi-square measurements for all the lines used in the sample.It can be somewhat misleading given the complicated and often deceiving nature of extreme rotational broadening.For example, χ 2 ν is the smallest for fits of the lines of VFTS 102, but this is mainly the result of extremely shallow lines that have a depth not much larger than the scatter in the continuum.The final row reports the standard deviation of the observed to model flux ratio for all 16 spectral regions from the FUV to the optical, and it provides a sense of the success of the flux fits (worse in the case of VFTS 102 where complications exist due to the flux of the circumstellar disk).The fits associated with the parameter estimates in Table 4 are discussed in the next section.
ζ Oph
Figure 9 shows the 16 spectral features that we modeled with the rotation code described in §4.The default set of nine lines used in the parameter fitting code (indicated by asterisks in the identifying labels) are generally well fit by the model, however discrepancies from the fit in other cases deserve some comment.The C III λ1175 feature appears The top axis represents Doppler shift in the rest frame of the star, while the lower axis depicts the observed (heliocentric) wavelength.The primary feature in each panel is identified with a label in the lower left, and those features included in the parameter fitting scheme are indicated by an asterisk appended to the label.
to show a significant blue-shift compared to the model, and we suspect that this transition is partially influenced by the stellar wind, appearing like a weak P Cygni feature (observed as a wind feature in the O-star binary UW CMa; Drechsel et al. 1981).The other ultraviolet feature is the Fe IV λ1420 blend that appears to be slightly too weak in the model, perhaps due to the choice of microturbulence in the TLUSTY model or to uncertainties in the atomic oscillator strengths.The Hδ λ4101, He I λ4471, and He II λ4199 lines all appear to be consistently too deep in the model (including the C III λ4186 line in the blue wing of the latter), so they were excluded from the parameter fit.The longer wavelength transitions of He I λλ4921, 6678 appear to show disk-like emission in their extreme wings (possibly also present in He I λ4387), so they were also omitted from the fit.The spectrum of ζ Oph does occasionally exhibit double-peaked emission like that of the disk emission observed in Be stars (Vogt & Penrod 1983), but no Hα emission is apparent in the ζ Oph spectra used here.
Our motivation for including ζ Oph in this study was to test our parameter fitting results with those obtained independently using a similar code by Howarth & Smith (2001).The two sets of results are compared in columns 2 and 3 of Table 4.There is good agreement in most of the derived parameters.The polar temperature T p derived by Howarth & Smith (2001) is slightly higher due to our use of fully line-blanketed model atmospheres that tend to assign lower T eff than models based on H and He line opacities alone (Lanz & Hubeny 2003).The other difference is the use of the von Zeipel law for gravity darkening used by Howarth & Smith (2001).The equator to polar temperature contrast is larger in the von Zeipel description versus the ω-model for a given rotation rate, and we arrive at the same ratio of T e /T p as derived by Howarth & Smith (2001) (who utilized the Von Zeipel law) by using a relatively larger angular velocity ratio Ω/Ω c when using the ω-model.
Our derived He abundance y = 0.24 ± 0.07 is the same within uncertainties as found by Howarth & Smith (2001), y = 0.20 ± 0.03, confirming the apparent He enrichment in the atmosphere of ζ Oph.Models that neglect the changes of the star's shape and gravity darkening due to rotation tend to arrive at a lower He abundance: for example, y = 0.16 from Herrero et al. (1992), y = 0.11 from Villamariz & Herrero (2005), and y = 0.10 − 0.12 from Cazorla et al. (2017).The fast projected rotational velocity we find is similar to that found in most other studies, with the exception of Simón-Díaz & Herrero (2014), who split the apparent broadening between rotation, v e sin i = 303 − 319 km s −1 , and macroturbulence, v m = 159 km s −1 .
VFTS 102
Many of the spectral line features of VFTS 102 are altered in some way by the presence of a well-developed circumstellar disk.We discussed above in §4.2 how disk emission adds to the spectral energy distribution at longer wavelengths and how the disk continuum acts to make the spectral lines appear shallower.The other striking aspect is how the disk emission appears as a new profile component in the core of the H Balmer and He I lines as shown in Figure 10.For these cases, the observed and model profile goodness-of-fit statistic χ 2 ν was calculated only for the line wing portions of the profile where no disk emission appears.This approach is successful in showing the presence of residual disk emission that is not obvious on first inspection (for example, the He I λ4026 profile in Fig. 10).Making a fit of the line wings failed in the case of Hβ λ4861 because disk emission extends into the far wings, so this feature was omitted from the parameter fitting procedure.This leaves only fits of Hζ λ3889 and Hγ λ4340 for the determination of the mass through the apparent Stark (collisional) broadening.Tests showed that omission of Hβ in fits of the spectra of ζ Oph and VFTS 285 changed the derived mass by less than 1%, therefore we doubt the omission of Hβ has any significant impact on the final parameter solution for VFTS 102.
The derived average parameters in Table 4 all agree within uncertainties with those estimated in the discovery paper by Dufton et al. (2011) (see their Table 1).In particular, the estimate of projected rotational velocity v e sin i = 649±52 km s −1 is the largest among all three stars, and the equatorial velocity is the same as the critical velocity within errors.Thus, VFTS 102 appears to be a star that has attained critical rotation.This extraordinary spin probably assists those mass loss processes that feed gas into the circumstellar disk, creating a vigorous decretion disk despite the disk gas ablation that occurs due to the harsh radiation field of the star (Kee et al. 2016).
VFTS 285
The rotational model parameter fits for VFTS 285 indicate that the star is the hottest and most massive of the three targets.The star's true equatorial velocity is about the same as that of VFTS 102, however because the star is more massive, the critical velocity is higher, and therefore the star has a sub-critical spin, Ω/Ω c = 0.95.The hotter temperature and slower spin relative to critical rotation are probably the reasons why no circumstellar disk is found for VFTS 285 (in contrast to the case of VFTS 102; §5.2).The average parameters given in column 5 of Table 4 agree within errors with those derived from Sabín-Sanjulián et al. ( 2017) that do not include rotational deformation in the model.The one exception is the He abundance that Sabín-Sanjulián et al. (2017) find to be only somewhat enhanced, y = 0.14 compared to our result of y = 0.34 ± 0.14 (the largest He overabundance among the three stars).This same kind of difference was noted above between rotating and non-rotating physical model results for ζ Oph ( §5.1).In the non-rotating models He II line formation occurs over the entire visible hemisphere, while in the rotating models that include gravity darkening, He II line formation is more restricted to the hotter polar zones (because the He II lines weaken in the cooler equatorial zone).Consequently, in order to match the observed line strength, the rotating models compensate for the smaller area of formation by increasing the He abundance.The other difference in our work is the neglect of stellar winds in the TLUSTY models.The He II λ4686 line that we use is sensitive to wind emission in more luminous stars (Walborn 1971), but in the case of VFTS 285, the model He II λ4686 line appears to match the observed line as well as found for the other He II lines (Fig. 11).
The spectral line fits shown in Figure 11 are mostly satisfactory among the default set (marked by asterisks in the figure panels), except in the cores of some of the H Balmer and He I lines where sharp features remain from over- or under-subtraction of the nebular emission from the surrounding gas.These core regions were excluded from the goodness-of-fit measurements.
We decided to experiment with model fits of the spectral features of VFTS 285 by changing the gravity darkening prescription to the von Zeipel law in order to demonstrate how the choice of gravity darkening influences the solution.The resulting parameters using the von Zeipel law are shown in the final column of Table 4.We found that the model predicted He I and He II line profiles that were still too weak (7%) compared to the observed profiles even with the largest assumed He overabundance, y = 0.4 ( §4.7).Thus, applying a rotational model using the von Zeipel law appears to lead to an over-estimate of He abundance in this case.Rather than extrapolate to even higher He abundances, we simply report the results of the y = 0.4 fits in Table 4.The von Zeipel model is best fit with a star that is rotating somewhat closer to the critical rate with greater equatorial extension and a larger range in the polar to equatorial temperature.to their extreme spins.The first possibility is that both are very young stars that attained their rapid spin due to accretion of their natal disks.Ekström et al. (2008) and Brott et al. (2011) calculated evolutionary tracks for stars born as rapid rotators, and in Figure 12 we show evolutionary tracks in the Hertzsprung-Russell Diagram (HRD) for three massive stars from Brott et al. (2011).These particular tracks were made assuming LMC abundances and initial equatorial velocities of v e ≈ 550 km s −1 .The track for the 16M model shows the normal evolution to higher luminosity and cooler temperature as core H-burning concludes, but the tracks for 19M and 25M show evolution to higher temperatures.This behavior in massive, fast rotators is due to extensive mixing in the interior that replenishes the H core and dredges up the processed He into the envelope.Thus, mixing tends to homogenize the composition of the core and envelope.We also plot in Figure 12 the derived estimates of < T >(all) and log L/L from Table 4 for VFTS 102 and VFTS 285.Note that the T eff estimates for the models of Brott et al. (2011) are based upon an average over the surface of the star assuming a von Zeipel gravity darkened flux, so they are not exactly comparable to our results for the ω-model (but the difference is small; see the two gravity darkening cases for VFTS 285 in Table 4).Furthermore, the model equatorial velocities are somewhat smaller than our estimates for the stars, so trends related to rotation might be even more extreme for evolutionary tracks at higher rotation speeds.The position of VFTS 102 is somewhat overluminous for its estimated mass (18 ± 6M ) and is slightly cooler than predicted for the age of the nearby stars in the vicinity of the LH 99 OB association of 5.7 Myr (Schneider et al. 2018).Furthermore, an enhanced He abundance of y = 0.2 is obtained at an older age of 6.9 Myr (or longer for masses lower than 19M ) and at a hotter temperature than is observed for VFTS 102.The position of VFTS 285 is cooler and less luminous than predicted for its estimated mass (28 ± 8M ), and the observed He over-abundance of y = 0.34 ± 0.14 only occurs at much higher temperatures in the 25M track.This comparison between the observed and model tracks for VFTS 102 and VFTS 285 indicates that these stars are likely different than predicted for models of stars with fast rotation at birth.Stars may also become rapid rotators towards the terminal age main sequence (TAMS) stage due to the transfer of angular momentum from core to surface by meridional currents (Ekström et al. 2008).However, this spin-up occurs over a relatively short time near the TAMS, and both VFTS 102 and VFTS 285 appear to be too young to have reached the TAMS.Consequently, we doubt that their fast rotation is related to mixing and spin-up associated with the TAMS phase.
Binary Star Models
There are several processes involving interacting binary stars that can lead to the spin up of the mass gainer star (de Mink et al. 2013).Binary systems born with short periods will probably enter a mass transfer stage during slow core-H burning expansion (Case A), and in many circumstances this will result in a merger through a common envelope event (CEE).Population synthesis models by de Mink et al. (2014) suggest that 8% of massive stars drawn from a constant star formation sample are, in fact, such merger products.Menon et al. (2021) present models for binary mergers in the LMC, and they show that many binaries with initial periods less than two days will produce a merger.Our understanding of the physical processes leading up to a CEE is still developing (Ivanova et al. 2013), however the merged star is likely to exhibit rapid rotation, equatorial mass loss, an enriched surface He abundance, and overluminosity for its mass (Ivanova & Podsiadlowski 2003).The properties of the merger product depend critically on the extent of He dredged up into the envelope and the time since the merger (Glebbeek et al. 2013).One key example is the hot merger remnant model that is successful in describing the properties of the B-supergiant progenitor of SN 1987A (Podsiadlowski et al. 1992;Menon & Heger 2017).Another example is the magnetic star τ Sco that may have formed through a merger that generated a strong magnetic field as described by Schneider et al. (2019Schneider et al. ( , 2020)).Their models suggest that the merger product will spin-down on a thermal timescale (∼ 10 4 years) as a result of redistribution of angular momentum in the stellar interior.
Massive binaries with periods greater than a few days will begin mass transfer as the larger mass donor star expands in the H-shell burning stage (Case B).Mass transfer will lead to the spin up of the mass gainer which becomes a rejuvenated star of larger mass than it initially started with (de Mink et al. 2014).When the donor explodes as a supernova, the binary may become unbound (if the donor mass remained large after the mass transfer episode or the donor experiences an asymmetric kick during the SN) or the SN remnant may remain to orbit the mass gainer.The latter systems are observed as massive X-ray binaries in which reverse mass transfer occurs to the neutron star or black hole remnant.Both circumstances will impart a runaway velocity to the surviving gainer that is comparable to the orbital velocity at the time of the SN.Many of the fast-moving OB runaway stars appear to be rapid rotators and are often single stars (Blaauw 1961(Blaauw , 1993;;Gies & Bolton 1986;Hoogerwerf et al. 2001;Platais et al. 2018;Schneider et al. 2018).
Dynamical Processes
The central region of the R136 cluster in the 30 Dor region has a very high spatial number density of massive stars (Massey & Hunter 1998;Crowther 2019), and dynamical encounters between stars and binaries can play an important role in their evolution.In rare cases, a physical stellar collision can lead to the formation of a rapidly rotating star with properties similar to those formed by a close binary merger (Sills et al. 2005;Fujii et al. 2012).Gravitational encounters between wider binary and single stars (and binary and binary stars) offer another way to eject a high velocity star through an interaction that transforms the orbital binding energy of a target binary into the kinetic energy of the escapee (Gualandris et al. 2004).
In the following subsections, we will compare the predictions from these different processes with the observed properties of the three rapid rotators investigated in this paper.
ζ Oph
ζ Oph is the closest Galactic O-star and it is a well-known runaway star (Blaauw 1961).Its trajectory across the sky shows that it was ejected from the Upper-Centaurus-Lupus (UCL) Association (Hoogerwerf et al. 2001). van Rensbergen et al. (1996) argue that it was a member of an interacting binary system that spun up the mass gainer (ζ Oph) prior to a SN explosion that disrupted the system and imparted a runaway velocity.The fact that there is no evidence of orbital motion (Gies & Bolton 1986) is consistent with its status as a single star.Neuhäuser et al. (2020) presented an analysis of the motions of ζ Oph and nearby pulsars, and they argue that the SN that created the radio pulsar PSR B1706-16 caused the ejection of ζ Oph, in addition to the release of a significant amount of 60 Fe gas (some of which was eventually captured on Earth).Thus, ζ Oph is a prime example of a star that was spun up to near critical rotation (Ω/Ω c = 0.95) by mass transfer from a companion that exploded as a SN and imparted a runaway velocity to the survivor.
VFTS 102
The SN ejection mechanism that explains the properties of ζ Oph was explored as the origin of the rapid rotation of VFTS 102 in the discovery paper by Dufton et al. (2011).They noted that the nearby pulsar PSR J0537-6910 is surrounded by an X-ray emitting bow shock that appears to be directed away from the position of VFTS 102, and this implies that the pulsar is a runaway object from the vicinity of VFTS 102.Furthermore, they argued that their measurement of radial velocity, 228 km s −1 , was sufficiently different from the mean for the region that VFTS 102 was also a runaway object.However, the runaway status of VFTS 102 is controversial.Our derived average radial velocity is 267 ± 3 km s −1 , and this is the same within errors as the mean for stars in the region around the LH 99 association, 274 ± 13 km s −1 (Evans et al. 2015).Furthermore, the apparent proper motions of VFTS 102 from Gaia EDR3 (Gaia Collaboration et al. 2021) are µ RA = 1.73 ± 0.04 mas yr −1 and µ DEC = 0.71 ± 0.03 mas yr −1 which agree with the mean values for other nearby massive stars.For example, we formed a sample of 65 O-type stars within a 2 arcmin separation from the massive star Brey 73, which lies near the center of the OB-association LH 99 that is close in the sky to VFTS 102 (Lortet et al. 1991).The mean proper motions of these stars from Gaia EDR3 are µ RA = 1.63 ± 0.16 mas yr −1 and µ DEC = 0.63 ± 0.18 mas yr −1 , i.e., the same within errors as those for VFTS 102.These results suggest that VFTS 102 is not a runaway star.The difference between its spatial velocity and that of the LH 99 association stars is no more than ≈ 30 km s −1 , which is smaller than we would expect from a runaway star.Jiang et al. (2013) presented a binary merger model for VFTS 102, and they showed how a close binary with an initial primary star mass of 12 − 15M , mass ratio M 2 /M 1 > 0.63, and orbital period P < 1.5 d can evolve into contact and merge to create a rapidly rotating star.These parameters are consistent with the current mass of 18M provided some mass loss occurred during the CEE.VFTS 102 does display the properties predicted for a merger: very rapid rotation, enhanced He abundance, overluminosity, and evidence of ongoing mass loss into a large equatorial gas disk.Furthermore, our analysis suggests that it is rotating at essentially the critical rate, so little time has elapsed since the spin-up event for active spin-down processes to occur that are related to evolution (Brott et al. 2011), wind mass loss (Gagnier et al. 2019), angular momentum loss into the circumstellar disk (Krtička et al. 2011), and internal restructuring (Schneider et al. 2020).These facts, combined with the lack of a substantial runaway velocity, suggest that a recent binary merger is the best explanation for the rapid rotation of VFTS 102.An example of a possible merger progenitor is the nearby contact binary VFTS 352 with an orbital period of 1.1 d (Almeida et al. 2015).
VFTS 285
VFTS 285 is among some ten objects that appear to be fleeing from the R136 cluster at the center of 30 Dor (Evans et al. 2010;Lennon et al. 2018;Platais et al. 2018;Renzo et al. 2019;Gebrehiwot & Teklehaimanot 2021).These are all examples of ejection by SN or dynamical encounters.VFTS 285 has a relative tangential velocity in the range of 26 to 48 km s −1 with a time of flight since ejection of 0.6 to 0.7 Myr if it originated in the R136 complex at the center of the NGC 2070 cluster (Platais et al. 2018;Gebrehiwot & Teklehaimanot 2021).We find that the radial velocity is 250 ± 6 km s −1 that is only somewhat smaller than the mean for its cluster of origin, NGC 2070, 271 ± 12 km s −1 (Evans et al. 2015), so both the radial and tangential velocities are consistent with the idea that VFTS 285 is a "slow" runaway star.
There are several factors to consider in determining how VFTS 285 was ejected.Schneider et al. (2018) estimate an age of 1.9 ± 2 Myr for VFTS 285 (see Platais et al. 2018) that is consistent with its location in the HRD near the ZAMS position of a track for its mass (Fig. 12).If this is the actual age, then it is too young for sufficient time to have elapsed for the companion to evolve and explode as a SN (at least 3 Myr for the most massive stars).This young age would indicate instead that the star was ejected by dynamical processes in the R136 complex that has a similar age (≈ 1.2 Myr; Bestenlehner et al. 2020).On the other hand, VFTS 285 may have been ejected from another site in the NGC 2070 cluster, which has a median age of 3.6 Myr (Schneider et al. 2018).In this case, there is sufficient time for a binary companion to reach the SN stage and eject VFTS 285, and then its estimated age would correspond to that of the rejuvenated star after mass accretion in the binary.
The other fact to note is the very high He abundance we determined for VFTS 285 (y = 0.34 ± 0.14).This level of He enrichment is not predicted by single star evolution for a star of its mass and youth, but it could happen through mass transfer from an evolved companion or by large scale mixing associated with a merger.The large He abundance implies an interaction with some kind of evolved object, so the binary path is probably the most likely one for the evolutionary history VFTS 285.It may have been spun up through mass transfer from a companion that exploded and disrupted the binary (like the case of ζ Oph).Bestenlehner et al. (2020) found that the most massive WN5h-type stars in the core of R136 are all He enriched at their surfaces even at very young ages (≈ 1.2 Myr), so mass transfer from such a progenitor companion could potentially explain the He overabundance in VFTS 285.There is one other system that may be a post-mass transfer binary in the 30 Dor region.Clark et al. (2015) found that the rapid rotator VFTS 399 is a strong emitter of variable X-ray emission that is usually associated with thermal emission from an accreting neutron star, so they suspect that VFTS 399 is a high-mass X-ray binary.This would indicate that there has been enough time in some parts of 30 Dor for a possible binary companion of VFTS 285 to explode as a SN, so the formation channel by a SN disruption of the original binary is a viable and attractive explanation.
We doubt that VFTS 285 is the result of a merger, because it would have formed from two lower mass, longer-lived stars, which conflicts with the young age derived from its kinematical and cluster properties.It is possible, however, that a dynamical encounter with another binary led to the ejection of a binary with such high eccentricity that the pair collided and merged on an orbital timescale.Thus, the merger scenario should not be entirely ruled out.
7. CONCLUSIONS VFTS 102 and VFTS 285 are the current record holders for the fastest projected equatorial velocity, and our findings have only solidified that standing.We applied a spectrum synthesis method to create model spectral line profiles that depend on the physical parameters and the inclination angle i between the spin axis and our line-of-sight.Fits of these to the observed profiles led to determinations of both the projected rotational velocity v e sin i and the physical equatorial velocity v e (Table 4).We found that both stars are exceptionally fast rotators with VFTS 102 rotating at v e = 649 km s −1 (Ω/Ω c = 1.00) and VFTS 285 rotating at v e = 648 km s −1 (Ω/Ω c = 0.95).The physical parameters associated with our best fit for VFTS 102 are: R p /R = 5.41 ± 1.55, M/M = 18 ± 6, and T p = 40100 ± 2800 K. VFTS 285 is slightly larger and more massive: R p /R = 5.58 ± 0.39, M/M = 28 ± 8, and T p = 40200 ± 2700 K.
Both stars exhibit very broad and shallow (and often blended) line profiles due to the extreme rotational line broadening.We calculated models for 16 line or line blend features, and from these we selected three H Balmer, three He I, and three He II lines that could be modeled with a self-consistent set of parameters.We found that the best fit parameters also led to predicted profiles for UV features that matched well with the observed, strongly blended spectra.The temperature related variations of surface specific intensities have a much greater contrast between the pole and equator in the UV than in the optical, so the profile shapes are expected to differ (Hutchings 1976).Thus, it is encouraging that our models which incorporate the wavelength dependence of specific intensity are generally successful in fits of both the UV and optical lines.
As a part of our analysis we attempted to measure the He abundance after it became clear that the models associated with a solar He abundance were far too weak to match our observed spectra.We found that all of the target stars are He overabundant (ζ Oph,2.4× solar;VFTS 102,2.0× solar;VFTS 285,3.4× solar).This general He overabundance is probably the result of internal mixing promoted by extreme rotation and/or by past mass transfer of He from an evolved mass donor companion.We caution that while these stars all appear He enriched, the actual He abundances may have systematic errors because the fits were made by simply increasing the He abundance in the radiative transfer solution for the line profiles without re-calculating the full atmospheric structure for the revised He abundance (see Section 4.3).
A comparison of the stellar parameters to those of evolutionary tracks for rapid rotators (Brott et al. 2011) shows that both VFTS 102 and VFTS 285 appear to be somewhat overluminous for their mass.Furthermore, both stars are much more enriched in He than predicted by mixing in these model tracks.These characteristics as well as their implied youth suggest that both VFTS 102 and VFTS 285 may have been rejuvenated by mass transfer from an interacting binary companion.Their current fast rotation may be the result of angular momentum accretion during past mass transfer.
VFTS 102 is rotating very close to the critical rate, is shedding mass and angular momentum into a circumstellar disk, and is enriched in He.These are all the characteristics of a recent, post-merger object as suggested by Jiang et al. (2013).The star's radial velocity and proper motion are similar to those of the nearby OB-association LH 99, so we doubt that VFTS 102 is a runaway star (as suggested by Dufton et al. 2011).
VFTS 285, on the other hand, does appear to be a runaway star ejected from the R136 cluster based upon its proper motion (Platais et al. 2018).An attractive scenario is that VFTS 285 was spun up via mass transfer prior to its companion exploding in a supernova.The binary was disrupted and the orbital motion of the survivor was transformed into the linear ejection velocity of VFTS 285.In this picture, the extreme overabundance of He in the atmosphere of the star marks the remains of nuclear-processed gas from the interior of the mass donor companion.This scenario is similar to the current origin theory for ζ Oph (Neuhäuser et al. 2020), which has a similar He abundance to that of VFTS 285.Our work adds to a growing body of evidence that a significant fraction of the rapid rotators among the massive stars were spun up through binary mass transfer (Bodensteiner et al. 2020;Wang et al. 2021;Gies et al. 2022).
Figure 1 .
Figure 1.Far-ultraviolet spectra of the three rapidly rotating stars.The spectra are normalized to a pseudo-continuum level of unity, and they are offset for clarity.The broad, shallower features are formed in the photospheres, and the many sharp lines have an interstellar origin (as does the very broad Lyα λ 1215 absorption line).
Figure 2 .
Figure 2. The optical spectral region of the three stars (from top to bottom) VFTS 102, VFTS 285, and ζ Oph.Most of the emission features are from background nebular emission or instrumental flaws, with the exception of the disk emission in the Balmer lines in the spectrum of VFTS 102 (especially Hα λ6563).
Figure 3.The observed spectral energy distribution (SED) of ζ Oph.The small crosses show the low resolving power observed fluxes, and the diamonds indicate the high resolving power fluxes calculated at 16 specific wavelengths using the rotation code ( §4.2).The solid line shows a low resolving power model SED from the TLUSTY code for a non-rotating star with the hemisphere-average temperature and gravity of the star (Table4).The model fluxes are attenuated for interstellar extinction using the reddening E(B − V ) given in Table4.
)Figure 4 .
Figure 4.The observed spectral energy distribution (SED) of VFTS 102 in the same format as Fig. 3.The dotted line shows the SED of a TLUSTY model for the star alone, and the solid line presents the model of the combined flux of the star and its circumstellar disk.
)Figure 5 .
Figure 5.The observed spectral energy distribution (SED) of VFTS 285 in the same format as Fig. 3.
Figure 6 .
Figure 6.An example of the model specific intensity profiles for the region in the vicinity of He I λ4471.The plots show I λ for µ = 0.1 to µ = 1.0 at steps of µ = 0.1 from bottom to top.Continuum limb darkening is evident (darker at the limb) while the central line depth of He I λ4471 is relatively constant.
Figure 7 .
Figure7.A depiction of the appearance of a rapidly rotating star and a selection of its spectral lines as viewed at different inclinations.The top row shows an image of a non-rotating star and its spectral lines in three regions.The next three rows show a star at near-critical rotation and the same three spectral features now plotted as a function of Doppler shift relative to ve sin i. Solid (dotted) lines show the model profiles using the ω-model (von Zeipel) descriptions of gravity darkening.The dashed lines in the lower two rows show the difference between the ω-model profiles for the specified inclination and that for i = 90 • (row 2).The difference is scaled up by a factor of three for the cases of C III λ1175 and He I λ4471, indicated by a "3x" label.
Figure 8 .
Figure8.Left: A plot of the fractional differences between the observed and model calculated equivalent widths for a He abundance that is twice solar.The model results for each test polar temperature Tp are plotted as small plus signs that are connected by spline fits.The He I λλ3819, 4026, 4387 features are plotted as solid, dashed, and long dashed lines, respectively, and the He II λλ4541, 4686, 5411 features are plotted as dotted, dot-dashed, and triple dot-dashed lines, respectively.The large plus sign near the center marks the mean of the He I and He II intersection points, which is positive in this case (He too weak).Right: The same trends plotted for an assumed He abundance that is three times the solar value that leads to a negative mean (He too strong).
Figure 9 .
Figure9.The continuum normalized spectral features of ζ Oph (solid lines) together with the synthetic spectra from the rotational model (dotted lines).The top axis represents Doppler shift in the rest frame of the star, while the lower axis depicts the observed (heliocentric) wavelength.The primary feature in each panel is identified with a label in the lower left, and those features included in the parameter fitting scheme are indicated by an asterisk appended to the label.
Figure 10 .
Figure10.The continuum normalized spectral features of VFTS 102 (solid lines) together with the synthetic spectra from the rotational model (dotted lines) in the same format as Figure9.The sharp features in the cores of some of the H Balmer and He I lines are artifacts from incomplete removal of the surrounding nebular emission lines.
6. EVOLUTIONARY ORIGINS 6.1.Single Star Models Both VFTS 102 and VFTS 285 display exceptionally large rotational line broadening compared to other O-type stars in the VFTS sample (Ramírez-Agudelo et al. 2013).Here we consider what processes may have contributed
Figure 11 .
Figure11.The continuum normalized spectral features of VFTS 285 (solid lines) together with the synthetic spectra from the rotational model (dotted lines) in the same format as Figure9.The sharp features in the cores of some of the H Balmer and He I lines are artifacts from incomplete removal of the surrounding nebular emission lines.
Figure 12 .
Figure12.Evolutionary tracks in the HRD for rapidly rotating massive stars fromBrott et al. (2011).The solid lines show the tracks for stars of masses 16M , 19M , 25M with initial equatorial velocities of 562, 557, and 548 km s −1 , respectively.Small plus signs indicate time intervals of 1 Myr, and the square and asterisk symbols show the point on the tracks where the surface He abundance reaches y = 0.2 and 0.4, respectively.The diamond and X symbols mark the observed average temperature and luminosity for VFTS 102 and VFTS 285, respectively, from model fits of the spectral lines.
Table 1 .
Overview of Spectroscopic Observations
Table 4 .
Summary of Rotational Parameters a D = derived from model; F = fit; S = set.b From Howarth & Smith (2001). | 21,442.4 | 2022-04-15T00:00:00.000 | [
"Physics"
] |
Study of spectral index of giant radio galaxy from Leahy’s Atlas: DA 240
. Here we investigate the giant radio galaxy DA 240, which is a FR II source. Specifically, we investigate its flux density, as well as the spectral index distribution. For that purpose, we used publicly available data for the source: Leahy’s atlas of double radio-sources and NASA/IPAC Extragalactic Database (NED). We used observations at 326 MHz (92 cm) and at 608 MHz (49 cm) and obtained spectral index distributions between 326 and 608 MHz. For the first time we give spectral index map for these frequencies. We found that the synchrotron radiation is the dominant radiation mechanism over most of the area of DA 240, and also investigated the mechanism of radiation at some characteristic points, namely its core and the hotspots. The results of this study will be helpful for understanding the evolutionary process of the DA 240 radio source.
Introduction
Double Radio sources Associated with Galactic Nuclei (DRAGNs) are clouds of radio-emitting plasma which have been shot out of active galactic nuclei (AGN) via narrow jets.More precisely, a DRAGN would be a radio source containing at least one of the following types of extended, synchrotron-emitting structures: jet, lobe, and hotspot complex (Leahy, 1993).
AGNs are emitting the most radiation from galaxies, and in case of radio galaxies, a lot of their radiation is emitted at radio wavelengths.Giant radio galaxies (GRGs) belong to a unique class of objects with very large radio structures.Originally, they were defined to be the radio galaxies with projected linear sizes greater than 1 Mpc (Willis et al., 1974).This limit applied to a spatially flat (Ω κ = 0) Friedmann cosmological model with the Hubble constant H 0 = 75 km s −1 Mpc −1 , deceleration parameter q 0 = 0.5 and zero cosmological constant (Ω Λ = 0).Nowadays, the GRG size limit is equivalent to 700 kpc (Tang, 2021) in a ΛCDM cosmology with the parameters by Planck Collaboration from 2016, i.e. in the flat cosmological model with H 0 = 67.8km s −1 Mpc −1 and Ω m = 0.308 (Planck Collaboration, 2016).Due to this comparatively large angular extent of the GRG population, astronomers can observe their fine structures with detailed imaging.
DRAGN DA 240 was among the first GRGs to be recognized as such (more precise, 3C 236 and DA 240 are the first two discovered GRGs, both identified as Fanaroff-Riley II types).A study of its environment can be found in Peng et al. (2004).It consists of two radio clouds about 40 long, and a comparatively weak central core (Artyukh & Ogannisyan, 1988).This giant radio source, with linear size spanning over 1.3 Mpc, is placed at a distance of 215 Mpc.There are also other researchers who investigated radio source DA 240 (Mack et al., 1997;Chen et al., 2011aChen et al., ,b, 2018a,b;,b;Peng et al., 2015;Milley, 2019).
Data and method
Regarding Earth-space (and also space-Earth) communication ranges, there is a wide spectral window in radio band in which Earth's ionosphere does not reflect extraterrestrial radio waves and where atmosphere is transparent for them.This range of radio frequencies, observable from Earth, spans from ∼10 MHz (30 m) to ∼1 THz (0.3 mm).Particularly, for DA 240 there are freely available groundbased observational data at the following frequencies: 326, 608, 2695, 4750 and 10550 MHz.Among these, for this paper we choose observations only at 326 and 608 MHz because of their completeness, and due to good visibility of this source over the whole its area, as well.This is not the case for other three frequencies, where only the regions around hotspots and jets are clearly visible, while the radio lobes are pretty faint at these frequencies.
For our calculations, we used Flexible Image Transport System (FITS) data files containing the flux densities in Jy (1 Jy = 10 −26 W m −2 Hz −1 ) of a chosen radio source.We described the structure of FITS format, as well as what is useful for our investigation, in our previous paper Borka Jovanović et al. (2023).Besides its flexibility and storage efficiency, here we want to point out the long term archiving i.e. all versions of the FITS format are backwards-compatible, so one can compare data from more observations (details at https://heasarc.gsfc.nasa.gov/docs/heasarc/fits_overview.html).
The observed data are available in An Atlas of DRAGNs i.e. the "3CRR" sample of Laing, Riley & Longair (1983).Besides FITS files, this sample gives the readers very useful information from the literature on the DRAGNs, with tables and references, too.There are the Introductory Pages, Description Pages (with full details) and the Listings of individual DRAGNs.Also, we used astronomical database compiled by NASA and IPAC, i.e.NED database -a comprehensive database of multiwavelength data for extragalactic objects, with the information and bibliographic references regarding these objects.
Hence, the easily searchable and accessible data are provided at: -J. P. Team, 2002).
The observations of DA 240 at 326 and 608 MHz were carried out using "The Westerbork Synthesis Radio Telescope" (WSRT) radio telescope.It is located in Netherlands, previously ran by Netherlands Foundation for Research in Astronomy, underwent more upgrades and a major (phased array upgrade of the WSRT) was completed in 2019, resulting in the WSRT-Apertif system.Nowadays, WSRT-Apertif telescope is operated as a survey instrument by ASTRON -the Netherlands Institute for Radio Astronomy.
It is useful here to notice that we used the calibrated data, i.e. the data and images after they are processed using analysis techniques and programs like CLEAN algorithm (see Strom, Baker & Willis (1981) and references therein).Particularly, regarding the resolutions of the processed data used here, their values are: 20 at 326 MHz and 9.2 at 608 MHz.
We used observations of DA 240 at two frequencies, 326 MHz (Willis & O'Dea, 1990) and 608 MHz (Willis et al., 1974).The calculation method, which we have developed, was first published in Borka (2007), with the most detailed explanation given in Borka Jovanović (2012), and further elaborated in Borka Jovanović et al. (2012), also.The area of the investigated radio source, as well as the flux densities, are determined in these three ways: I -flux density contours (isolines S ν ); II -flux density 2D profiles, for constant declination; III -3D profiles (by doing the procedure in all three ways, it can easily be checked whether the results and analysis are good).
As a telescope is most sensitive to extended emission at long wavelengths (i.e.smaller frequencies), the WSRT could provide detailed and sensitive maps of DA 240 at long wavelengths.So, it had a resolvong power sufficient to separate this large object from unrelated confusing sources.As noted in original discovery paper by Willis et al. (1974), a remarkable feature is the huge range of surface brightness over the intensity map, as well as the prominence of the eastern component.Also, it has strong linear polarization: an integrated percentage polarization at 49 cm of 8.3%.
The flux density distribution of giant radio galaxy DA 240
The From the observations of DA 240 at two frequencies, 326 MHz (92 cm) and 608 MHz (49 cm), we determined the contours which represent the lower boundaries of the source.We found that the minimal fluxes are the following: S ν,min = 0.016 Jy at 326 MHz, and S ν,min = 0.0023 Jy at 608 MHz.The areas of DA 240, with the flux density distributions (in Jy) over these areas, we presented by two-dimensional and three-dimensional plots in Figs. 1 and 2. From these radio maps, the spherical radio lobes can be clearly seen, as well as their hotspots at the end of each beam.
We also give an examples of flux profiles for some constant declinations, at the both frequencies.In Fig. 3 we give the profile for δ = 56 • .03containing north-eastern hotspot (left) and for δ = 55 • .87 containing south-western hotspot (right), at 326 MHz; while in Fig. 4 we give profiles for the same δ but at 608 MHz.
Spectral index distribution between 326 and 608 MHz
The amount of flux density S ν as a function of frequency ν is given by the expression: where α is a constant, called the "radio spectral index".
The radio spectral index α can be obtained using the flux density at different frequencies and taking the negative slope of the relation (1).Therefore, we calculate it by the following equation: To obtain the radio spectral index α between two frequencies, in each point over the area of the source, we need values of fluxes at the same coordinates (α, δ).As we have used data with different resolutions, first we had to reduce them to the same resolution, and then to apply Eq. ( 2).For that purpose we used bilinear interpolation for resampling observations with higher resolution (608 MHz) to the nearest existing coordinates at lower resolution (326 MHz).In that way, the data were comparable, and we were able to calculate spectral index.
The spectral index is used for classification of radio sources and studying the origin of radio emission: -If α > 0.1 the emission is non-thermal (synchrotron) and it means that it does not depend on the temperature of the source, for α < 0 it is thermal and depends only on the temperature of the source.
We calculated spectral indices between 326 and 608 MHz, over the whole area of DA 240, and we show how they change over this area in Fig. 5.
From the colorbar in Fig. 5 we can read the values of radio spectral index α, and we can notice that over the area of DA 240 it ranges from ≈ 0 to positive values, meaning this: α > 0 corresponds to non-thermal mechanism of radiation while α < 0 corresponds to thermal mechanism of radiation.When the spectral index is zero, the flux density is independent of frequency, and the spectrum is said to be flat.
As is it can be seen from the presented radio-index map, the spectral index is almost always higher than zero, except in only few small parts where it is around zero.As expected, the largest values of spectral index are in the regions around hotspots (especially eastern one), while the lowest values are in vicinity of AGN, which is dominated by thermal radiation mechanism.This indicates that the non-thermal (synchrotron) emission is by far the most dominant radiation mechanism over the whole source (except at AGN).
Discussion and conclusions
We used the available flux densities of DA 240 at 326 and 608 MHz to provide the spectral index distribution derived between these two frequencies.At both frequencies, the flux structure is characterized with obvious hotspots and the core.We can notice a large variations of flux density over the intensity map, with its highest value at the eastern component, which is dominant high brightness region.In the spectral index map this tendency is even more pronounced.
For the first time we give spectral index map of DA 240 between 326 and 608 MHz.We show that synchrotron radiation is the dominant emission mechanism over the majority of the area of the source, except in the central core.
Our investigation of the giant radio galaxy DA 240, i.e. of the flux density and spectral index distribution, is leading to the following conclusions: by using publicly available data (Leahy's atlas of double radio-sources, as well as the NED database), we were able to investigate flux densities at 326 MHz (92 cm) and 608 MHz (49 cm), although we have developed method of calculation for main Galactic radio loops I-VI, it is applicable (and also rather efficient) to all SNRs, end to extragalactic radio sources, as well, from our results, a remarkable feature can be noticed: the huge range of flux density over the intensity map, as well as the prominence of the eastern component, there are two lobes, with a prominent hotspot in the north-east component, and a weaker one to the south-west, the distribution of spectral index α enables to follow how α varies over the area, to read its values, and also to determine the origin of the radiation, synchrotron radiation is the dominant emission mechanism over the whole area of the source.
Figure 2 .
Figure 2. The same as in Fig. 1, but for 608 MHz.
Figure 5 .
Figure 5. Spectral indices between 326 and 608 MHz over the area of DA 240. | 2,858.8 | 2023-10-20T00:00:00.000 | [
"Physics"
] |
Longitudinal liquid biopsy anticipates hyperprogression and early death in advanced non-small cell lung cancer patients treated with immune checkpoint inhibitors
Background Immune checkpoint inhibitors (ICIs) have revolutionised treatment of advanced non-small cell lung cancer (aNSCLC), but a proportion of patients had no clinical benefit and even experienced detrimental effects. This study aims to characterise patients experiencing hyperprogression (HPD) and early death (ED) by longitudinal liquid biopsy. Methods aNSCLC receiving ICIs were prospectively enrolled. Plasma was collected at baseline (T1) and after 3/4 weeks of treatment, according to the treatment schedule (T2). Cell-free DNA (cfDNA) was quantified and analysed by NGS. cfDNA quantification and variant allele fraction (VAF) of tumour-associated genetic alterations were evaluated for their potential impact on outcome. The genetic alteration with the highest VAF (maxVAF) at baseline was considered as a reference. Results From March 2017 to August 2019, 171 patients were enrolled. Five cases matched criteria for HPD and 31 ED were recorded; one overlapped. Quantification of cfDNA at T2 and its absolute and relative variation (T2–T1) were significantly associated with the risk of ED (P = 0.012, P = 0.005, P = 0.009). MaxVAF relative change (T2–T1/T1) was significantly associated with the risk of HPD (P = 0.02). After identifying optimal cut-off values, a two-step risk assessment model was proposed. Discussion Liquid biopsy performed early during treatment has the potential to identify patients at high risk of ED and HPD.
INTRODUCTION
Immunotherapy is widely considered one of the most important advancements in the treatment of advanced non-small cell lung cancer (aNSCLC). While the majority of non-oncogene addicted aNSCLC patients are currently treated with immune checkpoint inhibitors (ICIs) either in monotherapy or in combination with chemotherapy [1][2][3][4][5], great heterogeneity in response and duration of clinical benefit has been observed. The search of predictive biomarkers is one of the main burning issues in thoracic oncology, and, at present, the only available predictive marker for ICIs is PD-L1 expression in tumour cells, although clearly showing its limitations in the clinical setting.
Notably, there is increasing evidence that ICIs may be associated with very poor outcome or even detrimental effects in a quote of NSCLC patients. This concept was initially related to the observation of an increased number of deaths recorded during the first 12 weeks in patients receiving ICIs versus chemotherapy [6]. Furthermore, a specific radiological pattern of progression, called hyperprogression (HPD), has been associated to the potential detrimental effects of ICIs and it was characterised by the increased rate of tumour growth with respect to radiological imaging performed before the start of immunotherapy [7][8][9][10]. Retrospective analyses have confirmed that the two phenomena are not fully overlapping [11] and different biological mechanisms at their basis could be hypothesised.
Here, we aim to characterise patients experiencing HPD and early death (ED) following ICIs administration using liquid biopsy to quantify cfDNA and to screen for genetic alterations at baseline and at an early timepoint after treatment.
PATIENTS AND METHODS Patients and plasma sample collection
According to the spontaneous prospective study called MAGIC-1 approved by the Istituto Oncologico Veneto Ethics Committee (protocol number 2016/82, 12/12/2016), we prospectively enrolled all advanced EGFR-ALK-ROS1 wild-type NSCLC patients starting systemic treatment at our Institution between January 2017 and August 2019 [12]. Eligibility criteria were: availability of tumour biopsy material collected before starting any treatment, the planning of the systemic treatment and the possibility of an adequate clinical and radiological follow-up. Patients were treated according to clinical practice with chemotherapy or ICIs and palliative local treatment was allowed according to the treating physician's choice.
As previously described [12], liquid biopsy samples were collected at pre-specified timepoints during treatment: at the time of first administration of systemic treatment (baseline, T1), after 3 or 4 weeks of treatment (according to the treatment schedule) (3 ± 1 w, T2), at first radiological restaging (T3), and at radiological progression (PD, T4).
Written informed consent was obtained from all patients before study entry. The study was conducted in accordance with the precepts of the Helsinki declaration.
For this study, only patients receiving single-agent ICIs were considered, and molecular analyses were performed in plasma samples collected at T1 and T2.
Patients experiencing ED were defined as patients experiencing death related to lung cancer within 12 weeks from the start of ICI [6].
Patients having at least two computed tomography (CT) scans available before the start of ICI were evaluated for the presence of HPD. Baseline CT scan was performed within 6 weeks before the start of ICI, and a minimum of 3 weeks between the two previous CT scans were required. Radiological imaging was evaluated by using RECIST v1.1 criteria. Tumour growth rate (TGR) was defined according to previously published criteria [13,14] and progressive disease (PD) was defined as HPD when TGR measured during ICI exceeds 50% TGR measured before ICI [10].
Among patients not experiencing HPD or ED, we analysed as control group patients experiencing PD not matching HPD criteria and patients deriving clinical benefit (CB) from ICIs, when plasma DNA available was suitable for NGS analysis (Supplemental Fig. 1). CB was defined as no evidence of PD within 6 months since the beginning of ICIs.
Plasma sample collection
At each timepoint, blood samples (~20 ml) were collected in two cell-free DNA BCT tubes (Streck Corporate, La Vista, NE, USA) and processed within 24-72 h. Plasma was collected as previously described (11). Briefly, blood samples were centrifuged at 2000 × g for 10 min at 4°C, and next, the supernatant was centrifuged at 20,000 × g for 10 min. Plasma was stored at −80°C, until its use. cfDNA extraction and quantity and quality assessment Molecular analyses were performed on patients complying clinical inclusion criteria and having adequate plasma DNA available.
cfDNA was extracted from 2 to 5 mL of plasma using the AVENIO cfDNA Isolation Kit (Roche Diagnostics Spa, Monza, Italia) and eluted into 60 μL of Elution Buffer, according to the manufacturer's instructions. cfDNA was quantified using the QuBit dsDNA HS Assay kit with QuBit 3.0 fluorimeter (Thermo Fisher Scientific, San Jose, CA), and cfDNA quality was assessed by Agilent Bioanalyzer using a High Sensitivity kit (Agilent Technologies, Palo Alto, CA). The extracted cfDNA was stored at −20°C until analysis.
cfDNA sequencing
Sequencing libraries were prepared from 10 to 50 ng cfDNA, using the AVENIO ctDNA Expanded kit (77 genes; Roche Diagnostics Spa), according to the manufacturer's instructions, and as previously described [15]. Individual enriched libraries were quantified with the QuBit dsDNA HS Assay kit (Thermo Fisher Scientific), and their profile was assessed using the Agilent High Sensitivity kit on the Agilent 2100 Bioanalyzer.
Analysis and variant calling was performed using the AVENIO ctDNA analysis software (Roche Diagnostics), with default parameter settings for the Expanded Panel.
Only variants with a variant allele fraction (VAF) ≥0.5% and annotated as pathogenic, likely pathogenic or with uncertain significance were taken into account as trackable mutations in plasma samples.
Statistical analysis
NGS results were elaborated by considering as "non informative" all cases with no genetic alterations detected in plasma samples both at T1 and at T2, which were not included for statistical analysis. To analyse the impact of genetic alterations in plasma on outcome endpoints, in the presence of multiple mutations, the one with the highest VAF (maxVAF) at baseline was considered as the reference, and its value was considered a continuous variable. Statistical analyses were performed also considering the mean VAF as the reference, and we observed a full concordance with analyses by considering the mean values of VAF instead of maxVAF (data not shown). For quantitative evaluation, VAF data below the LOD (0.5% VAF, as previously assessed [15]) at a single timepoint were replaced with a random number from a uniform distribution on the interval [LOD/2, LOD].
Quantification of cfDNA was considered as continuous variable. Quantitative variables were summarised as median and interquartile range (IQ), categorical variables as counts and percentages. The distribution of cfDNA and maxVAF among clinical variables was verified using the Kruskal-Wallis test and pairwise comparisons used the Wilcoxon rank-sum exact test. The correlation between molecular variables has been tested by using the Spearman test with a P value < 0.05 considered as significant.
The impact of clinical predictors on the probability of experiencing HPD or ED was estimated in univariate and multiple logistic regression models. Further, the association of cfDNA and maxVAF with HPD or ED was evaluated in separate logistic regression models, adjusted with clinical factors found significant at multiple analysis. Each biomarker was also considered as a categorical variable according to high and low levels. Optimal cut-points were selected in the full sample using a criterion based on maximising the Youden index, being the difference between true positive rate and false positive rate over all possible cut-point values, and validated with bootstrapping. The odds ratios (OR) were reported with their 95% confidence interval (CI). The median follow-up time was based on the reverse Kaplan-Meier estimator.
Radiological response (RR) was assessed by using RECIST criteria v1.1. For the current analysis, CB was defined as stable disease (SD) plus partial response (PR) plus complete response as best RR. Progression-free survival (PFS) was calculated as the time from the beginning of the systemic treatment (corresponding to T1-the time of the baseline sample draw) to radiological PD or death for any cause. Overall survival (OS) was calculated as the time from the beginning of the systemic treatment to death from any cause. Patients who did not develop an event during the study period were censored at the date of the last observation. Median PFS and OS were estimated using the Kaplan-Meier method and reported with their 95% CI calculated according to Brookmeyer and Crowley.
All statistical tests used a two-sided 5% significance level and a P value <0.05 was considered statistically significant. Statistical analyses were performed using the SAS statistical package (SAS, rel. 9.4; SAS Institute Inc.), RStudio (RStudio: Integrated Development for R. RStudio, Inc., Boston, MA),) and the cutpointr package of R software.
RESULTS
Study population, treatments and outcome A total of 171 aNSCLC patients enrolled in the MAGIC-1 study and receiving ICIs were evaluated for the current study. Details about clinical features of the whole population and treatments received are summarised in Table 1.
Five patients (3%) experienced progression matching radiological criteria for HPD, 31 patients experienced ED, and one of them met also radiological HPD criteria (Supplemental Fig. 1).
In order to test the hypothesis that longitudinal liquid biopsy could be able to identify patients at higher risk for dismal outcome or detrimental effects, we considered all patients experiencing HPD and ED with plasma DNA suitable for NGS analyses both at baseline and at the earliest timepoint during treatment (T2). All HPD cases and 12 out of 31 ED cases fit this criterion and were analysed to test the impact of molecular variables on outcome (Supplemental Fig. 1). For statistical analysis, the patient experiencing ED and also matching radiological criteria for HPD was considered in the group of HPD patients. Control patients not experiencing HPD or ED included 16 cases (Supplemental Fig. 1).
Clinical features of analysed patients are summarised in Table 1 and are not significantly different from those of the whole study population (data not shown).
We tested the hypothesis that clinical features might be associated with increased probability of experiencing HPD and/ or ED. Logistic regression described in Supplemental Table 1 showed that the presence of more than one extrathoracic metastatic site was associated with higher risk of experiencing ED and/or HPD (P = 0.002).
Molecular analysis of longitudinal liquid biopsy
Circulating free DNA (cfDNA) from plasma samples collected at T1 and T2 of 32 patients was analysed, for a total of 64 samples (Supplemental Fig. 1 and Supplemental Table 2).
cfDNA concentrations assessed at baseline (T1) ranged from 3.97 ng per ml of plasma to 290.36 ng per ml of plasma, with a median value of 13.34 ng per ml of plasma (Supplemental Table 2).
All cfDNA samples were found to be adequate for the subsequent NGS analysis, in term of quality and quantity. Sequencing parameters of all analysed samples are reported in Supplemental Table 3. All analysed samples showed a theoretical sensitivity at unique depth greater than 99%, thus enabling a limit of variant detection (LOD) of 0.5%.
cfDNA concentration identification of ED and HPD patients cfDNA concentration at any timepoint was not significantly correlated with clinical features (Supplemental Table 5).
We investigated the role of cfDNA concentration at different timepoints in predicting ED or HPD. While baseline cfDNA concentration was not associated with the risk of experiencing either ED or HPD (Table 2A, B), a significant difference in the median concentrations of cfDNA at T2 and in its variation during treatment were observed among the four clinically defined subgroups of patients (HPD, ED, CB versus PD) (P < 0.001 for T2, P < 0.001 for the absolute difference and P = 0.002 for the relative change from T1 to T2) (Fig. 1b-d, and Table 3A).
Specifically, median concentration of cfDNA at T2 was 67.82 (ng per ml of plasma) (95% CI: 55.37-94.64) in the ED subgroup, versus 9.07 (95% CI: 6.32-22.44) for patients not experiencing ED (P < 0.001) ( Table 3B). A greater variation in cfDNA concentration during treatment in ED patients was also shown (P < 0.001 for both the absolute and relative difference from T1 to T2) (Table 3A and Supplemental Fig. 3). Interestingly, logistic regression confirmed that both T2 concentration and variation T1-T2 permit to identify the risk of experiencing ED (Table 2A), also when analysing the impact of clinical factors affecting the risk for ED (Supplemental Table 1). On the other hand, neither cfDNA concentration at T2 nor its variation T1-T2 were associated with the risk of HPD (Table 2B).
Monitoring of plasma genotyping and identification of ED and HPD patients At least one somatic variation was identified in 72% (23/32) of plasma samples at baseline, with an average of two mutations per sample. The most frequent mutations were found in TP53 (28%), KRAS (10%), APC (5%) and STK11 (5%) genes (Supplemental Table 2). VAF of individual detected genetic alterations ranged from 0.5 to 52.88%, with a median of 4.71% (Supplemental Table 2).
We first evaluated the potential impact of clinicopathological features on the parameter maxVAF, used as a reference value for NGS results, but no correlation was found between maxVAF at any timepoint and clinical features (Supplemental Table 6).
When we tested potential impact of maxVAF on outcome endpoints, we observed that the maxVAF value detected at baseline and at T2, considered as a static parameter, was statistically associated with an increased risk of experiencing ED (Table 2C). In particular, the median value of maxVAF at T2 was 33.46 (95% CI: 16.05-42.10) for patients experiencing ED versus 1.74 (95% CI: 0.75-7.39) for the rest of the study population (P = 0.004; Table 3D, Fig. 1).
Cut-off definition and proposal for HPD/ED risk assessment in clinical practice
In order to investigate the potential applicability of our results, we defined the optimal cut-off value for cfDNA levels and maxVAF able to individuate patients at higher risk to develop dismal outcome or detrimental effects following ICIs treatment. Through a ROC-based analysis, we determined the value of 22.7 ng per ml of plasma as the optimal cut-off of the cfDNA concentration at T2 (corresponding to the lower quartile limit) to discriminate between patients experiencing ED or HPD versus all other patients, with an accuracy of 81% (95% CI: 64-93). The median cut-point value in the boostrap samples was 22.7 (95% CI: (Table 4A). Similarly, we identified the optimal cut-off of cfDNA absolute and relative variation from T1 to T2. An absolute change of 3.8 ng per ml of plasma or a relative increase of 0.2 ng per ml of plasma (T1-T2) identified an increased risk of ED or HPD with an adjusted OR of 68.2 (95% CI: 5.6-828.6, P = 0.001) and 17.2 (95% CI: 2.6-114.7, P = 0.003), respectively (Table 4A). The performance of optimal cutpoins is reported in Supplemental Table 7.
Since cfDNA quantification was not able to specifically identify all HPD patients, we analysed the maxVAF to define a cut-off value for the risk of experiencing HPD: the value of 0.71 for maxVAF relative increase T1-T2 emerged as the optimal cut-off, with an accuracy of 84% (95% CI: 64-95). Specifically, patients with maxVAF relative variation T2-T1 exceeding this value had an increased risk for experiencing HPD, with an OR of 13.5 (95% CI: 1.3-136.0, P = 0.027) (Table 4B).
DISCUSSION
The introduction of ICIs in clinical practice has radically changed the outcome of non-oncogene addicted aNSCLC patients [16,17]. Even though ICIs are associated with the chance of long survivorship, their benefit is highly heterogeneous and some detrimental effects have been described [18][19][20]. Importantly, ED and HPD have been observed even in aNSCLC patients expressing high levels of PD-L1 and treated in first-line with single-agent or combination ICIs [3,21] while no biomarkers are currently available for their identification.
In our report, we assessed the potential value of liquid biopsy at early timepoints during ICIs treatment for assessing the risk for potential detrimental effects. HPD is a phenomenon related to largely unknown biological mechanisms triggered by ICIs leading to accelerated tumour growth. HPD is complex to assess in clinical practice [7,8] and no unique defining criteria are available [10,22] although the need for taking into consideration both clinical and radiological criteria has already been raised [9]. Among potential clinical criteria, we decided to include patients experiencing death within 12 weeks from the start of ICIs, being an objective criterion and a phenomenon already observed in several clinical trials [6,11]. Evidence about lack of complete overlap between the two phenomena, ED and HPD, in line with previous observations [23,24], has been confirmed in our experience. We also confirmed the potential impact of extrathoracic disease on the risk of developing ED or HPD and the occurrence of ED even in patients expressing high level of PD-L1 and treated in a first-line setting [3,10,25].
When considering the impact of longitudinal liquid biopsy, we analysed both cfDNA values and VAF of tumour-associated genetic alterations during treatment: our results permit to speculate a differential role of the two assessed parameters and a potential different biological background for the ED and HPD phenomena. Specifically, ED was associated with a dramatic variation in cfDNA concentration between T1 and T2, but no significant change of maxVAF between T1 and T2. Although it is generally hold that ctDNA represents only a small portion of total cfDNA [26], we speculate that in the ED subgroup ctDNA might represent a large part of cfDNA. Although the NGS assay used in this study did not enable estimation of the tumour fraction in cfDNA, our hypothesis is supported by the much higher maxVAF value of tumour-associated mutations in ED samples both at T1 and T2, compared with other samples (Table 3C, D). Expectedly, we did not observe a variation in the maxVAF value between T1 and T2, likely due to the fact that this parameter reached a plateau value in ED patients. On the other hand, cfDNA concentration was not associated with HPD, whereas the dynamic relative variation of maxVAF T1-T2 identified patients experiencing HPD. Since maxVAF relative variation is independent from the cfDNA, it might be more influenced by rapid increase in tumour growth following the start of ICIs and less related to baseline prognostic factors.
The role of liquid biopsy in patients with solid tumours and its predictive potential on outcome has already been described [12,[27][28][29], but identification of potential detrimental effects of ICIs requires early time-point evaluation. To the best of our knowledge, our study is the first evaluating the impact of liquid biopsy performed after 3-4 weeks of treatment [12] and the first assessment concerning the identification of HPD and ED by using liquid biopsy. cfDNA quantification is easy to perform and could be assessed at low cost in clinical practice. We thus suggest a twostep risk assessment model, including an initial evaluation of cfDNA in plasma at T1 and T2 followed by NGS (Supplemental Table 4. Logistic regression predicting the risk of experiencing ED or HPD according to cfDNA concentration or VAF as a categorical variable. Fig. 4). Although limited by the relatively low number of patients included, this approach represents a proof-of-concept analysis in order to show potential clinical applicability of our longitudinal liquid biopsy model. Early identification of ED/HPD patients has great potential for clinical applications as it could help optimisation and personalisation of treatment, thus avoiding more toxic combination treatment when not needed. In view of the main limitation of our study, represented by the relatively small number of cases analysed, the prospective interventional trial is warranted to confirm our results and validate a dynamic risk-based treatment approach.
In conclusion, this study represents a proof-of-concept analysis concerning an innovative approach to the issue of predictive biomarkers of immunotherapy in lung cancer, focusing on patients who do not derive any clinical benefit and could benefit of a customised approach including early changes in treatment.
DATA AVAILABILITY
The data generated and analysed during this study are included in this published article and its additional files. Further raw data might be asked to the authors. | 4,939.2 | 2022-09-29T00:00:00.000 | [
"Medicine",
"Biology"
] |
Graptopetalum paraguayense Extract Ameliorates Proteotoxicity in Aging and Age-Related Diseases in Model Systems
Declines in physiological functions are the predominant risk factors for age-related diseases, such as cancers and neurodegenerative diseases. Therefore, delaying the aging process is believed to be beneficial in preventing the onset of age-related diseases. Previous studies have demonstrated that Graptopetalum paraguayense (GP) extract inhibits liver cancer cell growth and reduces the pathological phenotypes of Alzheimer’s disease (AD) in patient IPS-derived neurons. Here, we show that GP extract suppresses β-amyloid pathology in SH-SYS5Y-APP695 cells and APP/PS1 mice. Moreover, AMP-activated protein kinase (AMPK) activity is enhanced by GP extract in U87 cells and APP/PS1 mice. Intriguingly, GP extract enhances autophagy in SH-SYS5Y-APP695 cells, U87 cells, and the nematode Caenorhabditis elegans, suggesting a conserved molecular mechanism by which GP extract might regulate autophagy. In agreement with its role as an autophagy activator, GP extract markedly diminishes mobility decline in polyglutamine Q35 mutants and aged wild-type N2 animals in C. elegans. Furthermore, GP extract significantly extends lifespan in C. elegans.
Introduction
Aging is a normal physical process characterized by a general decline in physiological functions and behavioral capacity, leading to reduced vitality and eventually death [1,2]. As humans age, cellular damages accumulate, increasing the risk of disease formation. Among these age-related diseases, neurodegenerative diseases, such as Alzheimer's disease (AD), Parkinson's disease, and Huntington's disease (HD), have garnered much attention due to the lack of effective treatment and accompanied economic burdens.
AD is the most common cause of dementia in people who are older. Amyloid plaques and neurofibrillary tangles (NFTs) in the brain, composed of abnormally folded amyloid-β42 (Aβ42) and phosphorylated tau proteins, are the pathological hallmarks of AD [3]. Autophagy has been recognized as a critical cellular mechanism in maintaining cellular homeostasis by degrading aggregated proteins and damaged organelles [4]. Recently, autophagy has been shown to mediate Aβ metabolism and tau assembly [5]. Numerous studies also demonstrate that autophagy dysfunction has been indicated in AD progression [6][7][8][9]. Furthermore, several pieces of evidence suggest that enhancing autophagy could promote the degradation of pathologic protein aggregates in AD and HD models [10][11][12][13][14].
Loss of protein homeostasis is a key hallmark of aging [2]. Thus, autophagy activation is also suggested to be beneficial for lifespan or health span in animals. Consequently, most of the interventions that extend lifespan in model organisms usually elevate autophagic activity [15]. Several lines of evidence have demonstrated that autophagy machinery is required for longevity regulation in animal models [16]. Recently, studies have shown that genetic activation of autophagy prolongs lifespan in mice [17,18], indicating the plausible application of autophagy activators in delaying the aging process and the onsets of agerelated diseases.
Graptopetalum paraguayense (GP) is an edible succulent plant. In Taiwan, GP has been used as a medical herb to prevent liver disorders and lower blood pressure. Recent research has revealed that an extract of GP, HH-F3, could inhibit the proliferation of liver cancer cells, could lessen liver fibrosis in rats, and could reduce the secretion of Aβ and the phosphorylation of Tau proteins in induced pluripotent stem cell (iPSC)-derived neurons from AD patients [19][20][21][22]. This study further investigated the underlying mechanisms by which GP extracts reduce AD-associated pathological phenotypes in neuroblastoma SH-SYS5Y-APP 695 cells and APP/PS1 mice. Furthermore, we demonstrated that GP extracts could reduce the mobility decline and could extend lifespan in C. elegans.
Preparation of GP Extract
The extraction method of GP HH-F3 was established previously [22,23]. In brief, frozen GP leaves were ground and lyophilized at −20 • C. Next, 15 g of lyophilized GP powder was mixed with 100 mL 100% ethanol for 5 min and then centrifuged at 1500× g for 5 min. The pellet was suspended in 10 mL of 30% dimethyl sulfoxide (DMSO), followed by 9300× g centrifuge for 5 min. The supernatant was fractionated into four fractions (F1-F4) by a Sephadex LH-20 column. The F3 fraction, termed HH-F3, was identified to be the active fraction.
Cell Culture
Human neuroblastoma SH-SY5Y cells were maintained in MEM/F12 (Gibco BRL, Grand Island, NY, USA) with 10% fetal bovine serum (FBS, Gibco BRL), 100 U/mL penicillin, and 10 µg/mL streptomycin sulfate. SH-SY5Y cells were stably transfected with the vector containing the full-length APP 695 isoform. Stable clones with plasmid expression were maintained by growing cells in the selective medium containing G418. Human GBM cell line U87 was maintained in Dulbecco's Modified Eagle Medium (Gibco BRL), supplemented with 10% FBS and 1% penicillin-streptomycin (Gibco BRL). Human colorectal cancer cell line HT-29 was cultured in McCoy's 5A medium (Gibco BRL) supplemented with 10% FBS and 1% penicillin-streptomycin. All cells were maintained at 37 • C in 5% CO 2 . Cells were seeded at the density of 3 × 10 5 cells/6 cm dish for at least 16 h before drug treatment.
Cell Viability Assay
The colorimetric MTT metabolic activity assay was used to determine cell viability. Cells were incubated with minimum essential medium containing 0.5 mg/mL MTT (Sigma-Aldrich, St. Louis, MO, USA) for one hour. After incubation, the medium was aspirated, and the resultant formazan crystals were dissolved in DMSO. The absorbance intensity at 600 nm was measured by a microplate reader.
Western Blot Assay
The cells were lysed in buffer containing 50 mM HEPES, 2.5 mM EDTA, 1 mM PMSF, 5 µg/mL aprotinin, and 10 µg/mL leupeptin; 30 µg of protein lysate were electrophoresed on 8 or 10% SDS-PAGE gels and transferred to methanol-activated PVDF membranes. The membranes were blocked with 5% non-fat skim milk and incubated with primary antibodies at 4 • C overnight. Western blotting was visualized by peroxidase-conjugated secondary antibodies and ECL chemiluminescent substrate (Immobilon Western Chemiluminescent Substrate, Millipore, Burlington, MA, USA). The quantification of target protein bands with reference to control bands (for each concentration) used the ImageJ Gel Analysis program. The
Mouse Studies
APPswe/PS1dE9 (APP/PS1) double transgenic mice were purchased from Jackson Laboratory (No. 005864, Bar Harbor, ME, USA). Breeding was conducted using female transgenic mice and their male wild-type siblings. Mice were maintained under a 12 h/12 h light/dark cycle in constant conditions of temperature (24 • C) and humidity (55-65%) with free access to food and water. All procedures were approved by the Institutional Animal Care and Use Committee at the National Research Institute of Chinese Medicine (IACUC No.:105-417-1).
Feeding Protocol
HH-F3 powder was dissolved in H 2 O. HH-F3 (300 mg/kg/day) or H 2 O were administered orally to wild-type and APP/PS1 mice for 30 days.
Thioflavin-S (ThS) Fluorescent Staining in Brain Sections
Dry sections of mouse brains were incubated with fresh and filtered 1% ThS solution for 60 min, followed by washing twice with 70% ethanol and twice with water.
L1000 Expression Profiling
The gene expression of the HT29 cells treated with 5 µg/mL HH-F3 for 6 h was profiled using the L1000 platforms by Genometry Inc., Cambridge, MA, USA [24]. In short, mRNA transcripts of HH-F3 treated cells were captured from the whole cell lysates by o ligo-dT plates. The cDNAs were generated by reverse transcription from mRNA and then amplified using PCR. The PCR amplicon was then hybridized to barcoded Luminex beads to exhibit the expression levels of specific genes. The expression of 978 landmark genes was analyzed.
Gene Set Enrichment Analysis (GSEA)
GSEA analysis was performed in GSEA software version 4.0.3 (Broad Institute, MA, USA). The study was proceeded using the C2 gene set collections from the MSigDB v.7.2. with 1000 permutations.
HLH-30::GFP Nuclear-Cytoplasmic Ratio (N/C Ratio) Quantification
Synchronized worms carrying integrated hlh-30::gfp arrays were treated either in vehicle or HH-F3 from hatching. Fluorescence images of the Day 1 adult animals were taken and scored blindly for the nuclear accumulation of HLH-30::GFP protein in the intestinal cells. The quantification was perforemd by measuring the total GFP fluorescence intensity of the entire cell and the nucleus area of the first six intestinal cells (Int1* and Int2* cells) using Olympus Microsuite software. Cytosolic GFP intensity was calculated by the following equation, Area ( whole cell)−Area (nucleus) . The N/C ratio of HLH-30 in a given cell was obtained by dividing the nuclear signal by the cytosolic signal. At least 50 animals (1-4 cells per worm) were scored per experiment.
Mobility Analysis in C. elegans
Thrashing assays were carried on at least 12 worms. Individual stage-synchronized worms were placed in M9 buffer. Thrashes produced by each worm for one minute were counted after one minute equilibration period. A single thrash was defined as the bending of the body to the outermost angle and then back to the initial posture. Experimental data are shown as mean ± SEM. Statistical comparisons were conducted using Student's t-test.
Lifespan Analysis in C. elegans
Lifespan analyses were conducted at 20 • C as described previously [25,26], and 60-90 animals were tested in each experiment. The viability of the worms was scored every two days. In all experiments, the pre-fertile period of adulthood was used as day 0 for lifespan analysis. Stata 12 (StataCorp, College Station, Texas, USA) software was used for statistical analysis to determine the means and percentiles. In all cases, p values were calculated using the log-rank (Mantel-Cox) method.
GP Extract Inhibits the Secretion of Amyloid-Aβs in the Human Neuroblastoma SH-SY5Y-APP695 Cells
Recently, the study by Wu et al. demonstrated that the GP extract, HH-F3, significantly reduces Aβ secretion in AD patient iPSC-derived neurons [22]. Previous studies have shown that the overexpression of APP 695 in human SH-SY5Y cells significantly increases Aβ40 and Aβ42 secretion [27]. To further explore the molecular mechanisms of HH-F3 to reduce the AD-associated phenotypes, we used SH-SY5Y cells that stably expressed wildtype human APP 695 as the cell culture model. We first treated SH-SY5Y-APP 695 cells with HH-F3 at concentrations of 10, 30, and 50 µg/mL for 24 h. The cytotoxic effect of HH-F3 was evaluated by cell viability assay. No cytotoxic effect was observed in SH-SY5Y-APP 695 cells even at the highest concentration of HH-F3 (Figure 1a). We then assessed the impact of HH-F3 on the secretion of Aβ40 and Aβ42 in SH-SY5Y-APP 695 cultured medium by ELISA assay. Our results indicated that HH-F3 treatment for 24 h markedly reduced Aβ40 and Aβ42 secretion at the dosage of 50 µg/mL (Figure 1b), which is also the same effective concentration applied to the AD-iPSC derived neurons [22]. Thus, SH-SY5Y-APP 695 cells serve as a proper cell model to further study the molecular mechanisms of HH-F3 in the regulation of amyloid secretion. We next examined whether full-length APP levels in SH-SY5Y-APP 695 cells were affected by HH-F3 treatment. As shown in Figure 1c, the amounts of full-length APP were not changed by the treatment of HH-F3. Meanwhile, HH-F3 did not affect the levels of major amyloid degrading proteases, such as insulin-degrading enzyme (IDE) and neprilysin (NEP), neither in the cell lysate of SH-SY5Y-APP 695 cells nor in the conditioned medium (Figure 1c).
GP Extract Reduces the Plaque Formation in the Cerebral Cortex of APP/PS1 Transgenic Mice
We then investigated the effects of HH-F3 in APP/PS mice model of AD. The APP/PS1 transgenic mouse is widely used in various aspects of AD-related study. In this study, HH-F3 was administrated to 140-day-old APP/PS1 mice orally at 300 mg/kg/day for 30 days. There were no significant differences in the body weight between the control and HH-F3-treated groups after a 30-day treatment (Figure 2a). To test whether HH-F3 could reduce the Aβ deposition in APP/PS1 mice, we performed thioflavin-S (ThS) fluorescent staining to detect Aβ plagues in the cerebral hemisphere of APP/PS1 mice fed with or without HH-F3. Our results indicated that Aβ deposit formation in the cerebral hemisphere is markedly reduced by 48% after a 30-day HH-F3 treatment (Figure 2b,c). Furthermore, through Aβ1-40 and Aβ1-42 ELISA assays, we found that both soluble and insoluble Aβ1-40 levels in the cerebral cortex were significantly reduced in the HH-F3-treated group (Figure 2d,e). The amounts of soluble Aβ1-42 slightly decreased in the HH-F3-treated mice. However, the data were not statistically significant (Figure 2d).
GP Extract Activates AMPK in U87 Cells and the Cerebral Cortex of APP/PS1 Mice
To gain an overview of the altered pathways after HH-F3 treatment, differentially expressed genes of HT29 cells treated with 50 µg/mL HH-F3 were subjected to Gene Set Enrichments Analysis (GSEA). The results indicated that genes associated with HD, AD, and AMPK signaling were significantly enriched in HH-F3-treated cells (Supplementary Figure S1), suggesting that HH-F3 might reduce neurodegeneration via the activation of AMPK pathways.
As described earlier, the GSEA analysis of HH-F3-treated cells revealed significantly enriched AMPK pathways. To further confirm the effects of HH-F3 in AMPK signaling, glial U87 cells were treated with HH-F3 at a concentration of 10, 25, and 50 µg/mL for 24 h. We accessed the activity of AMPK by measuring the phosphorylation at the Thr172 of AMPK (pAMPK). HH-F3 treatment significantly activated AMPK in U87 cells, as shown by the elevated pAMPK/AMPK ratios (Figure 3a). Previous studies have indicated that AMPK signaling pathways [28] are dysregulated in the brains of APP/PS1 mouse model and human AD patients. Thus, we examine if HH-F3 reduces AD pathology by elevating AMPK signaling in APP/PS1 mice. Our results demonstrated that the levels of phosphorylated AMPK and total AMPK in the cerebral cortex of APP/PS1 mice were reduced (Figure 3b), suggesting the downregulation of AMPK signaling pathway. Intriguingly, a 30-day HH-F3 treatment could markedly restore the levels of both pAMPK and AMPK (Figure 3b), supporting that HH-F3 might act as an AMPK activator to reduce pathological conditions in APP/PS1 mice. for three biological replicates. Data were analyzed by Student's t-test. * p < 0.05. (b) Western blot analysis of AMPK phosphorylation (Thr172) and AMPK in the cerebral cortex of APP/PS1 mice treated with or without HH-F3 for 30 days. Western blot quantification of AMPK phosphorylation (Thr172) (c) and AMPK (d) in the cerebral cortex of wild-type mice and APP/PS1 mice treated with or without HH-F3. The results are expressed as mean ± SD. * p < 0.05 and *** p < 0.001. The mean value of pAMPK/AMPK ratios from wild-type mice was normalized to one.
Autophagy Is Elevated by GP Extract Both in Cells and C. elegans
Several pieces of evidence have indicated that autophagy dysregulation occurs in both AD patients and animal models [29][30][31]. Moreover, numerous studies have reported that the genetic or pharmacological activations of autophagy could reduce amyloid accumulation and prevent cognitive decline in AD-mouse models [32][33][34][35]. According to our data shown above, HH-F3 treatment could increase the activity of AMPK, one of the key autophagy regulators. Thus, we tested whether HH-F3 ameliorates AD pathology through activation of autophagy. Neurons and glia are the two major types of cells in the brain. Glial cells, such as astrocytes, microglia, and oligodendrocytes, are also critical in AD pathogenesis [36]. Research has shown that autophagy in glial cells plays a key role in reducing extracellular Aβ around neurons [37,38]. Here, we monitored the autophagic activity in the HH-F3treated glial U87 cells and neuronal SH-SY5Y-APP 695 cells by analyzing the turnover of microtubule-associated protein 1A/1B-light chain 3 (LC3), a maker of autophagosomal membrane. As shown in Figure 4, an HH-F3-dependent increase in the levels of LC3-II, the lipidated LC3, suggests that the induction of autophagy was enhanced by HH-F3 treatment in both U87 and SH-SY5Y-APP 695 cells. Furthermore, we found that p62 levels were significantly reduced in SH-SY5Y-APP 695 cells at the dosage of 50 µg/mL HH-F3 (Figure 4b), indicating that HH-F3 could indeed activate autophagy flux. analysis of LC3-I, LC3-II, and p62 in SH-SY5Y-APP 695 cells treated with 0, 5, 25, and 50 µg/mL for 24 h. The folds of the mean grayscale of LC3-II/LC3-I and p62 to actin among treatments are shown on the right. Mean ± SD for 2-3 biological replicates. * p < 0.05, ** p < 0.01, Student's t-test.
Next, we examined whether HH-F3 could activate autophagy activity through an evolutionarily conserved mechanism. To do so, we performed the HH-F3 treatments in the nematode C. elegans. We used the transgenic worms carrying GFP::LGG-1, the worm homolog of LC3, to detect autophagy activity. Increased levels of GFP::LGG-1 puncta commonly represent the activation of autophagy. Here, GFP::LGG-1 mutants were treated with 0, 20, and 40 µg/mL HH-F3 from hatching. After a two-day HH-F3 treatment, the levels of LGG-1/LC3 puncta in the seam cells were remarkably increased (Figure 5a), indicating autophagy activation.
GP Extract Extended Lifespan in C. elegans in a Daf-16-Independent Manner
As the above results show, HH-F3 could promote autophagy activity across species. The stimulation of autophagy has been shown to enhance the turnover of aggregated proteins, such as TDP-43 and huntingtin. Therefore, we asked whether HH-F3 could reduce the pathological phenotypes induced by disease-associated protein aggregation in other model organisms. Here, we used C. elegans expressing fluorescently tagged polyglutamine (polyQ) in the body-muscle cells to study the effects of HH-F3 in polyQ pathogenesis. The transgenic animals carrying 35 polyglutamine repeats (Q35) were treated with 20 µg/mL HH-F3 from L4 larval stage, and the mobility of Day 5 adults was determined by thrashing assay. In the Q35 mutants treated with vehicle, the locomotion decreased to 30% in Day 5 Q35 worms. However, Day 5 Q35 worms treated HH-F3 still maintain 70% of mobility in Day 1 adult animals (Figure 6a). Thus, HH-F3 could significantly lessen the mobility decline caused by polyQ-mediated toxicity in the muscle cells. Supplementary Table S1.
Increasing evidence has indicated that autophagy might serve as a common downstream effector in aging processes. Since HH-F3 could activate autophagy in the mammalian cells and nematodes, we tested whether HH-F3 could slow down the aging process and extend lifespan in C. elegans. First, we verified if HH-F3 treatment could prevent mobility decline during aging. We performed thrashing assays on Day 1 and Day 7 adult wild-type N2 worms treated with vehicle or 20 µg/mL HH-F3. As shown in Figure 6b, the mobility in Day 7 N2 worms fed with the vehicle was reduced by 40% compared with the Day 1 animals. However, there was no significant difference between the mobility of Day 1 and Day 7 worms treated with HH-F3 (Figure 6b), indicating that HH-F3 treatment could prevent mobility decline in aged animals. Next, we performed lifespan analysis on wild-type N2 animals treated with 20 µg/mL HH-F3. We found that 20 µg/mL HH-F3 significantly increases animals' lifespan by 14-16% (Figure 6c, Table S1).
DAF-16, a FOXO transcription factor in C. elegans, is the key mediator for several longevity pathways, such as insulin/IGF-1 signaling and germline signaling. We then further investigated whether DAF-16/FOXO is required in HH-F3-induced lifespan extension. We thus performed lifespan analysis in daf-16 null mutants treated with vehicle or 20 µg/mL HH-F3. As shown in Figure 6d, HH-F3 could still increase the lifespan of daf-16 mutants by 11.3%, suggesting that the lifespan extension induced by HH-F3 treatment was not dependent on daf-16/FOXO.
Discussion
Our study has shown that GP extract, HH-F3, markedly reduced amyloid-β secretion in both SH-SY5Y-APP 695 cells and APP/PS1 mice. Furthermore, amyloid plaque formation in APP/PS1 mice was lessened after a 30-day HH-F3 treatment, suggesting that HH-F3 is a potential therapeutic candidate for AD treatment. To elucidate the molecular mechanisms of HH-F3 to reduce AD pathology, we identified that HH-F3 could activate autophagy in U87 and SH-SY5Y-APP 695 cells. Moreover, the activation of autophagy by HH-F3 was observed not only in the mammalian cells but also in C. elegans. Our findings have suggested that HH-F3 might promote autophagy through a conserved pathway across species, further supporting its plausible application in humans.
Loss of protein homeostasis (proteostasis) has been described as one of the hallmarks of aging [2,39]. Since the autophagy-lysosomal pathway is one of the main cellular mechanisms in maintaining proteostasis [40,41], autophagy activation has been thought to be beneficial to longevity [16]. Indeed, studies from various model organisms have shown the essential role of autophagy in the regulation of longevity [15,16]. Furthermore, enhancing autophagic activity by overexpressing autophagy genes could extend the lifespan of flies and mice [17,42]. Since AMPK and TFEB/HLH-30 are two critical regulators in autophagic activity [18], presumably, AMPK and TFEB/HLH-30 might also affect longevity regulation. Indeed, several lines of evidence have also demonstrated that AMPK and HLH-30/TFEB are both involved in lifespan regulation in C. elegans [43][44][45]. Moreover, the overexpression of AMPK and HLH-30/TFEB could extend lifespan in C. elegans [43,44,46]. Thus, pharmacological activation of AMPK or HLH-30/TFEB might also promote lifespan and health span in animals. Given the fact that HH-F3 could activate AMPK and HLH-30/TFEB, we presumed that HH-F3 might have longevity effects. Indeed, our results have indicated that HH-F3 significantly delayed the mobility decline and extended the lifespan of wild-type animals. Furthermore, HH-F3 greatly reduced polyQ pathology in C. elegans, supporting the negative effect of HH-F3 in age-associated decline of proteostasis. Through a genetic epistasis analysis, we further found that daf-16/FOXO transcription factor is not required in the longevity effect of HH-F3. Our results in both AD models and C. elegans suggest that the GP extract HH-F3 might act as an autophagy activator to maintain proteostasis, slowing down the aging process and delaying age-related disease onset. Therefore, HH-F3 may be a potential pharmacological candidate for the future development of anti-aging drugs.
Data Availability Statement:
The lifespan data present in this study are available in Table S1. All data generated or analyzed during the current study are available from the corresponding author upon reasonable request. | 4,964.4 | 2021-11-29T00:00:00.000 | [
"Medicine",
"Biology",
"Environmental Science"
] |
Breeding Gender-aware Direct Speech Translation Systems
In automatic speech translation (ST), traditional cascade approaches involving separate transcription and translation steps are giving ground to increasingly competitive and more robust direct solutions. In particular, by translating speech audio data without intermediate transcription, direct ST models are able to leverage and preserve essential information present in the input (e.g.speaker’s vocal characteristics) that is otherwise lost in the cascade framework. Although such ability proved to be useful for gender translation, direct ST is nonetheless affected by gender bias just like its cascade counterpart, as well as machine translation and numerous other natural language processing applications. Moreover, direct ST systems that exclusively rely on vocal biometric features as a gender cue can be unsuitable or even potentially problematic for certain users. Going beyond speech signals, in this paper we compare different approaches to inform direct ST models about the speaker’s gender and test their ability to handle gender translation from English into Italian and French. To this aim, we manually annotated large datasets with speak-ers’ gender information and used them for experiments reflecting different possible real-world scenarios. Our results show that gender-aware direct ST solutions can significantly outperform strong – but gender-unaware – direct ST models. In particular, the translation of gender-marked words can increase up to 30 points in accuracy while preserving overall translation quality.
Introduction
Language use is intrinsically social and situated as it varies across groups and even individuals (Bamman et al., 2014). As a result, the language data that are collected to build the corpora on which natural language processing models are trained are often far from being homogeneous and rarely offer a fair representation of different demographic groups and their linguistic behaviours (Bender and Friedman, 2018). Consequently, as predictive models learn from the data distribution they have seen, they tend to favor the demographic group most represented in their training data (Hovy and Spruit, 2016;Shah et al., 2020).
This brings serious social consequences as well, since the people who are more likely to be underrepresented within datasets are those whose representation is often less accounted for within our society. A case in point regards the gender data gap. 1 In fact, studies on speech taggers (Hovy and Søgaard, 2015) and speech recognition (Tatman, 2017) showed that the underrepresentation of female speakers in the training data leads to significantly lower accuracy in modeling that demographic group.
The problem of gender-related differences has also been inspected within automatic translation, both from text (Vanmassenhove et al., 2018) and from audio . These studies -focused on the translation of spoken language -revealed a systemic gender bias whenever systems are required to overtly and formally express speaker's gender in the target languages, while translating from languages that do not convey such information. Indeed, languages with grammatical gender, such as French and † The authors contributed equally. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/.
1 For a comprehensive overview on such societal issue see (Criado-Perez, 2019).
Italian, display a complex morphosyntactic and semantic system of gender agreement (Hockett, 1958;Corbett, 1991), relying on feminine/masculine markings that reflect speakers' gender on numerous parts of speech whenever they are talking about themselves (e.g. En: I've never been there -It: Non ci sono mai stata/stato). Differently, English is a natural gender language (Hellinger and Bußman, 2001) that mostly conveys gender via its pronoun system, but only for third-person pronouns (he/she), thus to refer to an entity other than the speaker. As the example shows, in absence of contextual information (e.g As a woman, I have never been there) correctly translating gender can be prohibitive. This is the case of traditional text-to-text machine translation (MT) and of the so-called cascade approaches to speech-to-text translation (ST), which involve separate transcription and translation steps (Stentiford and Steer, 1988;Waibel et al., 1991). Instead, direct approaches (Bérard et al., 2016;Weiss et al., 2017) translate without intermediate transcriptions. Although this makes them partially capable of extracting useful information from the input (e.g. by inferring speaker's gender from his/her vocal characteristics), the general problem persists: since female speakers (and associated feminine marked words) are less frequent within the training corpora, automatic translation tends towards a masculine default. Following (Crawford, 2017), this attested systemic bias can directly affect the users of such technology by diminishing their gender identity or further exacerbating existing social inequalities and access to opportunities for women. Systematic gender representation problems -although unintended -can affect users' self-esteem (Bourguignon et al., 2015), especially when the linguistic bias is shaped as a perpetuation of stereotypical gender roles and associations (Levesque, 2011). Additionally, as the system does not perform equally well across gender groups, such tools may not be suitable for women, excluding them from benefiting from new technological resources.
To date, few attempts have been made towards developing gender-aware translation models, and surprisingly, almost exclusively within the MT community (Vanmassenhove et al., 2018;Elaraby et al., 2018;Moryossef et al., 2019). The only work on gender bias in ST proved that direct ST has an advantage when it comes to speaker-dependent gender translation (as in I've never been there uttered by a woman), since it can leverage acoustic properties from the audio input (e.g. speaker's fundamental frequency). However, relying on perceptual markers of speakers' gender is not the best solution for all kinds of users (e.g. transgenders, children, vocally-impaired people). Moreover, although their conclusions remark that direct ST is nonetheless affected by gender bias, no attempt has yet been made to try and enhance its gender translation capability. Following these observations, and considering that ST applications have entered widespread societal use, we believe that more effort should be put into further investigating and controlling gender translation in direct ST, in particular when the gender of the speaker is known in advance.
Towards this objective, we annotated MuST-C (Di Gangi et al., 2019a; -the largest freely available multilingual corpus for ST -with speakers' gender information and explored different techniques to exploit such information in direct ST. The proposed techniques are compared, both in terms of overall translation quality as well as accuracy in the translation of gender-marked words, against a "pure" model that solely relies on the speakers' vocal characteristics for gender disambiguation. In light of the above, our contributions are: (1) the manual annotation of the TED talks contained in MuST-C with speakers' gender information, based on the personal pronouns found in their TED profile. The resource is released under a CC BY NC ND 4.0 International license, and is freely downloadable at https://ict.fbk.eu/ must-speakers/; (2) the first comprehensive exploration of different approaches to mitigate gender bias in direct ST, depending on the potential users, the available resources and the architectural implications of each choice.
Experiments carried out on English-Italian and English-French show that, on both language directions, our gender-aware systems significantly outperform "pure" ST models in the translation of gender-marked words (up to 30 points in accuracy) while preserving overall translation quality. Moreover, our best systems learn to produce feminine/masculine gender forms regardless of the perceptual features received from the audio signal, offering a solution for cases where relying on speakers' vocal characteristics is detrimental to a proper gender translation.
Background
Besides the abundant work carried out for English monolingual NLP tasks (Sun et al., 2019), a consistent amount of studies have now inspected how MT is affected by the problem of gender bias. Most of them, however, do not focus on speaker-dependent gender agreement. Rather, a number of studies (Stanovsky et al., 2019;Escudé Font and Costa-jussà, 2019;Saunders and Byrne, 2020) evaluate whether MT is able to associate prononimal coreference with an occupational noun to produce the correct masculine/feminine forms in the target gender-inflected languages (En: I've known her for a long time, my friend is a cook. Es: La conozco desde hace mucho tiempo, mi amiga es cocinera).
Notably, few approaches have been employed to make neural MT systems speaker-aware by controlling gender realization in their output. Elaraby et al. (2018) enrich their data with a set of genderagreement rules so to force the system to account for them in the prediction step. In (Vanmassenhove et al., 2018), the MT system is augmented at training time by prepending a gender token (female or male) to each source segment. Similarly, Moryossef et al. (2019) artificially inject a short phrase (e.g. she said) at inference time, which acts as a gender domain label for the entire sentence. These approaches are implemented and tested on natural spoken language that, compared to written language, is more likely to contain references to the speaker and, consequently, speaker-dependent gender-marked words.
In the light of above, the correct translation of gender is a particularly relevant task for ST systems, as they are precisely developed to translate oral, conversational language. Nonetheless, to our knowledge only one work has investigated gender bias in ST . Focusing on the proper handling of gender phenomena, the authors take stock of the situation by comparing cascade and direct architectures on MuST-SHE, a multilingual benchmark derived from the TED-based MuST-C corpus and specifically designed to evaluate gender translation and bias in ST. Their conclusions remark that, although traditional cascade systems still outperform direct solutions, the latter are able to exploit audio information for a better treatment of speaker-dependent gender phenomena.
These findings open a line of focused research on speaker-aware ST that is worth exploring more thoroughly, also in light of the fact that the performance gap between cascade and direct approaches has further reduced (Ansari et al., 2020). On one side, rather than comparing the two paradigms, this progress now motivates exploring all the possible ways to boost direct ST performance towards the translation of gender-marked expressions. On the other side, since the direct systems tested in rely on "pure" models built to verify an hypothesis (i.e. that translating audio signals without intermediate representations makes a difference in handling gender), the real potential of direct ST technology with respect to this problem is still unknown. Moreover, as their "pure" models solely rely on the speaker's fundamental frequency, various instances in which such perceptual marker is not indicative of the speaker's gender remain out of the picture.
Annotation of MuST-C with Speakers' Gender Information
Although current research on gender-aware ST can count on the MuST-SHE benchmark for fine-grained evaluations, gender-annotated training data are not yet available. So far, this has limited the scope of research to application scenarios in which speakers' gender is inferred from the input audio. These scenarios are not representative of the full range of possible usages of ST and are also potentially problematic, since gendered forms expected in translation do not necessarily align with speaker's vocal characteristics.
In the light of the above, building large training corpora explicitly annotated with gender information becomes crucial. To this aim, rather than building a new resource from scratch, we opted for adding an annotation layer to MuST-C, which has been chosen over other existing corpora (Iranzo-Sánchez et al., 2020) for the following reasons: i) it is currently the largest freely available multilingual corpus for ST, ii) being based on TED talks it is the most compatible one with MuST-SHE, iii) TED speakers' personal information is publicly available and retrievable on the TED official website. 2 Following the MuST-C talk IDs, we have been able to i) automatically retrieve the speakers' name, Table 1: Statistics for MuST-C data with gender annotation. The number of segments and hours varies over the two language pairs due to the different pre-processing of MuST-C data.
ii) find their associated TED official page, and iii) manually label the personal pronouns used in their descriptions. Though time-consuming, such manual retrieval of information is preferable to automatic speaker gender identification for the following reasons. First, since automatic methods based on fundamental frequency are not equally accurate across demographic groups (e.g. women and children are hard to distinguish as their pitch is typically high (Levitan et al., 2016)), manual assignment prevents from incorporating gender misclassifications in our training data. Second, biological essentialist frameworks that categorize gender based on acoustic cues (Zimman, 2020) are especially problematic for transgender individuals, whose gender identity is not aligned with the sex they have been assigned at birth based on designated anatomical/biological criteria (Stryker, 2008).
Differently, following the guidelines in (Larson, 2017), we do not want to run the risk of making assumptions about speakers' gender identity and introducing additional bias within an environment that has been specifically designed to inspect gender bias. By looking at the personal pronouns used by the speakers to describe themselves, our manual assignment instead is meant to account for the gender linguistic forms by which the speakers accept to be referred to in English (GLAAD, 2007), and would want their translations to conform to. We stress that gendered linguistic expressions do not directly map to speakers' self-determined gender identity (Cao and Daumé III, 2020). We therefore make explicit that throughout the paper, when talking about speakers' gender, we refer to their accepted linguistic expression of gender rather than their gender identity.
Focusing on the two language pairs of our interest, 2,294 different speakers described via he/she pronouns 3 are represented in both en-it and en-fr. Their male/female 4 distribution is unbalanced, as shown in Table 1, which presents the number of talks, as well as the total number of segments and the corresponding hours of speech.
ST Systems
For our experiments, we built three types of direct systems. One is the base system, a state-of-the-art model that does not leverage any external information about speaker's gender ( §4.1). The others are two gender-aware systems that exploit speakers' gender information in different ways: multi-gender ( §4.2) and specialized ( §4.3). All the models share the same architecture, a Transformer (Vaswani et al., 2017) adapted to ST. The encoder processes the input Mel-filter-bank sequences with two 2D convolutional layers with stride 2, returning a sequence that is four times shorter than the original input. The vectors of this sequence are projected by a linear transformation into the dimensional space used in the following encoder Transformer layers and are summed with sinusoidal positional embeddings. The attentions in the encoder layers are biased toward elements close on the time dimension with a logarithmic distance penalty (Di Gangi et al., 2019b). The decoder architecture, instead, is not modified.
Base ST Model
We are interested in evaluating and improving gender translation on strong ST models that can be used in real-world contexts. As such, our base, gender-unaware model is trained with the goal of achieving state-of-the-art performance on the ST task. To this aim, we rely on data augmentation and knowledge transfer techniques that were shown to yield competitive models at the IWSLT-2020 evaluation campaign (Ansari et al., 2020;Potapczyk and Przybysz, 2020;Gaido et al., 2020). In particular, we use three data augmentation methods -SpecAugment (Park et al., 2019), time stretch (Nguyen et al., 2020), and synthetic data generation (Jia et al., 2019) -and we transfer knowledge both from ASR and MT through component initialization and knowledge distillation (Hinton et al., 2015).
The ST model's encoder is initialized with the encoder of an English ASR model (Bansal et al., 2019) with a lower number of encoder layers (the missing layers are initialized randomly, as well as the decoder). This ASR model is trained on Librispeech (Panayotov et al., 2015), Mozilla Common Voice, 5 How2 (Sanabria et al., 2018), TEDLIUM-v3 (Hernandez et al., 2018), and the utterance-transcript pairs of the ST corpora -Europarl-ST (Iranzo-Sánchez et al., 2020) and MuST-C. These datasets are either gender unbalanced or do not provide speaker's gender information apart from Librispeech, which is balanced in terms of female/male speakers (Garnerin et al., 2020). However, since these speakers are just book narrators, first-person sentences do not really refer to the speakers themselves.
Knowledge distillation (KD) is performed from a teacher MT model by optimizing the cross entropy between the distribution produced by the teacher and by the student ST model being trained (Liu et al., 2019). For both en-it and en-fr, the MT model is trained on the OPUS datasets (Tiedemann, 2016).
The ST model is trained in three consecutive steps. In the first step, we use the synthetic data obtained by pairing ASR audio samples with the automatic translations of the corresponding transcripts. In the second step, the model is trained on the ST corpora. In these first two steps, we use the KD loss function. Finally, in the third step, the model is fine-tuned on the same ST corpora using label-smoothed cross entropy (Szegedy et al., 2016). SpecAugment and time stretch are used in all steps.
Multi-gender Systems
The idea of "multi-gender" models, i.e. models informed about the speaker's gender with a tag prepended to the source sentence, was introduced by Vanmassenhove et al. (2018) and Elaraby et al. (2018). This approach was inspired by one-to-many multilingual neural MT systems (Johnson et al., 2017), in which a single model is trained to translate from a source into many target languages by means of a target-forcing mechanism. With this mechanism -here adapted for "gender-forcing" -ST multi-gender systems are fed not only with the input audio, but also with a tag (token) representing the speaker's gender. This token is converted into a vector through learnable embeddings. This approach has two main potential advantages: i) a single model supports both male and female speakers (which makes it particularly appealing for real-world application scenarios), and ii) each gender direction can benefit from the data available for the other, potentially learning to produce words that would have never been seen otherwise (transfer learning). Regarding the several options to supply the model with the additional gender information, we do not follow the approach of Vanmassenhove et al. (2018) and Elaraby et al. (2018), since it is dedicated to MT. Instead, we consider those that obtained the best results in multilingual direct ST (Di Gangi et al., 2019c;Inaguma et al., 2019), namely: Decoder prepending. The gender token replaces the <\s> (EOS, end-of-sentence) that is added in front of the generated tokens in the decoder input. Decoder merge. The gender embedding is added to all the word embeddings representing the generated tokens in the decoder input. Encoder merge. The gender embedding is added to the Mel-filter-bank sequence representing the source speech given as input to the encoder.
In all cases, multi-gender models' weights are initialized with those of the Base models. The only randomly-initialized parameters are those of the gender embeddings.
Gender-specialized Systems
In this approach, two different gender-specific models are created. Each model is initialized with the Base model's weights and then fine-tuned only on samples of the corresponding speaker's gender. This solution has the drawback of a higher maintenance burden than the multi-gender one, as it requires the training and management of two separate models. Moreover, no transfer learning is possible: although each model is initialized with the base model trained on all the data and the low learning rate used in the fine-tuning prevents catastrophic forgetting (Mccloskey and Cohen, 1989), data scarcity conditions for a specific gender are likely to lead to lower performance on that direction.
Gender-balanced Validation Set
To train our gender-aware models, we do not rely on the standard MuST-C validation set as it reflects the same gender-imbalanced distribution found in the training data. We therefore created a new specifically designed validation set composed of 20 talks. Unlike the standard MuST-C validation set, it contains a balanced number of female/male speakers, thus avoiding to reward models' potentially biased behaviour. This new resource is released under a CC BY NC ND 4.0 International license, and is freely downloadable at https://ict.fbk.eu/must-c-gender-dev-set/. 6 5 Experimental Setting
Experiments
As described in §4.1, our ST models adopt knowledge transfer techniques that showed to significantly improve ST performance. In particular, knowledge distillation (KD) is especially relevant as it allows the ST model to learn and exploit the wealth of training data available for MT, which otherwise would not be accessible. Hence, since we are also interested in assessing the effect of KD on the ability of the resulting ST systems to deal with gender, we compare: i) the teacher MT models, ii) the intermediate ST models trained on KD, and iii) the final ST models obtained with fine-tuning without KD.
The final ST models are used to initialize both multi-gender ( §4.2) and gender-specialized models ( §4.3), which are then fine-tuned on the MuST-C gender-labeled dataset. Since, as seen in §3, this dataset shows a quite skewed male/female speaker distribution (approximately 70%/30%), we test both approaches in two different data conditions: i) balanced (*-BAL), where we use all the female data available together with a random subset of the male data, and ii) unbalanced (*-ALL) where all the MuST-C data available are exploited. It must be noted that there are differences between the two approaches on the usage of data. In the specialized approach, since we have two separate systems, the one which is fine-tuned with talks by female speakers remains the same in both data conditions. Differently, in the multi-gender approach, which is trained on both genders together, all the training mini-batches contain the same number of samples for each gender. Thus, when all MuST-C data are used, the female gender pairs -which are underrepresented -are over-sampled.
Evaluation Method
For our experiments, we rely on MuST-SHE , a gender-sensitive, multilingual benchmark for MT and ST consisting of (audio, transcript, translation) aligned triplets. By design, each segment in the corpus requires the translation of at least one English gender-neutral word into the corresponding masculine/feminine target word(s) to convey a referent's gender. With the intent to evaluate our gender-aware ST models on speaker-dependent gender phenomena, we focus on a portion of MuST-SHE containing, for each language pair, ∼600 segments where gender agreement only depends on the speaker's gender. 7 Segments are balanced with respect to female/male speakers and masculine/feminine marked words, which are explicitly annotated in the corpus.
An important feature of MuST-SHE is that, for each reference translation, an almost identical "wrong" reference is created by swapping each annotated gender-marked word into its opposite gender (e.g. I have been uttered by a woman is translated into the correct Italian reference Sono stata, and into the wrong reference Sono stato). The idea behind gender-swapping is that the difference between the scores computed against the "correct" and the "wrong" reference sets captures the system's ability to handle gender translation. However, relying on these scores does not allow to distinguish between those cases where the system "fails" by producing a word different from the one present in the references (e.g. andat* in place of stat*) and failures specifically due to the wrong realization of gender (e.g. stato in place of stata).
Thus, while following the same principles as , in our experiments we rely on a more informative evaluation. First, we calculate the term coverage as the proportion of gender-marked words annotated in MuST-SHE that are actually generated by the system, on which the accuracy of gender realization is therefore measurable. Then, we define gender accuracy as the proportion of correct gender realizations among the words on which it is measurable. Our evaluation method has several advantages. On one side, term coverage unveils the precise amount of words on which systems' gender realization is measurable. On the other, gender accuracy directly informs about systems' performance on gender translation and related gender bias: scores below 50% indicate that the system produces the wrong gender more often than the correct one, thus signalling a particularly strong bias. Gender accuracy has the further advantage of informing about the margins for improvement of the systems.
Overall Results
Table 2 presents overall results in terms of BLEU scores on the MuST-SHE test set. Despite the wellknown differences in performance between en-it and en-fr, both language directions show the same trend.
First, the MT systems used by the ST models for KD achieve by far the highest performance. This is expected since the ST task is more complex and MT models are trained on larger amounts of data. However, all our ST results are competitive compared to those published for the two target languages. In particular, on the MuST-C test set, the scores of our ST BASE models are 27.7 (en-it) and 40.3 (en-fr), respectively 0.3 and 4.8 BLEU points above the best cascade results reported in .
Moving on to ST systems, we attest that the models after the first two training steps based on KD (BASE-KD-ONLY, see 4.1) have a lower translation quality than the BASE models, showing that the third training step is crucial to boost overall performance. In general, except for the MULTI-DECMERGE system (whose performance is significantly lower), we do not observe statistically significant differences between the BASE models and their gender-aware extensions (MULTI-* and SPECIALIZED-*), which also perform on par when fine-tuned with varying amounts of annotated data (balanced vs all).
Due to the very small percentage of speaker-dependent gender-marked words in MuST-SHE (< 3%, 810-840 over ∼30,000 words), systems' ability to translate gender is not reflected by BLUE scores. Now, we delve deeper into our more informative evaluation (as per §5.2) and turn to the term coverage and gender accuracy values presented in Table 3. The overall results assessed with BLEU are confirmed by term coverage scores for both en-it and en-fr: the MT systems generate the highest number of annotated words present in MuST-SHE (63.83% on en-it and 63.10% on en-fr), while we do not observe large differences among the ST models (between 56.17% and 58.02% for en-it and 60.60% and 62.38% for en-fr). Instead, looking at gender accuracy, we immediately unveil that overall performance is not an indicator of the systems' ability to translate gender. In fact, the best performing MT systems show the lowest gender accuracy (51.45% for en-it and 52.08% for en-fr): intrinsically constrained by the lack of access to audio information, they produce the wrong target gender in half of the cases. Such deficiency is directly reflected in the BASE-KD-ONLY models, which are strongly influenced by the MT behaviour; thus, although effective for overall quality, KD is detrimental to gender translation. By undergoing the third training step without KD, the BASE models are in fact able to improve on gender translation, but with limited gains. Differently, the models fed with the speaker's gender information display a noticeable increase in gender translation, with SPECIALIZED-* models outperforming the MULTI-* ones by 16-20 points and the BASE ones by 30 points. Among the multi-gender architectures, our results show that MULTI-DECPREP has an edge on the other two models, both in overall and gender translation performance: for the sake of simplicity, from now on we thus present only that model. As a single-model architecture, multi-gender would be a more functional solution than multiple specialized models, but -being trained on both female and male speakers' utterances -it is noticeably weaker than multiple specialized models (trained on gender-specific data) at predicting gender. With regard to the different amounts of gender-annotated data used to train our gender-aware models, we cannot see any appreciable variation in term coverage and gender accuracy between the two settings. Further insights on this aspect are presented in the next section. Table 4 shows separate term coverage and gender accuracy scores for target feminine and masculine forms. This allows us to highlight the models' translation ability for each gender form and conduct crossgender comparisons to detect potential bias. Also in this analysis, results are consistent across language pairs. We assess that both the MT model and its strongly connected BASE-KD-ONLY present a very strong bias since they almost always produce masculine forms: accuracy is always much lower than 50% on the feminine set (up to 20.85% for en-it and 26.91% for en-fr) and very high on the masculine set (up to 88.49% for en-it and 89.58% for en-fr). After fine-tuning without KD, the BASE ST models improve feminine forms realization, but they still remain far from 50%. The comparison with the direct model in shows that, despite the much higher overall translation quality, our BASE models are affected by a stronger bias. This further confirms the detrimental effect of KD on gender translation and that higher overall quality does not directly imply a better speaker's gender treatment.
Cross-gender Analysis
All gender-aware models significantly reduce bias with respect to the BASE systems. This is particularly evident in the feminine set, where accuracy scores far above 50% indicate their ability to correctly represent female speakers. In particular, the SPECIALIZED models achieve the best results on both feminine and masculine sets (over 79% and 93% respectively). The higher performance on the masculine set can be explained considering that the two gender-specialized models derive from the BASE model, which is strongly biased towards masculine forms. Interestingly, MULTI-DECPREP shows similar feminine/masculine accuracy scores. This is possibly due to the random initialization of the gender tokens' embeddings: as a result, the initial model hidden representations and predictions are perturbed in an unbiased way. An unbiased starting condition combined with balanced data leads to a fairer, similar behaviour across genders, although the final models have a lower accuracy than the SPECIALIZED ones. Finally, we notice that results obtained by training our models with balanced (*-BAL) and unbalanced (*-ALL) datasets are similar. Indeed, the masculine gender accuracy slightly improves by adding more male data, while there is not a clear trend on the feminine accuracy: we can conclude that oversampling the data is functional inasmuch it keeps the performance on the feminine set stable.
Analysing Conflicts between Vocal Characteristics and Gender Tags
So far, we worked under the assumption that the speaker's vocal characteristics match with those typically associated to the gender category she/he identifies with. In this section, we explore systems' capacity to produce translations that are coherent with the speaker's gender in a scenario in which this assumption does not hold: this is the case of some transgenders, children and people with vocal impairment. However, we are hindered by the almost absent representation of such users within MuST-C. As such, we design a counterfactual experiment where we associate the opposite gender tag to each actual female/male speaker and inspect models' behaviour when receiving conflicting information between the gender tag and the properties of the acoustic signal. This can also be considered as an indirect assessment of systems' robustness to possible errors in application scenarios where speakers' gender is assigned automatically. Table 5 presents the results for this experiment. In the M-audio/F-transl set, systems were fed with a male voice and a female tag and the expected translation is in the feminine form, while in the F-audio/Mtransl set we have the opposite. As we can see, in both sets the multi-gender model has a drastic drop in accuracy with respect to the results shown in Table 4, with scores below 50% for en-it. This behaviour indicates that this model relies on both the gender token and the audio features, which in this scenario are conflicting. Thus, the multi-gender model could be more robust to possible errors in automatic recognition of the speaker's gender, but it is not usable in scenarios in which the vocal characteristics have to be be ignored. On the contrary, the specialized systems show a high accuracy on both sets. In particular, on F-audio/M-transl the performance is in line with the results of Table 4. This indicates that, independently from speakers' vocal characteristics, the model relies only on the provided gender information, being therefore suitable for situations in which one wants to control the gendered forms in the output and override the potentially misleading speech signals. Table 5: Coverage and accuracy scores when the correct translation is expected in a gender form opposite to the speaker's gender but in accordance with the gender tag fed to the system.
Manual Analysis
We complement our automatic evaluation with a manual inspection on the output of three models: BASE, MULTI-DECPREP-ALL (MULTI), and SPECIALIZED-ALL (SPEC). For each model, we analyzed the translation of 100 common segments across en-it/en-fr, which allow for cross-lingual comparisons. We first take into account those instances where systems' accuracy in the production of gender-marked words was measurable, as in (a), (b), (c) in Table 6. A first observation, consistent across languages and models, is that a controlling noun (student) and its modifiers (the, classic, Asian) always concord in gender in the systems' output. As per (a), this agreement is respected for both correct (MULTI, SPEC) and wrong gender realizations (BASE). Differently, (b) shows that, whenever two words are not related by any morphosyntactic dependency, some words may be correctly translated (chercheuse -MULTI, SPEC), and some others not (professeur). Such dynamic seems to attest that, although the systems are fed with sentence-level gender tags, gender predictions are still skewed at the level of the single word. Table 6: Examples of feminine (F) and masculine (M) gender-marked words translated by BASE, MULTI-DECPREP-ALL (MULTI) and SPECIALIZED-ALL (SPEC) on en-it and en-fr.
Overall, (a), (b) and (c) clearly attest the progressively improved performance from BASE to MULTI and SPEC. In particular, in (c), SPEC is able to pick the required masculine form in spite of a contextual hint about a second female referent (woman), thus overcoming what is a difficult prediction even for MULTI. We also inspect those cases where systems' accuracy on gender production was not measurable to cast some light on the reasons for a limited term coverge. We found that, while there are some generally wrong translations -(d) -such instances only amount to 1/3 of the cases. In the remaining 2/3, the output is fluent and reflects the source utterance meaning but it simply does not match the exact annotated word in the reference. We found that ST translations often offer alternative constructions that do not require an overt gender-inflection -(e) -or rely on appropriate gender-marked synonyms of the word in the reference -(f). We can hence conclude that many gender translations that do not contribute to gender accuracy confirm an improved ability of the enriched models in gender translation.
Conclusion
We rose to the challenge posed by to further explore gender translation in direct ST. Going beyond direct systems' attested ability to leverage speaker's vocal characteristics from the audio input, we developed gender-aware models suitable for operating conditions where speaker's gender is known. To this aim, we annotated the large MuST-C dataset with speaker's gender information, and used the new annotations to experiment with different architectural solutions: "multi-gender" and "specialized". Our results on two language pairs (en-it and en-fr) show that breeding speaker's gender-aware ST improves the correct realization of gender. In particular, our specialized systems outperform the gender-unaware ST models by 30 points in gender accuracy without affecting overall translation quality. | 8,064 | 2020-12-01T00:00:00.000 | [
"Computer Science"
] |
Molybdenum targets produced by mechanical reshaping
Targets required to determine the parameters of the 100Mo(p,xn)99mTc reaction and to estimate the yield of the 99mTc production were prepared starting with powder material. Material, melted with electron beam gun into solid bead, was reshaped into foil mechanically. Targets were prepared by powder melting and hot flattening of the droplet followed by cold rolling. Procedure allowed preparation of thick (in the range of hundreds of microns) and thin (down to 250 nm) foils.
Introduction
The metastable 99m Tc widely applied as radioactive tracer in medical diagnostic procedures currently is mainly obtained from the molybdenum-99 ( 99 Mo) in its radioactive decay. The 99 Mo is produced by irradiation of enriched 235 U with flux of neutrons provided by research reactors. 235 The 99 Mo produced in the above reaction is extracted from target and after purification is delivered to hospitals where is used as generator of 99m Tc.
The reactors used to supply the 99 Mo were build 40-50 years ago and recently world customers had to face not only planned but as well unexpected shut downs of some of reactors what caused shortages in the 99 Mo and thus 99m Tc supply. 99 Mo, source of 99m Tc, can be produced also by neutron capture in 98 Mo inserted into the core of a nuclear reactor so this method, although considered as alternative for use of HEU, requires reactors as well what is a significant drawback when assessing usefulness of 99m Tc production this way. Other drawbacks are discussed in [1].
Thus, the growing problem with operationality of research reactors (interruptions of their work) stimulated search for alternative ways of 99m Tc production either via production of 99 Mo [2] or direct production of 99m Tc [3,4] although the last solution due to the isotope half-life can be seen as alternative for local supplies only [5]. Both isotopes can be produced in accelerators providing protons, deuterons, alpha projectiles using various Mo isotopes as targets (Table 1), but direct 99m Tc production in reaction of 100 Mo with protons is considered as the most promising (due to its cross section, production energy range) alternative way of 99m Tc production. Advantages and drawbacks of this solution are presented in many publications [e.g. 3, 5] and they will not be discussed in this paper as it is aside of the work objective.
The excitation function of 100 Mo(p,2n) 99m Tc reaction has been studied by many authors for decades ( [6][7][8][9] just to list few) but nevertheless the value of cross section of this reaction is still not well defined. The measured values of the excitation function of the proton-induced reactions on molybdenum obtained by different researchers are presented in Fig. 1. As can be seen from the plot, values presented by different authors differ even by factor of 2. It is difficult to point out all sources of this inconsistence but it is not excluded that one of them is related to the fact that in the most cases the cross section studies were completed with natural material. The excitation function
Procedure of target preparation
Isotopically enriched molybdenum is available in powder form and thus studying the excitation function of the discussed reaction required conversion of this material into a foil of relatively low thickness, while studies of the reaction yield require thick targets.
However, 99m Tc production with target in solid metallic form could be favourable considering its better thermal conductivity comparing to powdered targets, what may allow the use of higher beam intensity.
Taking into account the form of the available enriched 100 Mo, our procedure of preparation of the metallic foils consists of powder consolidation by melting and then the bead conversion into a foil by mechanical reshaping.
Powder consolidation by melting
The powdered material in the amount corresponding to the target thickness and its size (up to about 1 300 mg) was pelletized with use of a die allowing the air removal during pellet forming (Fig. 2), and a hydraulic press. The obtained pellet was melted into a droplet in the vacuum of *10 -6 mbar with e-beam gun. Before reaching the melting temperature, pellet was carefully heated with e-beam, both for outgassing, i.e. removing the air residual, and evaporating the [10] molybdenum oxide (t evap = *1 155°C). The e-beam intensity was increased gradually until stable pressure of *10 -6 mbar was reached. Only then the e-beam intensity was increased to melt the Mo pellet into a droplet. In case of thicker pellets only the upper part was melted in the first run and formation of droplet was completed after breaking the vacuum and turning the half melted pellet upside-down (Fig. 3).
Further re-melting of the received droplet is required to prepare a bead of good for rolling quality (smooth, without deformation that can act as a starting point of droplet cracking when rolled). Re-melting of the material has to be done with changing of its position in the crucible of e-beam gun, i.e. turning the bead to expose each side to the electron beam. It is important especially in case of droplets made of big amount of material (few hundreds milligram). The total material loss during melting process was of about 15-18 %.
Rolling
Droplet produced by powder melting was placed between stainless steel sheets (rolling pack) and passed through the rolling mill. The applied rolling speed was of about 10 RPM (125 cm min -1 ) and thickness reduction was not greater than 4-5 lm at the initial steps irrespective the size of the droplet/disc. Higher reduction of the thickness would result with inevitable droplet crack at first pass through the rolling mill (as reported by [17] and others), see Fig. 4.
Below 0.5 mm the thickness reduction was not bigger than *2.5 lm, otherwise the rolled material emerged as disc/foil with many cracks or as small, inutile pieces, too small to produce even the thin (10 lm) foils. During rolling process material, after each change of the rollers distance, was passed 4-5 times through rolling mill.
To remove stresses from the rolled foils they were annealed in vacuum for *10-15 min at temperature of *1 200°C. The influence of the annealing on the foils properties can be seen in Fig. 5.
Described procedure allows production of thin (10 lm) foils. The production of sufficient area of these foils (to prepare stacked foil target composed of 10 Mo pieces) took about 1 week of the whole day work.
Annealing useful at preparation of thin foils (below 100 lm) was not significantly helpful in production of thick ones (400-600 lm). The amount of cracks was lower but, when appearing, they propagated through the foil area preventing production of the foil of the required size (Fig. 6).
Lipski [18] suggests that slow reduction of the e-beam intensity should reduce the stresses in the material and decrease crackability but such relation was not observed in case when big droplets needed for thick target preparation were produced. Not only droplets but also thick discs and plates were cracking with the same 'easiness' irrespective of slow cooling of the melted material in case of small droplets. Slow cooling of big droplets, as mainly described in this work, resulted with brittle material.
N. Y. Kheswa in her paper [19] reports production of malleable, not cracking molybdenum droplet just by thorough melting but the amount of material used in [19] (only 75 mg of the starting amount) is incomparably smaller than the amount required by our needs (one target of 1.4 cm 9 1.4 cm of 600 lm thick requires *1 300-1 400 mg of molybdenum). Thorough melting, at a single run, of the amount of Mo as used by Kheswa seems to be easier. The cold flattening recommended by K. Zell [20], applied by him to the droplet of *2 mm in diameter most probably does not stress the material at the same level as in case of droplet of 6-7 mm in diameter made of 1 300-1 400 mg of starting amount of Mo.
Substantial material loss (40 %) reported in [19] is not acceptable as well in case of thick targets of expensive material such as 100 Mo. There is also no information on thickness and size of the produced foils so the final result can not be compared to our work.
Expecting improvement of the purity of the melted material, and thus its malleability, the Mo powder was heated in the reducing atmosphere (1 h at 1 600°C at H 2 atmosphere) for removing the oxide residues before pellet forming. At other approach the pellet was sintered under mentioned condition but no improvement of the molybdenum malleability was observed. On the contrary, the droplet resulting from the pre-treated powder was less malleable. The Fig. 7 shows the foil prepared using the droplet produced from the powder sintered in the above listed condition.
Hot reshaping of the droplet and subsequent cold rolling
To produce thick foils, the relatively big droplets (6-7 mm diameter) were flattened in high temperature before rolling.
Molybdenum, oxygen resistant metal at ambient temperature, oxidises easily at temperature above 600°C. To protect molybdenum from oxidation at elevated temperature, the Mo droplet was packed into the stainless steel packet (envelope) under argon atmosphere (Fig. 8a, b). Fig. 6 Example of crack passing through the disc of *1 mm thick Fig. 7 The 80 lm thick foil produced from the droplet obtained by melting the Mo pellet beforehand sintered at 1 600°C under the hydrogen atmosphere Fig. 8 To flatten the Mo droplet in the high temperature, droplet prepared by powder melting was packed into the stainless steel envelope (a) and sintered tightly under the argon atmosphere (b). The packet was heated up to 1 100°C and when hot, pressed under the hydraulic press (c) The packed droplets of *6-7 mm in diameter were heated at temperature of 1 100°C for 3-5 min and when hot were flattened with the use of hydraulic press as quickly as possible to preserve the high temperature (see Fig. 8c). The height of the droplet and further of the disc was reduced by 20-25 % at initial steps and by *15 % in consecutive steps until disc was about 1-1.5 mm thick. Example of the forces used for flattening is given in the Table 2.
After last flattening, the packet was left under argon atmosphere for cooling down. When cold, disc was removed from the envelope and rolled down to the required thickness of few hundreds micrometer. The Fig. 9 shows the foil of 320 lm with crack free area (*1.5 cm 9 1.6 cm) sufficient for the target. But as can be seen in Table 2 (sample E4/1), foils of 600 lm of bigger area (1.5 cm 9 2.5 cm) with only single crack, 2-3 mm long, were prepared from later produced droplets of 100 Mo.
The foil was prepared from droplet of 7 mm diameter. The upper part of the presented foil was used to produce thinner, 10 lm thick foils needed to build stacked foil target. Size after hot press: 7 9 7.5 9 2.5 mm, slightly oxidised, after oxide removal by e-beam heating, additional cold press was applied: 8.65 9 8.
Conclusions
Big molybdenum beads (6-7 mm in diameter made of more than 1 g of the material), prepared for rolling by powder melting with e-beam gun and hot flattening of the received droplet, demonstrated better malleability than only thoroughly melted material. It was possible to produce the thin foils in much shorter time than in the case of material prepared by melting only. The thickness reduction per pass was of similar value but number of passes per reduced thickness required to get 'no size changeable' foil significantly dropped down. Described procedure allows not only production of thick foils free from cracks but makes also possible to produce the thin foils of big area (Fig. 10). The thinnest foil produced at this work was of *250 nm (thickness measured by alpha particle energy loss method [21]). Below this thickness the material starts sticking to the rolling pack and tries to further reduce the foil thickness were not undertaken. The main aim of this work was to develop the procedure of production of thick (few hundreds micrometer) and thin (10 lm) Mo foils/plates of area of *1.5 9 1.5 cm, thus the possibilities of further thinning of the foil were not investigated. It is not excluded that an application of anti-adhesive agent such as e.g. Teflon as rolling pack lining would allow reduction of the foil thickness.
The hot reshaping of the Mo droplet in the way described above, applied before cold rolling, is relatively simple. The Mo material after cooling down can be easily removed from the envelope and sticking to the stainless steel as reported by Karasek [17] at hot rolling applied by him was not observed. | 3,007.2 | 2015-02-15T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Spatial survey of tephra deposits in the middle Lahn valley ( Hesse , Germany )
Tephra deposits and especially Laacher See tephra (LST) deposits resulting from the Laacher See eruption (12.9 ka) are an important stratigraphic marker for the Allerød period in central Europe (van den Bogaard and Schmincke, 1995). Within the central German low mountain range (Rhenish Massif and eastern areas) the LST was found within soils (initial deposits, sheltered slope positions) and valleys (relocated deposits) (Bos and Urz, 2003; Hahn and Opp, 2005). The Niederweimar gravel quarry, located on the lower terrace in the middle reach of the Lahn River valley south of Marburg (Hesse, Germany), is known for its high-resolution stratigraphy of Quaternary gravel deposits and late glacial, as well as Holocene, floodplain fines (Lomax et al., 2018). This particular stratigraphy is mainly achieved by the up to 2 m thick LST deposits, which consist of pure LST beds and a multitude of fine LST bands (partly interbedded with black sands or interrupted by clay bands). The origin of the LST in the floodplain is attributed to an extensive deposition (aeolian, directly in the floodplain), as well as later fragmentation of the tephra deposits by surface erosion and renewed deposition of LST from the catchment area through changing river systems (Bos and Urz, 2003; Lomax et al., 2018). The surroundings of the gravel quarry are also rich in archaeological finds reaching more or less continuously from the Mesolithic (11.7 to 7.5 ka) to the Middle Ages (Bos and Urz, 2003; Lomax et al., 2018). Further well-summarized information about the situation within the Niederweimar gravel quarry can be found in Lomax et al. (2018) or on the website of the archaeological survey of Hesse (https://lfd.hessen.de/, last access: 21 March 2021). The evidence of LST in the Lahn valley, as in other valley sediments, is often limited to gravel pits (other larger excavations). These pits and their profiles offer very good insights (e.g. detailed lithostratigraphic description of profiles), but they are always limited to a comparatively small spatial section of the entire floodplain (gravel pit area). Therefore, the objective of the presented study is to provide a spatial survey of LST deposits in the middle Lahn valley, covering the entire floodplain cross section. The following two questions form the focus of the spatial survey. (1) How is the lateral and vertical extension of the LST deposits within the Lahn valley floodplain? (2) Does the spatial distribution provide overarching information about the deposition dynamics of the LST? For this purpose, a transect-based survey with qualitative analysis of LST grains based on density separation and visual identification (stereomicroscope) was applied.
Introduction
Tephra deposits and especially Laacher See tephra (LST) deposits resulting from the Laacher See eruption (12.9 ka) are an important stratigraphic marker for the Allerød period in central Europe (van den Bogaard and Schmincke, 1995). Within the central German low mountain range (Rhenish Massif and eastern areas) the LST was found within soils (initial deposits, sheltered slope positions) and valleys (relocated deposits) (Bos and Urz, 2003;Hahn and Opp, 2005). The Niederweimar gravel quarry, located on the lower terrace in the middle reach of the Lahn River valley south of Marburg (Hesse, Germany), is known for its high-resolution stratigraphy of Quaternary gravel deposits and late glacial, as well as Holocene, floodplain fines (Lomax et al., 2018). This particular stratigraphy is mainly achieved by the up to 2 m thick LST deposits, which consist of pure LST beds and a multitude of fine LST bands (partly interbedded with black sands or interrupted by clay bands). The origin of the LST in the floodplain is attributed to an extensive deposition (aeolian, directly in the floodplain), as well as later fragmentation of the tephra deposits by surface erosion and renewed deposition of LST from the catchment area through changing river systems (Bos and Urz, 2003;Lomax et al., 2018). The surroundings of the gravel quarry are also rich in archaeological finds reaching more or less continuously from the Mesolithic (11.7 to 7.5 ka) to the Middle Ages (Bos and Urz, 2003;Lomax et al., 2018). Further well-summarized information about the situation within the Niederweimar gravel quarry can be found in Lomax et al. (2018) or on the website of the archaeological survey of Hesse (https://lfd.hessen.de/, last access: 21 March 2021).
The evidence of LST in the Lahn valley, as in other valley sediments, is often limited to gravel pits (other larger excavations). These pits and their profiles offer very good insights (e.g. detailed lithostratigraphic description of profiles), but they are always limited to a comparatively small spatial section of the entire floodplain (gravel pit area). Therefore, the objective of the presented study is to provide a spatial survey of LST deposits in the middle Lahn valley, covering the entire floodplain cross section. The following two questions form the focus of the spatial survey. (1) How is the lateral and vertical extension of the LST deposits within the Lahn valley floodplain? (2) Does the spatial distribution provide overarching information about the deposition dynamics of the LST? For this purpose, a transect-based survey with qualitative analysis of LST grains based on density separation and visual identification (stereomicroscope) was applied.
Methods
Survey and sampling of tephra samples was conducted at three floodplain transects (including active floodplain zone and lower terrace) in the south of Niederweimar gravel quarry where the floodplain shows one of its greatest transverse expansions (Fig. 1). Transects cover a maximum width of 973.9 m and are at 170.9 m a.s.l. (above sea level) with a height difference of ±1.4 m. Sampling was carried out with the help of a hand auger (Pürckhauer, Ø 2 cm, 2 m depth) and pile core probing (Ø 6-8 cm, 3 m depth). Soil properties and stratigraphy were documented in the field according to the national standard soil classification scheme (KA5, Ad-hoc AG Boden, 2005). Tephra-containing layers were identified visually (visible tephra grains) or by smeary consistence (owing to higher contents of allophane) (Jahn et al., 2006), proportion documented (percentage of tephra grains estimated by area according to KA5), and extracted for further analysis.
Samples for method validation and as comparison material (reference samples) were taken from a recently excavated profile (32U 480710 5621590) in the Niederweimar gravel quarry, which was excavated during archaeological work. The profile consists of Holocene floodplain loams (silt to sandy loam) above four LST layers (sandy loam, partwise alternating with black sands and ripple) and flood loam (late Pleistocene) at the base (Fig. 2). Three samples were taken from the upper (LST bands mixed with sandy loam), middle (thick LST bands, between black sands) and lower (LST bands interrupted by black sands with ripple shapes) parts of the LST layer (Fig. 2).
Reference samples and samples from the transects were dried at 100 • C (drying chamber) under weight loss control, subsequently carefully ground, and sieved to < 2 mm (stainless steel sieve, Retsch, Haan, Germany). Subsample material (20g per sample) was then mixed with 150 mL saturated NaCl solution (density adjusted, ρ = 1.2 g cm −3 ) within glass beakers, stirred (1 min, magnetic stirrer) and allowed to sediment for 20 min. This allows the tephra grains to be separated from other mineral components (sand to clay grains), despite the heavy minerals contained (Lomax et al., 2018), because the grains float in the dry state due to their many cavities (volcanic origin). The floating tephra grains were then sieved to > 50 µm (Atechnik, Leinburg, Germany) to separate clay and/or silt particles and filtered by vacuum filtration (cellulose filter, LLG-Labware, Meckenheim, Germany) to rinse out remaining salts. Dried filters were visually examined using a stereomicroscope (Motic SMZ-161 TL, Motic, Hong Kong). From the reference samples, 30 randomly selected tephra grains were extracted and measured (Moticam, Motic, Hong Kong). Samples from the transects were inspected according to the presented method, and the presence of tephra grains or their fragments was considered a positive finding. In addition, 19 samples containing proven LST were selected, and the particle size distribution was determined according to DIN ISO 11277 (2002) and the integral suspension pressure method (Durner et al., 2017).
Results and discussion
In total 56 tephra containing stratigraphically distinguishable layers were identified and sampled. Qualitative tephra analyses within the laboratory show a positive rate of 69.6 % in which case tephra could be detected under the microscope for the 56 samples. In 30.4 % of the layers, no tephra grains or larger fragments could be found even if the Greasing effect occurred in the field. The applied method is therefore suitable for qualitative detection, is fast and inexpensive, and can be extended by other methods such as mineral analysis and dating. Tephra grains occur usually entire with greybrown to greenish colours, clear holes and glassy surface structure (Fig. 2). The length to width ratio of the grains from a random sample of 30 grains comprises an average length of 782.7 (±288.4) µm and width of 557.12 (±179.5) µm. Grain surface and average and maximum length (1565.9 µm) correspond clearly to the reference samples (same colour, holes and glassy structure) with an average length of 724.3 (±245.6) µm.
Tephra layers come up at average depths of 68.6 cm below surface and end at average depths of 166.5 cm (±46.3 cm). Tephra layers have thicknesses from 6.0 up to 132.0 cm with an average thickness of 52.0 cm (Figs. 1b and 2). Grain size analyses of tephra-containing layers has a mean distribution of 30.2 % clay, 46.8 % silt and 23.1 % sand. However, the share of sand is ranging between 2.7 % and 57.9 %, resulting in the samples comprising the grain size classes clay, silty clay, silty clay loam to silt loam (outliers within loam and sandy loam), and they are thus very heterogeneous.
Stratigraphic classification of tephra layers corresponds to the findings of our reference profile and other profiles within the Niederweimar quarry (e.g. Lomax et al., 2018): The tephra layers are covered by Holocene floodplain sediments (floodplain loams, silty loams) and rarely with single clay layers or isolated gravel deposits. The lowermost tephra layers, partly with the same pattern of banding (LST bands and black sands) (Fig. 2), however, are only poorly visible within the drill sample in contrast to quarry profiles. Below the tephra, a thin band (approx. 5 cm) of sand with gravel occurs, followed by a thick layer of clay (dark and rich in organic matter), before the gravel deposits of the lower terrace begin.
Regarding the vertical and lateral spatial distribution of tephra layers it can be stated that it is present nearly all over the floodplain area of the middle Lahn valley. The interpolation of the upper and lower tephra boundaries (Fig. 1b) shows that it is independent of the terrain surface today. This indicates that the tephra follows the morphology of Pleistocene gravel and flood loam deposits, as observed within the quarry (reference profile). The tephra is missing at the edge of the floodplain (drill point 101a: transition to the lower slope, colluvial formation), in the area of the active floodplain (drill points 108a and 2010a: river erosion) and in parts with inactive channel situations (drill point 304b) where Pleistocene gravel structures are found directly below the terrain surface. The interpolation also indicates, despite spatial inaccuracies, that the position of the lower tephra border allows for a partwise reconstruction of the terrain surface that existed at the time of deposition, with structures of various flow channels preformed in the Pleistocene.
Regarding the depositional conditions for LST within the Lahn River valley the heterogeneous grain size distribution indicates slow to medium flow velocities (clay-silt deposition) during LST deposition. From the findings presented in this report an area-wide deposition of LST in the middle Lahn valley can be assumed, which encompasses the entire width of the floodplain. LST deposits seem to follow the structure of preformed Pleistocene flow channels throughout the floodplain, as already stated for the quarry profiles (Bos and Urz, 2003;Lomax et al., 2018). The heterogeneous grain size distribution, banding of the LST with sandy loam intermediate layers and the absence of LST in the floodplain margin and active floodplain area (erosion) indicate fluvial deposition. Therefore, it can be concluded that the LST deposits originate from the entire upper catchment area and have been reworked within the entire floodplain.
Conclusion
The present survey shows an area-wide occurrence of LST deposits within the floodplain area of the middle Lahn valley. Vertical extension of LST deposits seems to follow preformed Pleistocene channel structures. The spatial distribution of LST deposits allows for an overreaching consolidation of former findings regarding the LST deposition and origin: LST deposition took place during the Allerød period, when river systems change due to climatic shifts. The findings presented here support the assumption that large quantities of LST have been deposited and relocated in the floodplain, which must originate mainly from fluvial transport originating from the entire catchment area instead of direct aeolian deposition. The widespread distribution of the sediments alone suggests large quantities of LST. Even though the analyses of quarry profiles have provided major insights about LST deposition in previous research, the presented results offer a spatial view that goes beyond this and contribute to the reconstruction of preformed Pleistocene channel structures. As the pure LST deposits occurring in the Lahn valley represent a special stratigraphic unit, further research should aim to clarify the details of LST deposition, exact sequence of events and former river system characteristics by a holistic spatial methodological approach in the future and also in other regions. In relation to the more than 11 000 years of documented settlement history within the study area, the immense occurrence and widespread distribution of tephra deposits may have influenced land use and cultivation opportunities across different cultures. Despite the fact that the aeolian deposition, fluvial transport and redeposition of a mas-sive amount of tephra at the beginning of the Holocene had an immense impact on the landscape and humans at that time, the deposits that are still present today provide an enhancement of the soil properties. In contrast to other valleys without these amounts of tephra, the alluvial floodplain soils of the middle Lahn valley can probably have a higher water holding capacity in dry periods, as well as a positive influence on groundwater reserves due to the tephra grains even if their soil cultivation might be affected by compaction (stagnic properties, stagnosols) in some places. Further interdisciplinary research from the fields of soil science and archaeology should examine the considerations and effects of the tephra deposits beyond their use as stratigraphic markers.
Data availability. The digital elevation model is used with the permission of the Hessian State Agency for Soil Management and Geoinformation. All further data generated during this study are included in this article or are available from the corresponding author upon request.
Author contributions. CJW conceived the project. CJW and VMHD collected reference samples and performed the profile description in the field. VMHD performed the spatial survey and further sample collection. CJW and VMHD have developed, tested and applied the density separation method for tephra grain extraction. VMHD performed the laboratory work. All authors contributed to the interpretation of the data and have contributed to the manuscript's initial and review processes. | 3,553.2 | 2021-07-01T00:00:00.000 | [
"Geology"
] |
Assessment of a prototype for the Systemization of Nursing Care on a mobile device 1
Abstract Objectives: assess a prototype for use on mobile devices that permits registering data for the Systemization of Nursing Care at a Neonatal Intensive Care Unit. Method: an exploratory and descriptive study was undertaken, characterized as an applied methodological research, developed at a teaching hospital. Results: the mobile technology the nurses at the Neonatal Intensive Care Unit use was positive, although some reported they faced difficulties to manage it, while others with experience in using mobile devices did not face problems to use it. The application has the functions needed for the Systematization of Nursing Care at the unit, but changes were suggested in the interface of the screens, some data collection terms and parameters the application offers. The main contributions of the software were: agility in the development and documentation of the systemization, freedom to move, standardization of infant assessment, optimization of time to develop bureaucratic activities, possibilities to recover information and reduction of physical space the registers occupy. Conclusion: prototype software for the Systemization of Nursing Care with mobile technology permits flexibility for the nurses to register their activities, as the data can be collected at the bedside.
Introduction
Information Technology (IT) has become part of people's daily life all over the world.The application and the use of technology and computer-based technologies for health care are an ongoing process (1) .This accelerated development of scientific and technological modernization has produced new forms of knowledge construction and establishing relations with the labor world.It is believed that, in upcoming years, the advances of computer technology will revolutionize the processes at all levels of nursing work at health institutions and offer operational and strategic benefits for professional organization and practice (2) .
It is important to clarify that, often, the term health technology is associated with the machinery developed for individual rehabilitation and survival.It should be highlighted that this concept can be expressed in different ways: hard technology, which refers exactly to the common-sense idea of machines, organizational standards and structure; soft-hard technology, represented by the theoretical knowledge that will support the understanding of the health work process and soft technology, evidenced by the interpersonal relations aimed at attending to the user's needs (3)(4) .In this study, the goal is to evidence the contribution of both hard and soft-hard technology in the condition of nursing work.
As a result of the evolution in these technologies and the constant miniaturization process of computers, today, large amounts of information can be obtained and carried digitally using portable devices, such as handhelds, smartphones and tablets (5) .
Studies verify that the nurses' difficulty to employ other computer tools than mobile devices is to transport the information collected from the patient to the microcomputer.As a result of the distance between the location of the hardware and the patient's bedside, the nurse registered the data collected from a patient on paper and later transcribed it.That is one of the main problems in using fixed computers to register nursing practice, as the care activity involves the professionals' mobility to attend to different inpatients (6) .
In the context of nursing care at the Neonatal Intensive Care Unit (NICU) of the teaching hospital where this study was undertaken, the Nursing Care Systemization (NCS) was not developed.The records were handwritten and the clinical evolution was not standardized, occupying considerable physical space and, in addition, demanding the nurse's time to make notes.Thus, mobile computing emerges as an innovative technology for nursing care, through its application via a mobile device to other computers, via an interface of integrated and planned wireless network interface.The parallel use of mobile computing and the access to this kind of network can undoubtedly help considerable in the health professionals' daily life (7) .
With the mobile device at hand, information on the patient can be accessed, collected and documented at the bedside, steps of the Nursing Process can be developed and the professionals' need for mobility in patient care actions can be monitored.The time spent to document the activities can also be reduced and the probability of losing information can be decreased.This starts to be stored on the device instead of paper, which demonstrated how the characteristics of flexibility and dynamism converge mutually and contribute to the productivity of nursing care (8) .
In that perspective, this study was aimed at developed an assessing a prototype for a mobile device, which permits registering data for the Systemization of Nursing Care at a Neonatal Intensive Care Unit.
Method
An exploratory and descriptive study was undertaken, also characterized as an applied methodological research, developed at a teaching hospital located in the city of João Pessoa, Paraíba, between March and October 2014.
The development of the prototype followed three phases: 1 st phase -definition, in which the information for processing will be presented, the function, the performance of the program, the restrictions and the interfaces; 2 nd phase -development, when the data entry, the project architecture, the procedure details for the implementation and translation to the program language and tests on the applicability of the prototype are structured; and the 3 rd phase -maintenance, characterized by the correction of errors and adaptations to the users' requirements (nurses).
In the elaboration of the software prototype, a database was used which the nurses at the unit had constructed and validated.The main technological tools used to develop the software were: the program language Ruby, Ruby on Rails and JavaScript; the Bootstrap framework; the production server Ubuntu Linux, Nginx Webserver and the Database Management System.
Based on the implementation of the software at the Neonatal ICU, the nurses participated by using it in practice, and then by assessing the prototype.On this occasion, five professionals affiliated with the teaching hospital participated, who worked at the Neonatal ICU and were present in September and October 2014, when the system was maintained.
Concerning the participants' characteristics, the length of education ranged between 10 and 30 years Rezende LCM, Santos SR, Medeiros AL. and all of them held some kind of specialization degree -in education, collective health, occupational health and pediatric nursing; only one nurse held a Master's degree in Nursing.The length of professional experience at the unit ranged between 10 and 12 years.They were also asked about knowledge on informatics and they unanimously affirmed that they had never taken a course or training in information technology.
For the participants to maintain the system, a tablet was used, 7", dual core, Android 4.0, connected to the unit's wi-fi.It is important to highlight that the system ideally functions on any mobile device (smartphone or tables) with Internet access, without the need for minimal configurations.The software can also be used on computers, as its development permits the use on different platforms.
To assess the suitability of the protocol for mobile devices to the reality of the teaching hospital's Neonatal ICU, the participants were interviewed to get to know their opinion on the difficulties to handle the system, the importance of the prototype for NCS and the suggestions to improve it.The data were analyzed through a qualitative approach and Bardin's content analysis was chosen for the analysis.
Concerning the ethical aspects, the orientations inherent in the research protocol in National Health
Council Resolution 466/12 (9) were followed.
Results
The results appointed the two research phases: the first that showed how the prototype was developed and the second that assessed the prototype on a mobile device.
Care Systemization
The prototype was developed through the use of a database the nurses from the unit had validated, which presents the following empirical data, used in care practice: identification of the infant, anthropometric data, vital signs and motive for the hospitalization.Both are registered by the system administrator.The screen subsequent to the access (Figure 2) shows a message confirming that the login was successful.Thus, as a standard user, the nurse can access the software through the following options: 'beds', 'patient' or 'exit' the system.After concluding the selection of the Nursing Interventions, the nurses terminates and saves the nursing care records, which are filed in the system.All information produced can be printed (producing a .pdffile), when finishing the inclusion of information or at any other time.
Concerning the system functions for the user/ administrator, the initial screen is similar to the screen displayed to standard users.The main difference relates to the other functions the system offers for this kind of user.The administrator can make changes in the system database, that is, new information can be included in the functions considering the Nursing Process: needs, assessment items, nursing diagnoses and interventions.
The same person is responsible for managing the users, that is, for registering the nurses who can use the system.
Care Systemization on mobile devices
To assess the applicability of the system developed for mobile technology in care practice, after the software maintenance period, interviews were held with five nurses.
Through the assessment of the prototype, three categories could be identified: appointing system maintenance difficulties, acknowledging the importance of the prototype for NCS on mobile devices and suggesting changes to adapt to the care reality.
Category 1 -Appointing system maintenance difficulties [...] the main difficulty is that I'm not good with informatics (laughs), but… I was able to handle it, I was able to open it, do the evolutions (E1); [...] in my case, I'm a starter, in systemization as well as informatics [...] I experience some difficulty (E2); [...] no, difficulty really, I couldn't find anything [...] I was able to advance through all steps. I found it was easy, I've already got this small one, so... (referred to the mobile phone)) (E3); [...] no, no difficulty. I have the same table, so I had no problem (E5).
The reports demonstrate that some nurses experienced difficulties to use the tablet, which are related to the professionals' lack of experience with the technology and the fact that they had never used this kind of mobile technology in their work process, and mainly because it is a touchscreen device.
On the opposite, some nurses did not experience difficulties to handle the mobile technology, affirming that they have a device similar to the table, either a mobile device like a smartphone or technology identical to the type used in the study.The statements highlighted show that there is a consensus among the nurses on the importance of an application for NCS using mobile devices.The advantages mentioned mainly refer to requisites like mobility and agility for the patient evolution and elaboration of the care plan, thus optimizing the time, besides the flexibility granted in the management of care actions when using the tablet.These reports demonstrate the need for changes in the final step, in which all information included and the selected items are listed.This screen produces the document that is printed and attached to the patient history.The nurses also suggested changing the color between questions and answers for any professional to see the information easily.In addition, the need was highlighted to automatically save the information and to adapt some software items to the particularities of infant assessment in critical conditions.Other suggestions referred to the possibility to compact the final information to reduce the size of the PDF document and, consequently, to reduce the consumption of printing paper.
Category 2 -Acknowledging the importance of the prototype for NCS on mobile devices
Rezende LCM, Santos SR, Medeiros AL.
Discussion
It cannot be denied that the technological advances have increasingly influenced the health care practices.In that context, the technology has also greatly influenced the nurses' daily work.In recent years, the use of IT, including computers, portable digital devices and the Internet, has advanced in nursing knowledge, permitting the construction of a link between the art and science of nursing.In all spheres of these professionals' practices, in nursing research and in the insertion of informatics into nursing education, the technological resources play a very important role.If used correctly, technology is a way to save time, helping to offer high-quality nursing care, besides contributing to the nurses' proficiency (10) .
Nevertheless, the nursing professionals' lack of proximity with the computerization process is still present nowadays.In this study, it was verified that Initially, the difficulties and resistance existed but, as a result of the use, the users gradually adapted to the particularities of the mobile devices (11) .
Since 2003, in New York City, nurses from a health service who engage in home visits use tablets to document patient information.The mobile devices helped to computerize and handle different forms used during the visits (12) .This means that the inclusion of technologies into the nurses' daily work grows daily and that the professionals need to get familiar with these advances to adapt to the new reality.
Authors affirm that the mobile devices offer great advantages, including the fact of being portable (capable of being transported relatively easily), usable and functional, easy to connect and communicate with the users and with other devices.Another important aspect is the user's facility to move, as the mobile device fits into the hand palm, improves the visual quality and is more comfortable, light, easy to use and discrete (13) .
In a study on the use of tablets to register clinical information involving North American nurses, the authors concluded that these mobile devices are convenient.
In one of the reports, the participants highlighted that the nurses are always short of time and interested in anything that can simplify their lives and grant them some more free time (14) .
Concerning the interface, some authors affirm that, when assessing software based on the viewpoint of the end user, one of the most important factors is the communication interface between the user and the system, which should be easy to learn and intuitive because, to reach an objective, the user should follow "certain steps" easily.In this study, the nurses reported good acceptance of the program interface, merely suggesting some color changes, suggestions observed in earlier studies, where nurses highlighted that the software assessed should have more contrasting colors (15) .It was also observed in this study that some aspects of the system that permit assessing the infant's health condition, like the assessment of reflexes for example, need to offer judgment items that reflect the particularity of these clients.In this respect, the authors emphasize that the system developers have faced a permanent challenge to enhance the activity flow, reduce the professionals' work burden and adapt the design and content of the technological devices and systems to the reality of the nurses' care practice (16) .
Authors emphasize that, besides the contributions to care practice, the technological advances grant the nurses the opportunity to direct their professional destiny, adapting technological resources to care, helping them to envisage emerging trends in health as challenges and opportunities to grow in the career.New tools are available, new areas and new work, demanding experts in any country, a vast number of opportunities available to who decides to incorporate technological information in daily practice (17) .
In this study, the nurses' comments and suggestions permitted identifying the difficulties, importance and strategies to better adapt the prototype to the care reality of the Neonatal ICU, besides the advantages the system can offer in daily nursing work.
Conclusion
In this study, a system was developed that allowed the nurses to systemize nursing care at Neonatal ICU through the use of a tablet.When included in the care reality to support the practice, even if as a test, the research revealed that the nurses experienced difficulties to use the mobile devices, but that the advantages surpassed these obstacles.
It was verified that an NCS system through mobile technology enhanced the flexibility of the nursing records, because the data were collected at the bedside and the Nursing Process was dev eloped anywhere at The project was forwarded to the Research Ethics Committee, approved and registered in the National Information System on Ethics in Research involving Human Beings (SISNEP), under CAAE-25890914.5.0000.5183, on March 13 th 2014.
The data on the assessment parameters of the infant's health condition and which supported the construction of the care plan were elaborated in view of the following human needs: shelter, thermal regulation, oxygenation, hydration, nutrition, cutaneous-mucous, physical and corporal integrity, exercise, physical motility, sleep and rest, perception, endocrine regulation, need for elimination, therapeutics, communication and, finally, the nurse's supplementary notes.What the elaboration of the care plan is concerned, 273 assertions are presented, 143 of which related to the Nursing Diagnoses and 130 to the Nursing Interventions, constructed based on the ICNP 1.0.Concerning the system functions, there are two types of credentials: standard users, which in this study refer to the nurses from the Neonatal ICU, and the administrator, in this case the researcher.The standard user can do the following: include and edit patients, occupy/void beds, visit the patients, consults the data on a visit done, print data, consult the time of the visit and the patient.Besides the above actions, the administrator can also: manage beds, include categories of indicators, which refer to the human needs, include nursing diagnoses and interventions, manage users, besides excluding patient information, as can be observed in the diagram of usage cases, shown in Figure 1.
Figure 1 -
Figure 1 -Diagram of case functions according to type of system user
Figure 3 -
Figure 3 -Images shown to inform on available and occupied beds
Figure 2 -
Figure 2 -Initial screen of the system after the login
Figure 4 -
Figure 4 -System categories for the development of NCS for infants
Figure 5 -
Figure 5 -Suggested Nursing Diagnoses [...] it (tablet) grants you freedom to move and do the physical examination wherever you are, you already evolve and proceed with your work process (E1); [...] with the mobile device, I can go somewhere else if the computer is occupied, I can do it standing, you're very limited on the computer [...] I think that, in general, it gets more organized, the nurse, whoever it is, will follow those steps, will standardize (E3); [...] I gain time, I manage my actions better (E4); [...] the fact that the device is mobile grants the nurse mobility [...] agility to elaborate the care plan, a safe register, a file without occupying physical space, besides saving time in the implementation of the NCS (E5).
Category 3 -
Suggesting modifications in the system to adapt to the care reality [...] for us to visualize the end better, it would need more colors [...] the answers should be more colored (E1); [...] to adapt to the infant, some things need improvements, the part about the child's assessment (E4); [...] although I liked it, I would include the option to save the data as they are informed to the system [...] condense the information further, reducing the volume of paper needed for printing (E5).
none of the nurses had participated in a computer training course, which could also result in the difficulty to adapt the system for mobile devices to their daily work, despite their daily use of smartphones.With a view to minimizing the shortages in technology use in care practice, interventions can also take place during undergraduate nursing programs.In the same perspective, a pilot study undertaken at the University of Philadelphia in the United States sought strategies to include the table into the daily reality of nursing undergraduates and obtained similar results. | 4,502.4 | 2016-07-04T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Theoretical Estimate of the Glass Transition Line of Yukawa One-Component Plasmas
The mode coupling theory of supercooled liquids is combined with advanced closures to the integral equation theory of liquids in order to estimate the glass transition line of Yukawa one-component plasmas from the unscreened Coulomb limit up to the strong screening regime. The present predictions constitute a major improvement over the current literature predictions. The calculations confirm the validity of an existing analytical parameterization of the glass transition line. It is verified that the glass transition line is an approximate isomorphic curve and the value of the corresponding reduced excess entropy is estimated. Capitalizing on the isomorphic nature of the glass transition line, two structural vitrification indicators are identified that allow a rough estimate of the glass transition point only through simple curve metrics of the static properties of supercooled liquids. The vitrification indicators are demonstrated to be quasi-universal by an investigation of hard sphere and inverse power law supercooled liquids. The straightforward extension of the present results to bi-Yukawa systems is also discussed.
Introduction
When liquids are quenched below their melting point by cooling or compression in a manner that suppresses crystallization [1], they exhibit a dramatic slowdown in dynamics and remarkable increase in their viscosity. Since quenching is typically caused by cooling, these metastable liquids are known as supercooled and, for a sufficiently low temperature, they can undergo dynamical arrest and transform into a glass [2]. The process of liquidglass transition or more simply glass transition has been the source of various questions concerning the nature of the transition and the microscopic mechanisms driving it [3][4][5].
The physics of the glass transition have been addressed with a mix of experiments, computer simulations and theoretical approaches. In experiments, the glass transition has been investigated in colloidal systems [6][7][8], granular media [9,10] and organic compounds [11]. Simulations have led to important insight in the physics of supercooled liquids in regimes often not accessible in experiments by adopting simplified models such as the Kob-Andersen [12][13][14] and hard-sphere binary mixtures [15,16]. Theoretical approaches such as mode coupling theory [17,18], random first-order transition theory [19] and dynamic facilitation theory [20] have rationalized some experimental findings and even predicted previously unobserved features of the vitrification process [21,22].
In this paper, mode coupling theory (MCT) is employed to estimate the glass transition line of Yukawa one-component plasmas (YOCP). MCT is an entirely first-principle approach for the investigation of the dynamic processes occurring in glass-forming liquids, which allows to localize the glass transition point requiring only the knowledge of the static properties of the supercooled liquid as an input. MCT is known to lead to good predictions for the dynamics of supercooled liquids [22], which also applies for the glass transition point in spite of the fact that the MCT bifurcation (associated with the glass transition) should rather be interpreted as a cross-over point from a non-activated into an activated dynamics regime [4,22]. The YOCP comprises of equally charged point particles immersed in a neutralizing background that interact via the pair potential u(r) = (Q 2 /r) exp(−r/λ). Here Q is the particle charge and λ the screening length defined by the polarizable background. Thermodynamic YOCP states are uniquely specified by two dimensionless variables [23]: the coupling parameter Γ = βQ 2 /d and the screening parameter κ = λ/d. This allows to re-write the interaction potential as βu(x) = (Γ/x) exp(−κx), where d = (4πn/3) 1/3 is the Wigner-Seitz radius, n is the particle number density, β = 1/(k B T) and x = r/d is a normalized distance. The YOCP possesses a well-understood phase diagram in terms of the κ and Γ variables [24].
The main motivation of this work lies in the relevance of the YOCP model to the experimental realization of complex plasmas, a novel state of soft matter composed of charged particles of mesoscopic size that are immersed in a weakly ionized plasma [23]. The YOCP has been suggested as a promising tool to investigate the dynamics of glassy systems [25], but glass formation has remained experimentally elusive in three dimensional complex plasmas. In particular, complex plasmas have already been employed in order to study supercooled fluids near the vitrification point in two dimensions [26,27], but threedimensional glassy structures exhibiting dynamical arrest still remain to be observed. The accurate estimate of the glass transition line provided in this work should help to guide current [28] or future complex plasma experiments in microgravity conditions that are or will be actively searching for the glassy state of plasmas. It is worth pointing out that MCT calculations of the YOCP glass transition line are already available in the literature [29]. However, there is space for drastic improvement over the existing prediction due to the use of oversimplified structural input that should be grossly inaccurate within the supercooled liquid regime.
Theoretical Background
This section provides an overview of the theoretical background upon which the remaining part of this work is constructed. The equations and approximations which characterize mode coupling theory are presented, the basics of isomorph theory are discussed and the integral equation theory of liquids employed to compute the static structural properties is presented.
Mode Coupling Theory of the Glass Transition
In order to distinguish glasses from stable and supercooled fluids, it is necessary to consider the temporal evolution of the microscopic dynamics. One common probe for such dynamics is the intermediate scattering function, F(k, t), which quantifies, at time t and for the wavenumber k, the correlation of the density fluctuations over a length scale ∼k −1 . The temporal dependence of F(k, t) exhibits three distinguished behaviors between the fluid, supercooled and glassy state [30]: for stable fluids, the intermediate scattering function relaxes exponentially in time, F(k, t) ∼ e −t/τ . On the other hand, supercooled liquids exhibit a multi-stage relaxation in which an initial exponential decay is followed by the so-called β-relaxation regime in which the particles are trapped in cages and the intermediate scattering function exhibits a plateau, i.e., F(k, t) ≈ const.r Such plateau is eventually destroyed during the final α-relaxation regime when the particles escape from their cages, ergodicity is restored and the intermediate scattering function relaxes towards zero following a stretched-exponential law, F(k, t) ∼ e −(t/τ)γ . In the above τ and γ are two constants which are both wavenumber and temperature dependent. Finally, for glasses, there is no full relaxation of the dynamics, particles escape their cages only in rare events and the intermediate scattering function is characterized by a persistent plateau which leads to a positive asymptotic limit F(k, t → ∞) > 0.
MCT provides an equation of motion for the intermediate scattering function [17,18]. In what follows, we will briefly describe the derivation of the fundamental MCT equation with the main purpose of highlighting the assumptions that are adopted in MCT, the reader is addressed to Ref. [30] for a detailed derivation of the MCT equation of motion. Let us start by considering a system with N particles of mass m that are enclosed in a volume V with average number density n = N/V and temperature T. Using r j (t) to denote the position of a particle j at a time t, we introduce the time-dependent microscopic density ρ(r, t) = ∑ N j=1 δ[r − r j (t)] and the microscopic density fluctuations δρ(r, t) = n − ρ(r, t). The intermediate scattering function is then defined as the autocorrelation function of the density fluctuations, F(k, t) = (1/N) δρ(−k, 0)δρ(k, t) , where ... denotes the ensemble average operator and where δρ( is the spatial Fourier transform of the density fluctuations (we have implicitly assumed that the system is isotropic). Employing the Zwanzig-Mori projection formalism [31,32], it is possible to obtain the following exact integro-differential equation for the intermediate scattering function [18] is the static structure factor and M(k, t) is a memory kernel which can be expressed as the ensemble average of a fluctuating force that depends on density fluctuations and pair density products of the form δρ(−k, t)δρ(k, t) [30].
Equation (1) has the structure of an oscillator equation in which Ω(k) plays the role of the characteristic frequency, while the memory kernel acts as a generalized and time-dependent friction coefficient. However, without a simplified expression for the memory function, Equation (1) cannot be solved. Within MCT, such simplified expression for M(k, t) is derived by adopting the procedure explained below. In MCT, the memory kernel is decomposed into the regular M reg (k, t) part that describes the short-time conventional liquid dynamics and the asymptotic M MCT (k, t) part that describes the long-time dynamics dominated by the interplay between caging and ergodicity-restoring effects [18], M(k, t) = M reg (k, t) + M MCT (k, t). The regular part is neglected [30] or approximated with M reg (k, t) = νδ(t) with ν a friction constant [17,33], since the glassy state is mainly concerned with the long-time behavior of the density correlation function. The second part is treated under the assumption that the most relevant contribution stems from the pair density products of the fluctuating force. Hence, M MCT (k, t) is projected onto a basis of pair density products ρ(k 1 , t)ρ(k 2 , t) with a properly defined projection operator running over all (k 1 , k 2 ) pairs relevant to the system [30]. The projection leads to the emergence of triplet correlation functions containing only static properties and of a time-dependent four particle density correlation function; the triplet correlation functions are factorized into static structure factor products within the convolution approximation [34], while the four particle density correlation function is approximated as the product of density pair correlation functions with Kawasaki's approach [35]. These lead to the MCT intermediate scattering function equation where τ(κ) = ν/[Ω(k)] 2 and the memory kernel is given by [30] In the above p = |p|, p = k − k and c(k) is the Fourier transform of the direct correlation function which is related to the static structure factor via the Ornstein-Zernike equation (see Section 2.3), S(k) = 1/[1 − nc(k)]. Provided that the static structure factor and the viscosity are known, Equation (2) is a self-consistent equation for F(k, t) which can be solved subject to the initial conditions F(k, 0) = S(k) and ∂F(k, 0)/∂t = 0. Equation (2) is sometimes reported in its over-damped form by assuming that the viscosity is so large that the inertial second order derivative term can be neglected [18,33,36]. However, while the over-damped form of Equation (2) is appropriate to study colloids, it would be inaccurate for complex plasmas in which the dilute background gas [23] makes viscous damping less predominant.
Given that glasses can be distinguished from conventional liquids from the asymptotic limit of the intermediate scattering function, it is sufficient to obtain an equation for F(k, t → ∞) in order to investigate the glass transition properties. Rewriting Equation (2) in terms of the normalized density autocorrelation function φ(k, t) = F(k, t)/S(k), Laplace transforming and employing the final value theorem to extract the asymptotic limit, leads to the form factor f (k) = lim t→∞ φ(k, t) MCT equation The solution of the MCT equation for the form factor allows to distinguish between liquids and glasses, since the former are characterized by f (k) = 0 while the latter are characterized by f (k) > 0. It should be noted that, at the glass transition point, the form factor changes discontinuously from zero to some positive critical value f c (k) > 0. This discontinuity in the form factor happens despite the fact that the static properties do not exhibit any discontinuity between supercooled liquids and glasses. This bifurcation phenomenon is a manifestation of the feedback between the force fluctuations in the memory kernel and the density fluctuations in the form factor [18,21].
Isomorph Theory
Isomorphic curves are phase diagram lines of constant excess entropy along which a substantial set of structural and dynamical properties remain approximately invariant when expressed in dimensionless units where the length is normalized to a = n 1/3 and the energy to k B T [37,38]. While it is possible to identify lines of constant excess entropy in the phase diagram of any system, only in R-simple systems the isentropic lines are also isomorphs. R-simple systems are characterized by the property that the ordering of the potential energies of two configurations corresponding to the same density is preserved when the two configurations are re-scaled uniformly to a different density [39]. In other words, denoting with U the potential energy and with R the set of N particle's position {r 1 , ...r N }, R-simple systems satisfy the relation U(R a ) < U(R b ) =⇒ U(ζR a ) < U(ζR b ), where R a and R b are two equal-density configurations and ζ a positive scaling factor.
A recent investigation has shown that the YOCP is an R-simple system whose isomorphs can be accurately described with the following analytical parameterization [40] where α = a/d = (4π/3) 1/3 is the ratio between the mean-cubic inter-particle distance employed in isomorph theory and the Wigner-Seitz radius used for the characterization of the YOCP state point, while Λ is a constant close to unity which depends weakly on the state point. It should be noted that, with the assumption Λ = 1, the isomorph parameterization described by Equation (5) is equivalent to the semi-empirical expression utilized in Ref. [41] in order to fit MD data of the YOCP melting line [24,42] Γ m (κ) = Γ OCP m e ακ 1 + ακ + (ακ) 2 where Γ OCP m = 171.8 is the coupling parameter at melting in the one-component plasma (OCP) limit (κ = 0) [42]. The fact that the isomorphs and the melting line can be approximated with the same analytical expression is a reflection of the fact that for R-simple systems the melting line constitutes an isomorphic curve to the first-order [37,43].
Integral Equation Theory of Liquids
The integral equation theory (IET) of liquids gives access to the static liquid properties without resorting to computer simulations. For one-component liquids with pair-wise isotropic interactions, IET comprises of two exact equations: the Ornstein-Zernike equation [34] h(r) = c(r) + n c(r )h(|r − r |)d 3 r and the non-linear closure condition derived from cluster diagram analysis [34] In the above g(r) denotes the radial distribution function, h(r) = g(r) − 1 the total correlation function, c(r) the direct correlation function and B(r) the bridge function. Knowledge of the total correlation function h(r) gives direct access to the static structure factor via the Fourier space relation S(k) = 1 + nh(k). Thus, the solution of the above equations provides all the input that is necessary for the calculation of the form factor via MCT. Nevertheless, the solution of the IET system of equations requires an expression for the bridge function. The latter can be formally represented as a power series of the density, but such series is known to converge slowly already at moderate densities [44,45]. Combined with the fact that the calculation of the series coefficients becomes extremely cumbersome beyond the third order [46][47][48], this calls for the adoption of approximate bridge function expressions.
Among the wide variety of bridge function approximations or closures that have been proposed over the years [49], here we shall consider only three approximations which have been previously employed to investigate the fluid properties of the YOCP: the hypernetted chain (HNC) approximation, the isomorph-based empirically-modified hypernetted chain (IEMHNC) approach and the variational modified hypernetted chain (VMHNC) approximation. Within the HNC approximation, the bridge function is neglected altogether by assuming B(r) = 0 [50]. The HNC approximation is straightforward to implement and flexible, since it is not taylored to any specific interaction potential. For systems featuring Coulomb interactions, it has been shown that the HNC approximation produces qualitatively correct results for the static and thermodynamic properties of state points far from crystallization, but that its performance rapidly degrades near the freezing line [51].
The IEMHNC approach is an advanced closure to the IET system of equations that is built upon the ansatz that reduced-unit bridge functions remain exactly invariant along isomorphic lines [52]. Systematic indirect bridge function extractions along isomorphic curves with the aid of molecular dynamics simulations have confirmed the approximate validity of the invariance conjecture for Yukawa systems [53]. This ansatz is then combined with two external inputs, a closed-form expression for the isomorphs and a closed-form bridge function expression valid along any phase diagram line that possesses a unique intersection point with any isomorph, in order to construct an expression for the bridge function valid for the whole phase diagram [52]. The IEMHNC approach has been applied to Yukawa and bi-Yukawa fluids showing a remarkable agreement with computer simulations in the entire dense fluid region up to crystallization [52,54]. Note that the IEMHNC approach is only applicable to R-simple systems for which the aforementioned external input is available. In addition, the IEMHNC bridge function expression might be accurate only in a sub-region of the phase diagram which depends on the region of validity of the external inputs. For instance, with the currently available external inputs, the IEMHNC bridge function for the YOCP is strictly applicable for coupling parameters which satisfy 5.25 ≤ Γ iso (Γ, κ) ≤ 171.8 [52].
The VMHNC approximation is an advanced closure to the IET system of equations which is based on the ansatz of bridge function quasi-universality [55]. The quasiuniversality conjecture justifies the following two-step procedure adopted to define the bridge function [56]: first, the unknown bridge function B(r) is replaced with the Percus-Yevick bridge function of the hard sphere system, B HSPY (r; η), for which an exact analytical representation in terms of the inter-particle separation r and of the packing fraction η = πnσ 3 /6 (with σ the hard sphere diameter) is available. Second, the value of the packing fraction which ensures the correct mapping between B(r) and B HSPY (r; η) is determined with a robust method based on the minimization of a properly defined free energy functional. Since it is not constructed for a specific class of systems, the VMHNC approach shares the same flexibility of the HNC approximation and can be applied without any major modification to any one-component system characterized by purely repulsive interactions. For the specific case of the YOCP, the VMHNC is known to produce results that compare exceptionally well with computer simulations [57] and are as accurate as those obtained with the IEMHNC approach over the entire fluid region of the phase diagram [58]. The main drawback of the VMHNC approach lies in its computational cost, since the minimization of the free energy functional makes the algorithm for the solution of the IET more cumbersome, eventually causing the VMHNC approach to become up to 80 times slower than approximations which do not involve minimization [58].
Computational Approach
This section describes the computational methods employed in the solution of the MCT equation for the form factor and in the solution of the IET system of equations for the static structural properties. The advantages of combining MCT with structural input obtained with advanced IET closures are discussed and the present algorithm is benchmarked against the literature results.
Combining MCT with Advanced IET Approaches
Within MCT, the static structural properties of the supercooled fluid constitute the only external to the theory input that is required for the calculation of the glass transition line and the critical form factors. In the literature, such properties have been obtained either from computer simulations [59][60][61] or from IET calculations combined with elementary closures (when simulation input was unavailable) which include the aforementioned HNC approximation for soft long range interaction potentials [29,62] or the Percus-Yevick (PY) approximation for hard sphere short range interactions [16,17,33,[63][64][65]. More advanced IET closures which enforce thermodynamic consistency through free parameters or resemble the VMHNC approximation by featuring an optimized correspondence rule have also been considered [66][67][68]. However, they have received comparatively much less attention than the elementary HNC and PY closures. The objective of this section is to demonstrate that the adoption of advanced IET approximations for the calculation of the static properties is an essential ingredient for reliable predictions of the MCT glass transition line. In what follows, the HNC approximation will be compared to the IEMHNC approach. The PY approximation will not be discussed owing to the long range interaction potential of interest, while the VMHNC approach will not be addressed here owing to its similar accuracy to the IEMHNC approach [58].
The discussion will center around stable rather than supercooled fluids. The main reason is that it has proven to be extremely challenging to simulate supercooled liquids due to the necessity of preventing crystallization (especially for one-component systems) [69][70][71][72][73] as well as due to the high computational cost necessary to obtain an equilibrated configuration and to effectively sample the phase space [74][75][76][77], while it is relatively cheap from a computational point of view to perform computer simulations of the stable fluid state in order to quantify the accuracy of the different IET approximations. The secondary reason is that, formally, the region of validity of the IEMHNC approach is the stable fluid region. Thus, any attempt to compare the HNC and IEMHNC approximations outside this region involves extrapolations which inevitably cast a doubt over the result of the comparison. However, as will shall deduce later, the IEMHNC approach can indeed be extrapolated deep into the supercooled liquid regime without losing its accuracy.
The comparison is illustrated in Figure 1. Panels (a) and (b) feature a radial distribution function comparison between the results of the HNC, IEMHNC approximations and the results of molecular dynamics simulations. It is evident that the IEMHNC approach outperforms the HNC approximation leading to radial distribution functions that are nearly indistinguishable from the results of computer simulations within the first and second coordination cells [52]. In addition, the IEMHNC approach maintains the same level of accuracy throughout the stable fluid region [58]. On the contrary, the HNC approximation exhibits strong deviations from the MD results within the first coordination cell. More important, it becomes more and more problematic upon approaching the melting point [52,58]. The latter observation is a direct manifestation of the fact that the bridge function contribution to the static structural properties gradually becomes more prominent as the liquid-solid phase transition is approached. Finally, it is reasonable to assume that this trend continues also beyond the melting point, in the supercooled portion of the phase diagram, which implies that the HNC approximation leads to grossly inaccurate estimates for state points close to the MCT glass transition. Panels (c) and (d) of Figure 1 feature a static structure factor comparison between the results of the HNC and IEMHNC approximations. These panels shed light on another property of the HNC approximation, which is here observed for the stable fluid region but also holds beyond the melting point (as we shall deduce in what follows); namely, the fact that the HNC approximation produces quantitatively correct structural properties but for state points of much stronger coupling than that of the actual state point. In particular, in panel (c), the IEMHNC and HNC approaches are compared for the same state points and, as anticipated, lead to different results. Given the high accuracy of the IEMHNC approach, the IEMHNC results can be considered as nearly exact which confirms that the HNC approximation introduces noticeable distortions in the static properties. On the other hand, panel (d) demonstrates that the two IET approximations result to nearly identical structural properties, if the HNC approximation is applied at Γ = Γ HNC and the IEMHNC approach at Γ = Γ IEMHNC with Γ HNC Γ IEMHNC . In order to rationalize this result, it is convenient to rewrite the IET diagrammatic closure, Equation (8) . This leads to the physical interpretation of the bridge function B(r) as an additional repulsion which is superimposed on the interaction potential u(r), given that it is always negative except for a series of small positive peaks in its long-range behavior [79][80][81][82]. Thus, it becomes apparent that the HNC approximation, which assumes that B(r) = 0, would produce quantitatively accurate results, only when the state point is adjusted in such a way that u * (r) ≈ u(r). For the specific case of the YOCP, this amounts to say that the HNC approximation can be expected to reproduce the structural properties of the state point (Γ, κ), only if it is applied to a different state point (Γ HNC Γ, κ), as demonstrated in panel (d).
At this point, it is worth analyzing the consequences of the results discussed above on the MCT glass transition calculations. The MCT equation for the form factor, Equation (4), does not explicitly include any information about the interaction potential whose effect appears only indirectly via the static properties in the non-linear kernel stemming from the memory function. Therefore, it can be expected that the HNC approximation, which produces nearly exact static properties if the state point is artificially re-scaled towards the stronger coupling region, produces unreliable estimates for the location of the MCT glass transition line but accurate predictions for the shape of the critical MCT form factors. Therefore, for a precise determination of the MCT glass transition line, it is necessary to adopt more advanced IET closures. Here we consider two such closures, the IEMHNC approach and the VMHNC approximation. The reason behind considering both advanced closures is that none of them has been extensively tested in the supercooled regime, so it is important to confirm that they produce consistent results for a phase diagram region that falls outside their normal range of applicability.
Numerical Implementation
The following four-step procedure was devised for the efficient localization of the YOCP MCT glass transition line: (i) For a given value of the screening parameter κ, ten values of the coupling parameter Γ that belong to the YOCP supercooled region were considered according to the prescription Γ i = Γ i−1 + ∆Γ with i = 2, 3, ...10. Here Γ 1 and ∆Γ are κ-dependent free parameters that should be chosen in a manner that allows the exploration of a sizable portion of the YOCP supercooled region. (ii) For each (κ, Γ i ) combination, Equations (7) and (8) were solved with one of the three closures described in the Section 2.3 leading to the determination of the static structural properties of the supercooled fluid. (iii) For each (κ, Γ i ) combination, the form factor was computed from Equation (4) supplemented with the structural properties obtained from the IET equations. (iv) In the case of positive form factor for a state point characterized by Γ = Γ i , the update Γ 1 → Γ i−1 was employed, ∆Γ was divided by ten and the procedure was repeated until the coupling parameter of the glass transition point was determined within four digits of accuracy. In the case of zero form factors for all Γ = Γ i state points, the update Γ 1 → Γ 10 + ∆Γ was employed and the procedure was restarted. In the case of positive form factors for all Γ = Γ i state points, the update Γ 1 → Γ 1 − 11∆Γ was employed and the procedure was restarted. For the above procedure to prove successful, robust algorithms should be available for the solution of the IET system of Equations (7) and (8) and for the solution of the MCT form factor Equation (4). The algorithms implemented are outlined in the following paragraphs.
The IET equations were solved with a well-established algorithm [52,54,58] that is based on Picard iterations in Fourier space combined (when necessary) with mixing and long-range decomposition techniques to facilitate convergence. The Fourier transforms are computed on a discretized domain extending up to R max = 80d with a real space resolution ∆r = 10 −3 d and a reciprocal space resolution ∆k IET = π/R max = 0.039/d. The convergence criterion for the Picard iterations reads as |γ m (k) − γ m−1 (k)| < 10 −5 ∀k where γ(k) is the Fourier transform of the indirect correlation function γ(r) = h(r) − c(r). When the IET equations are solved within the VMHNC closure, the effective packing fraction η which appears in the VMHNC bridge function is found with a dedicated iteration cycle that is terminated when the convergence criterion |η m − η m−1 |/η m < 10 −5 is satisfied. A more detailed description of the algorithm employed in the solution of the IET equations can be found in Ref. [58].
The MCT equation for the form factor was solved over a discretized wavenumber domain with resolution ∆k = 0.1/d containing 400 equally-spaced points distributed between ∆k and k m = 40/d . Taking advantage of the fact that the long-time limit of the form-factor obeys the maximum property [33], Equation (4) is a short-hand notation for the non-linear term which appears in the right hand-side of Equation (4). Concerning this non-linear term, it was first rewritten in a form more suitable for numerical integration by applying the bipolar convolution theorem [12,17] (9) and then it was evaluated with the adaptive Gauss-Kronrod quadrature rule as implemented in the GNU Scientific Library [83]. The discretization employed in the solution of the MCT equation has two straightforward consequences. First, the fact that ∆k > ∆k IET suggests that it is necessary to interpolate the static properties obtained from IET when they are employed in the solution of the MCT equations. This, however, should not impact the final result of the MCT calculations because there is no extrapolation involved and because such results have proven to be independent from the grid resolution if ∆k ≤ 0.2/d (see Section 3.3). Second, the integrand values within the long wavelength limit (k = 0) and within the short wavelength limit (k → ∞) are inaccessible, since both these limits fall outside of the grid employed in the solution of the MCT equations. This implies that in Equation (9)
Benchmarking and Convergence Study
Before proceeding with the presentation of the results stemming from the systematic solution of the MCT equations over the supercooled YOCP phase diagram, it is important to discuss how the present algorithm was benchmarked against results available in the literature.
The MCT benchmarking exercise initially focused on hard-sphere (HS) systems, since the glass transition of HS systems has been the focal point of numerous investigations over the years [15][16][17]33] making it relatively easy to find reference values for the critical form factor and the glass transition point. In addition, it is possible to employ analytical expressions for the static structural properties of the HS systems, which allows to test the isolated performance of the MCT solver without worrying about the performance of the IET solver. Regarding the static properties of such systems, they have either been obtained within the Percus-Yevick approximation by employing the Wertheim-Thiele analytical solution [84,85] or expressed through the accurate semi-empirical parameterization proposed by Verlet and Weis [86]. In what follows, the former case will be coined as the Hard-Sphere Percus-Yevick (HSPY) system (this is the same system employed for the definition of the bridge function in the VMHNC approximation), while the latter case will be addressed as the Hard-Sphere Verlet-Weis (HSVW) system. Panel (a) in Figure 2 illustrates how the present numerical solutions for the critical form factor of the HSPY and HSVW systems compare against results available in the literature. It is evident that the present numerical implementation is able to correctly reproduce both the glass transition points (which occur at η = 0.516 for the HSPY system and η = 0.525 for the HSVW system) and the corresponding critical form factors. Figure 2 of Ref. [17] (discrete circles). The inset compares the form factor at the glass transition point for the HSVW system as obtained from the present numerical implementation (solid line) and from the results reported by Wu and Cao in Figure 1 of Ref. [63] (discrete squares). The HS calculations were performed with k m = 50/σ and ∆k = 0.1/σ where σ denotes the HS diameter. Panel (b) compares the form factor at the glass transition point for the OCP system as obtained from the present numerical implementation (solid line) and from the results reported by Yazdi and collaborators in Figure 2 of Ref. [29] (discrete diamonds). Note that for the present implementation we report the form factors at two values of the coupling parameter: Γ = 575 (magenta) which is the present glass transition point prediction and Γ = 590 (blue) which is the Yazdi glass transition point prediction.
The MCT benchmarking exercise was also extended to YOCP systems, where the investigation of Yazdi and collaborators [29] provided the only literature results that can be employed as a reference. In panel (b) of Figure 2, the present solution of the MCT form factor equation in the OCP limit is compared with the corresponding OCP solution (as reported in Figure 2 of Ref. [29]). In both cases, the static properties of the OCP were computed by closing the IET system of equations with the HNC approximation. Two problems can immediately be noticed: the predictions for the glass transition point do not match (the present implementation predicts a glass transition at Γ = 575, while the reference implementation predicts a glass transition at Γ = 590) and the two implementations generate different form factors for the same coupling parameter. Unfortunately, Ref. [29] contains very little information regarding the algorithms and the parameters used to solve the MCT form factor equation. Hence, in order to rule out the presence of numerical errors in the present solution, we had to resort to a systematic convergence study on the two free numerical parameters that emerge in the solution of Equation (4) with the memory kernel expressed via Equation (9), namely the wavenumber resolution ∆k and the cutoff wavenumber k m .
In the convergence study, the wavenumber resolution was varied between ∆k = 0.05/d and ∆k = 0.4/d, while the cutoff wavenumber was varied between k m = 10/d and k m = 50/d. The results of this investigation are reported in Figure 3 and reveal that a wavenumber resolution ∆k ≤ 0.2/d and a cutoff wavenumber k m ≥ 20/d are sufficient to obtain a form factor and glass transition point Γ g which are independent from the parameters employed in the numerical implementation. The results in Figure 3 refer to the OCP limit (κ = 0) and to structural input from the HNC approximation, but similar conclusions were obtained also for selected higher values of κ and structural input from the IEMHNC or VMHNC approximations. To summarize, this section demonstrated that the present numerical implementation of the MCT equations is able to reproduce results available in the literature for HS systems. Regarding YOCP systems, a small mismatch was observed between the present results and the literature results. However, while it was not possible to pin-point the origin of this mismatch due to insufficient information, the results of a thorough convergence study ensured that the present MCT calculations are accurate also for Yukawa systems.
The MCT Glass Transition Line
The glass transition line of the YOCP was computed for fifteen screening parameters which belong to the part of the YOCP phase diagram that is most relevant to experimental realizations of Yukawa systems and for which molecular dynamics based calculations of the liquid-solid phase transition are available [24,42] The observation that suggests that the IEMHNC approximation and the VMHNC approach maintain the consistency and accuracy which characterizes them for stable YOCP liquids also for supercooled YOCP liquids. In addition, it indirectly suggests that the defacto extrapolations of the IEMHNC and VMHNC bridge functions deep into the supercooled regime are rather safe. As a consequence, both the Γ MCT-V g (κ) and Γ MCT-I g (κ) estimates can be considered to be an accurate approximation to the "exact" MCT glass transition line which would have been obtained if the MCT equations were supplied with an "exact" structural input extracted from computer simulations. On the other hand, the observation that Γ MCT−H g (κ) ≈ 2Γ MCT-I g (κ) confirms that adopting the HNC approximation would lead to inaccurate estimates for the MCT glass transition line, see the detailed discussion in Section 3.1. In fact, assuming that the MCT-I or MCT-V calculations are able to predict the "exact" MCT glass transition line within a few percent, then the MCT-H calculations lead to an overestimation of the MCT glass transition line of roughly a factor two, regardless of the value of the screening parameter. This justifies our revisiting of the YOCP glass transition results earlier reported by Yazdi and collaborators [29], since the latter were exclusively based on MCT-H calculations.
Perhaps, the most significant result of Ref. [29] was the observation that, for YOCP systems, the glass transition line is almost parallel to the melting line. As a consequence, it can be well approximated by the following analytical semi-empirical formula [29] Γ g (κ) = Γ OCP g e ακ 1 + ακ + (ακ) 2 where Γ OCP g is the glass transition point in the OCP limit. It is important to test the validity of the scaling given in Equation (10) against the accurate glass transition predictions obtained from the MCT-I and the MCT-V calculations not only because it provides a simple convenient parameterization but also because its validity is a strong indication that the glass transition line constitutes an isomorphic curve, compare Equations (10) and (5). As we shall see in the following sections, the isomorph invariance of the MCT glass transition line implies the validity of numerous state-independent symmetries that can be exploited in different ways.
In Figure 4, the analytical scaling of Equation (10) is compared against the predictions of the three MCT glass transition calculations. At this point, it should be noted that during the comparison with a type of MCT calculation, the Γ OCP g pre-factor which appears in Equation (10) is adapted to match to the corresponding value that is given in the first line of Table 1. This means that Γ OCP g = 575.5 for MCT-H, Γ OCP g = 289.8 for MCT-I and Γ OCP g = 279.7 for MCT-V. It is apparent that the semi-empirical expression performs well for the MCT-H calculations (as it was already observed in Ref. [29]) and that it performs even better for the MCT-I calculations and the MCT-V calculations. To be more quantitative, for the set of screening parameters considered in this work (κ ≤ 5.0), the relative deviations between the analytical scaling and Γ MCT-H g (κ) can become as large as 15%, while the relative deviations between the analytical scaling and Γ MCT-I g (κ) or Γ MCT-V g (κ) never exceed 5%. The high accuracy of analytical semi-empirical expression for the MCT glass transition and the equivalence of Equation (10) to Equation (5) with Λ = 1, suggest that the glass transition line can be considered to be an isomorphic curve. This observation also rationalizes why the analytical scaling agrees better with the MCT-I calculations which feature an explicitly isomorph-invariant bridge function [52] and with the MCT-V calculations which feature an implicitly isomorph-invariant bridge function [58] rather than with the MCT-H calculations which neglect the bridge function altogether. It should be mentioned that a modified version of Equation (10) was also tested, in which the analytical scaling was made fully equivalent to the expression for the YOCP isomorphs by replacing ακ with Λακ and then by determining Λ via least-square fitting. The results of such modified scaling will not be analyzed further, because for all three calculations the parameter Λ was found to be very close to unity, suggesting that the choice Λ = 1 is already nearly optimal, and that the modified scaling possesses essentially the same accuracy of the standard scaling of Equation (10).
The above results indicate that the MCT glass transition line of YOCP systems constitutes an isomorphic curve. Given that isomorphs are phase diagram lines of constant reduced excess entropy, this suggestion can be supported by computing the reduced excess entropies of the YOCP state points that lie along the glass transition line, s ex (κ, Γ g ), and then by checking to which extent they satisfy the condition s ex (κ, Γ g ) = const. The accurate computation of the excess entropy requires to perform computer simulations of supercooled fluids combined with free-energy calculation techniques [87], but the correct implementation and execution of such simulations falls outside the scope of the present work. Hence, here we computed s ex (κ, Γ g ) by utilizing two YOCP equations of state available in the literature which provide the reduced excess internal energy, u ex (κ, Γ), from which the reduced excess entropy can be straightforwardly computed by thermodynamic integration from s ex (κ, Γ) = u ex (κ, Γ) − [24,42]. The equation of state proposed by Rosenfeld and Tarazona, within the framework of an asymptotically-high density expansion for purely repulsive potentials, fits well computer simulation results in the strongly coupled stable liquid portion of the YOCP phase diagram and leads to the reduced excess internal energy expression u RT ex (κ, Γ) = M(κ)Γ + 3.0[Γ/Γ m (κ)] 2/5 [88,89]. The κ-dependent parameters a(κ), b(κ), c(κ), d(κ) and M(κ) are given in the respective references, while the coupling parameter at the melting point, Γ m (κ), can be conveniently expressed via Equation (6) [54]. It should be emphasized that both equations of state are strictly valid only for the stable fluid phase of the YOCP. In order to confirm that they can be safely extrapolated to the supercooled regime, the reduced excess internal energies u H ex (κ, Γ) and u RT ex (κ, Γ) were compared against those computed from the IET structural properties, u ex (κ, Γ) = (3/2) ∞ 0 x 2 βu(x)g(x)dx, along the glass transition line that stems from the MCT-I and MCT-V calculations. The comparison showed that both equations of state could predict the reduced excess internal energy within 2% along the MCT-I glass transition line and within 1% along the MCT-V glass transition line leading to the conclusion that both can be employed to obtain accurate estimates for thermodynamic properties of supercooled Yukawa fluids.
The reduced excess entropies along the MCT-I and MCT-V glass transition lines are reported in Table 2. The MCT-H calculations are not included due to their poor glass transition line prediction. There are deviations in the entropy predictions of the equations of state. The Rosenfeld-Tarazona expression predicts that both glass transition lines satisfy s ex (κ, Γ g ) = −5.5 within 3%, whereas the Hamaguchi expression predicts reduced excess entropies that are not constant along the glass transition lines (see the relatively large deviations reported in the last two rows of Table 2) and change noticeably between the MCT-I and MCT-V glass transition lines. At this point, one should recall that the Hamaguchi equation of state was constructed on an empirical basis by observing the functional dependencies within the stable fluid range, while the Rosenfeld-Tarazona equation of state was derived on the basis of an asymptotically-high coupling parameter expansion that ignores the liquid-solid phase transition. Thus, the Rosenfeld-Tarazona equation of state should be more accurate for supercooled YOCP liquids. Overall, from the results of Table 2, it can be concluded that the MCT glass transition line constitutes an isentropic curve with s ex = −5.5 (within 3%) reduced excess entropy. Table 2. Excess entropy along at the MCT-I glass transition line (columns 3 and 4) and the MCT-V glass transition line (columns 6 and 7) as obtained from the Hamaguchi equation of state s H ex , and the Rosenfeld-Tarazona equation of state, s RT ex . All the excess entropies are reported in units of Nk b , i.e., these are reduced excess entropies. In the last three rows, the designation AVE denotes the average value of the reduced excess entropy along the respective glass transition line (rows 1-15), while the quantities a and m report the average and maximum relative deviations between the reduced excess entropies along the respective glass transition line and their average value reported in AVE.
The MCT form Factors
The critical form factors resulting from the MCT-H, MCT-I and MCT-V calculations are compared in Figure 5 for two screening parameters. Important observations from Figure 5 are that the MCT-H, MCT-I and MCT-V calculations lead to similar form factors at their respective glass transition points (compare the solid curves) as well as that there are enormous form factor differences between MCT-H and MCT-I or MCT-V calculations at the same state point (compare the dashed curves to the blue solid curve). These observations are in line with the discussions of Sections 2.3 and 3.1. They are both manifestations of the fact that the MCT non-linear kernel on the right hand side of Equation (4) requires only the static structure factor as external input and does not contain any explicit dependence on the interaction potential. The form-factor similarity at the glass transition point can be explained by the fact that the IEMHNC, VMHNC and HNC approximations can all be expected to produce similar structure factors at their respective glass transition points; the IEMHNC and VMHNC approximations due to their high accuracy (see Section 2.3) and the HNC for the reasons discussed in Section 3.1. On the other hand, the large discrepancies observed between the MCT-H calculations and the other two sets of calculations at the same state point are caused by the poor performance of the HNC approximation in the dense fluid region which was also discussed in Section 3.1. Figure 6 illustrates how the critical YOCP form factor changes with the screening parameter according to MCT-I and MCT-V calculations; similar results (not shown here) were also obtained from MCT-H calculations. Regardless of the IET approximation employed to compute the static structural properties, it is apparent that the critical form factor is characterized by a main peak whose magnitude and position are practically independent of the screening parameter under consideration. The main peak is preceded by a rapid decay towards a state-point-dependent long wavelength limit and is followed by slowly decaying oscillations which also appear to be state-point-independent. The results of Figure 6 are rather anticipated in light of our previous discussions. Since the MCT glass transition line constitutes an isomorphic curve, then a large number of reduced unit dynamic and structural properties should be nearly invariant while traversing the glass transition line. The static structure factor is known to be an isomorph invariant quantity, thus, given the deep connection between S(k) and f (k), it can be expected that also the form factor exhibits some degree of invariance. On the other hand, the variance of f (k) near the k = 0 limit is caused by the fact that the long wavelength limit of the MCT memory kernel is given by a relation of the form αS(0) = αχ T where the pre-factor α is described by Equation (9b) of Ref. [29] and where χ T = 1 + n h(r)d 3 r is the isothermal compressibility, which is known to strongly vary along an isomorph [90] (strictly speaking, S(0) = χ T does not hold for the OCP for which the correct infinite wavelength limit of the structure factor, S(0) = 0, must be retrieved from the small argument expansion S(q → 0) = (3Γ/q 2 + 1/χ T ) −1 with q = kd [91]). While the isomorph variance of S(0) has a negligible effect on the overall degree of invariance of the static structure factor due to the nearly incompressible nature of supercooled YOCP liquids (see Figure 7), the isomorph variance of S(0) has a strong effect on the degree of invariance of the form factor over a sizeable wavenumber interval due to the large value of the proportionality constant α which augments the small compressibility values.
The MCT Vitrification Indicators
The YOCP static structure factor, radial distribution function and direct correlation function along the MCT glass transition line (MCT-H, MCT-I, MCT-V) are illustrated in Figure 7. Let us first focus on the results for the state points pertaining to the MCT-I and MCT-V glass transition lines. In accordance with the predictions of isomorph theory, the static structure factor and radial distribution function are almost invariant. On the other hand, owing to the asymptotic limit c(r → ∞) = −βu(r) and due to the connection to the compressibility via (1/χ T ) = −n c(r)d 3 r, the direct correlation function is strongly variant. Albeit not directly evident from Figure 7, the radial distribution function and the static structure factor are similar to f (k) in the sense that their short-range values are also non-invariant; g(r) is variant due to the asymptotic limit g(r → 0) ∝ exp[−βu(r)] [92], while S(k) is variant due to its connection to the compressibility S(k → 0) = χ T . However, while the short-range variance has a noticeable effect on the critical form factor, it is inconsequential for both the radial distribution function and the static structure factor since it occurs in a region where both functions are approximately zero. Finally, we point out that the invariance of the radial distribution and the static structure factor is slightly violated in the vicinity of their first maxima, but it holds to a nearly exact degree at their subsequent minima and maxima. Regarding the static properties along the MCT-H glass transition line, it can be noticed that S(k) and g(r) both show a reduced degree of invariance, in accordance to an earlier observation which reported that assuming B(r) = 0 has a detrimental effect on the invariant properties along an isomorphic line [52]. The high degree of isomorph invariance which characterizes the static structure factor and the radial distribution function at the MCT glass transition line opens up the possibility of determining a group of empirical vitrification indicators. Such indicators are inspired by the successful application of freezing indicators which allow to locate the liquid-solid phase transition line simply by monitoring some characteristic features of the liquid state S(k) and g(r). Empirical vitrification indicators would serve as phenomenological criteria for the localization of the glass transition line based solely on the structural properties of the supercooled liquid. In fact, it was observed that two commonly used freezing indicators, namely the magnitude of the first peak of the static structure factor S max employed in the Hansen-Verlet freezing rule [93] and the amplitude ratio of the first nonzero minimum to the first maximum of the radial distribution function g R employed in the Raveché-Mountain-Streett freezing rule [94] perform reasonably well also for the prediction of the glass transition line, albeit at different values. In particular, it was revealed that the MCT-I glass transition line is characterized by S max = 4.49 ± 0.15 and by g R = 0.14 ± 0.01, while the MCT-V glass transition line is characterized by S max = 4.00 ± 0.08 and by g R = 0.13 ± 0.01.
MCT-H glass transition points
Given the fact that most of the non-invariant features of S(k) and g(r) are concentrated around their first peak and taking into account that an effective vitrification indicator should remain as constant as possible along the glass transition line, it should be possible to identify even better vitrification indicators by generating simple curve metrics that do not involve the magnitude of the first peak of these quantities. For this reason, two additional prospective vitrification indicators were considered which refer to the amplitude ratio of the first nonzero minimum to the second maximum of the radial distribution function, g R2 , and the amplitude ratio of the first nonzero minimum to the second maximum of the static structure factor, S R2 . The values of these prospective vitrification indicators along the glass transition lines stemming from the MCT-I and the MCT-V calculations are reported in Table 3. Table 3. Prospective vitrification indicators along the MCT-I glass transition line (columns 3 and 4) and the MCT-V glass transition line (columns 6 and 7). S APP R2 denotes the amplitude ratio of the first nonzero minimum to the second maximum of the static structure factor obtained from approximation APP, while g APP R2 denotes the amplitude ratio of the first nonzero minimum to the second maximum of the radial distribution function stemming from approximation APP. In the last three rows, the designation AVE denotes the average value of the vitrification indicator along the glass transition line (rows 1-15), while the quantities a and m report the average and maximum relative deviations between the vitrification indicators along the respective glass transition line and their average value reported in AVE. Both prospective vitrification indicators remain almost constant along the MCT glass transition line, with minor deviations from their average value which do not exceed 1% for g R2 and 3% for S R2 . These vitrification indicators possess another desirable property, since they exhibit respectable variations within the supercooled regime prior to and post the glass transition line (see Figure 8). This characteristic highlights their potential practical use for the localization of the MCT glass transition point. Overall, the indicators g R2 and S R2 perform better than S max and g R , since they exhibit slightly smaller variations along the MCT glass transition line and result to more consistent predictions between the two IET approaches employed in the computation of the MCT glass transition line. In this regard, the VMHNC approximation and the IEMHNC approach produce really close but not identical values for g R2 and S R2 at the glass transition point. We argue that the results obtained with the VMHNC approximation should be preferred over the ones obtained with the IEMHNC approach, since the former approximation is known to produce more accurate predictions for the second coordination cell [58]. Combining the above, it can be concluded that the MCT glass transition point is characterized by g R2 = 0.30 or equivalently by S R2 = 0.35 and that these two conditions can be employed to obtain an accurate guess for the MCT glass transition line of the YOCP without having to solve the MCT equation. On a side note, it is worth pointing out that the vitrification indicator g R2 could also be employed as a freezing indicator, since the state points along the YOCP melting line obtained via computer simulations are all characterized by g R2 = 0.4 ± 0.01 (see panel b in Figure 8). for different screening parameters as a function of the coupling parameter normalized by its MCT glass transition value. Results for stable liquids (Γ/Γ g (κ) 0.6), supercooled liquids prior to the glass transition (0.6 Γ/Γ g (κ) < 1.0) and supercooled liquids post the glass transition (Γ/Γ g (κ) > 1.0). For each indicator, the superscript denotes the IET approximation employed, i.e., the VMHNC approach or IEMHNC approach. The panels report the S R2 or g R2 values for six screening parameters κ = {0, 1, 2, 3, 4, 5} and two MCT glass transition lines, namely those stemming from MCT-V calculations where Γ g (κ) ≡ Γ MCT-V g (κ) (main plot) and from MCT-I calculations where Γ g (κ) ≡ Γ MCT-I g (κ) (inset). The numerical values of Γ MCT−V g (κ) and Γ MCT-I g (κ) have been reported in Table 1. The full symbols represent the values of S R2 and g R2 at the YOCP melting point predicted by computer simulations [24].
Summary of the Results
In the present work, the glass transition line of Yukawa one-component plasmas was computed by combining the mode coupling theory of the glass transition with highly accurate structural input obtained from two advanced closures to the integral equation theory of liquids, namely the isomorph-based empirically modified hypernetted chain approach and the variational modified hypernetted chain approach. It was observed that both closures lead to consistent values for the YOCP glass transition line and it was concluded that the present results offer a greatly improved estimate compared to earlier estimates that are available in the literature. Besides the improvement upon existing results, the highly accurate structural input adopted in the present calculations allowed the identification of two vitrification indicators which can be employed to obtain an accurate guess for the YOCP glass transition line without necessitating a determination of the bifurcation point. The existence of vitrification indicators is important from a theoretical and practical standpoint; from a theoretical perspective, the possibility to identify reliable vitrification indicators that are solely based on the structural properties of the supercooled fluid is a direct manifestation of the fact that the glass transition line is an isomorph. From an experimental perspective, the vitrification indicators can be used to guide experiments aimed at reaching the glassy state since they can be readily estimated from the radial distribution function or the static structure factor, two quantities which are often easily measured in the course of an experiment either by direct camera observation in the case of soft matter [95] or by neutron diffraction in the case of atomic or molecular systems [96].
Quasi-Universality Aspects
A brief investigation of the glass transition point for the HSPY system and for three inverse power law (IPL) systems with exponents m equal to 4, 9 and 12 revealed that the structural vitrification indicators S R2 and g R2 have a quasi-universal character, i.e., the conditions S R2 = 0.35 and g R2 = 0.30 produce an accurate estimate for the glass transition point regardless of the system under consideration. In the investigation of the HSPY and IPL-m glass transition point, we proceeded as follows: (a) The static properties of the HSPY system were computed via the Wertheim-Thiele analytical solution [84,85], while the static structure factors for the IPL-4, IPL-9, IPL-12 systems were computed with the VMHNC approach. (b) MCT was employed for the IPL systems leading to the glass transition points n = 8.103 for IPL-4,ñ = 1.648 for IPL-9 andñ = 1.322 for IPL-12 whereñ = (β ) 3/m nσ IPL is a temperature-scaled density which fully specifies the state point of an IPL−m system with interaction potential u(r) = (σ IPL /r) m . On the other hand, it was not necessary to solve the MCT for the HSPY system, since it is known that it vitrifies when the packing fraction becomes η = 0.516, see Section 3.3. (c) The vitrification indicators were computed and it was revealed that the four systems satisfied S R2 = 0.35 and g R2 = 0.30 within 3% at their respective glass transition points.
In addition, the quasi-universal character of the thermodynamic vitrification indicator s ex = −5.5, obtained in Section 4.1, was tested. The reduced excess entropy of the HSPY system was computed from the Wertheim-Thiele equation of state [85] and the reduced excess entropy of the IPL systems was computed with the Rosenfeld equation of state [88]. | 13,816.6 | 2021-01-28T00:00:00.000 | [
"Physics"
] |
Effect of Hydromagnetic Mixed Convection Double Lid-Driven Square Cavity with Inside Elliptic Heated Block
Mixed convection heat transfer in a two-dimensional the effect of hydromagnetic mixed convection in a double lid driven square cavity with inside elliptic heated block is studied numerically. The left wall of the square cavity and inside the elliptic block was kept at Th while the right wall of the square cavity at a cold temperature Tc with Th > Tc. The lid is assumed to be upper wall moving in left to right and lower wall right to left directions respectively. The magnetic field of strength B is applied parallel to xaxis. Result is presented firstly for different Reynolds number ( ), Prandtl number Pr = 0.733 and presented for different Grashof number . The numerical results studied the effect of Reynolds number, Grashof number and buoyancy ratio on the local values. It is found that direction of lid and different elliptic heated block are more effective on heat and mass transfer on fluid flow with increasing magnetic field for all studied parameters.
Introduction
Mixed convection in enclosures is encountered in many engineering systems such as cooling of electronic components, ventilation in building and fluid movement in solar energy collectors, astrophysics, geology, biology, and chemical processes, as well as in many engineering applications such as solar ponds, crystal manufacturing, and metal solidifications processes. Also mixed convection involving the combined effect of forced and natural convection has been the focus of research due to its occurrence in numerous technological, engineering, and natural applications such as: cooling of electronic devices, lubrication technologies, drying technologies, food processing, float glass production, etc. [1][2][3]. Al-Amiri et al. [4] investigated numerically steady mixed convection in a square lid-driven cavity under the combined buoyancy effects of thermal and mass diffusion. The results demonstrate the range where high heat and mass transfer rates can be attained for a given Richardson number. Sharif [5] studied numerically laminar mixed convective heat transfer in two-dimensional shallow rectangular driven cavities of aspect ratio 10. The top moving lid of the cavity is at a higher temperature than the bottom wall. The effects of inclination of the cavity on the flow and thermal fields are investigated. The stream line and isotherm plots and the variation of the local and average Nusselt numbers at the hot and cold walls are presented. Chen and Cheng [6] investigated numerically the periodic behavior of the mixed convective flow in a rectangular cavity with a vibrating lid. The periodic flow patterns and heat transfer characteristics found are discussed with attention being focused on the interaction between the frequency of the lid velocity vibration and the frequency of the natural periodic flow. Khanafer et al. [7] investigated numerically unsteady laminar mixed convection heat transfer in a lid driven cavity. Teamah et al. [8] analyzed the numerical simulation of double-diffusive mixed convective flow in rectangular enclosure with insulated moving lid. Saha et al. [9] have performed the numerical effect of internal heat generation or absorption on MHD (magnetohydrodynamics) mixed convection flow in a lid driven cavity. The significant reduction in the average Nusselt number were produced as the strength of the applied magnetic field was increased. In addition, heat generation predicated to decrease the average Nusselt number whereas heat absorption increases it. Dawood et al. [10] investigated hydro-magnetic mixed convection double diffusive in a Lid Driven Square Cavity. Hussein [11] investigated the mixed convection in Square Lid-driven with Eccentric Circular Body. Munshi et al. [12] analyzed a numerical study of Mixed Convection in Square Lid-driven with internal elliptic body and constant flux heat source on the bottom wall. Nasrin [13] carried out aspect ratio effect of vertical liddriven chamber having a centered conducting solid on mixed magneto convection. Billah et al. [14] investigated the numerical analysis of fluid flow due to mixed convection in a lid-driven cavity having an heated circular hollow cylinder. Md Rahman et al. [15] studied unsteady mixed convection in a porous media filled liddriven cavity heated by semi-circular heaters.
The main aim of this work is to examine the effect of hydromagnetic mixed convection double lid-driven square cavity with inside elliptic heated block. Results will be presented by streamlines, isotherms, Nusselt number, and velocity profiles.
Mathematical Formulation
The governing equations describing the problem under consideration are based on the laws of mass, linear momentum and energy with buoyancy forces. The energy equation is written using the Boussinesq approximation. This means that all thermophysical properties of the fluid at a reference temperature are taken constant except in the buoyancy term of the momentum equation. In addition, the radition heat exchange is negligible in this study. The non-dimensional governing equations can be written as follows. The equations are non-dimensionalized by using the following dimensionless quantities where ν = µ/ρ, is the reference kinematic viscosity and θ is the non-dimensional temperature. After substitution of dimensionless variable we get the non-dimensional governing equations are:
Nusselt Number calculation
Equating the heat transfer by convection to the heat transfer by conduction at hot wall: Introducing the dimensionless variables, defined in equation (1), into equation (6), gives: The average Nusselt number is obtained by integrating the above local Nusselt number over the vertical hot wall:
Numerical Technique
The coupled governing partial differential equations, i.e., mass, momentum and energy equations are transferred into a system of integral equations by using the Galerkin weighted residual finite-element method. The integration involved in each term of these equations is performed with the aid Gauss quadrature method. The nonlinear algebraic equations so obtained are modified by imposition of boundary conditions. These modified nonlinear equations are transferred into linear algebraic equations with the aid of Newton's method. Lastly, Triangular factorization method is applied for solving those linear equations. For numerical computation and post-processing, the software COMSOL Multiphyics is used.
Program validation and comparison with previous work
The present numerical code is valided against the problem by Al-Amiri et al. [4] was modified and used for the computations in the study. The working fluid is chosen Prandtl number Pr = 0.733. The left wall is kept heated at T h and right wall is kept at cold T c . The upper surface move to right and lower surface move to left. Streamlines and isotherms plotted in Fig. 2 are showing good agreement.
Results and Discussion
The finite element simulation is performed to examine the laminar mixed convection flow and heat transfer in a square cavity having a centered elliptic cylinder. The mixed convection phenomenon in the cavity is influenced by Richardson number Ri ranging from 10 3 to 10 6 , Reynolds number Re varing from 50 to 150 and Prandtl number Pr = 0.733 respectively. Numerical computations are carried out and a parametric study is performed to illustrate the influence of the physical parameters on the resulting streamlines and isotherms as well as the velocity components at the enclose midsection, Nusselt number, velocity and temperature along the heat sources. Streamlines for Re = 50 and Pr = 0.733 are presented in Fig. 3 to understand the effect of Richardson number, Ri on flow field and temperature distribution. Most of the heat transfer occurs due to conduction except near the sliding right wall. Elliptic shape eyes formed at bottom wall of the cavity. For higher Richardson number, Ri elliptic shaped eyes are formed in the right wall of the cavity and also flow strength increases which are shown Fig. 3.
Conduction dominant heat transfer is obtained from the isotherms in Fig. 4 at Ri = 10 3 to Ri = 10 6 . With increases in Richardson number, Ri, isotherms concentrates near the right wall so isotherm lines are more bending which means increasing heat transfer through convection.
Variation of Local Nusselt number, velocity, temperature and average Nusselt number with Pr = 0.733 and Re = 50 inside the various elliptic diameter are shown in Fig. 5. When Richardson number Ri is small the local Nusselt number flow line near the bottom wall and Richardson number Ri increasing flow line increasing near the right wall. Fig. 5 shows the variation of the temperature and velocity profiles along the mid-sections of the cavity and the temperature distribution in most of the cavity is shown to retain similar behavior to that of a pure conduction regime except near the sliding top wall. In addition, Fig. 5 illustrates that the velocity components U and V
Conclusion
Effect of hydro-magnetic mixed convection in a double lid driven square cavity with inside elliptic heated block has been performed. Results have been presented in terms of streamlines, isotherms, average Nusselt number at the heated block average temperature of the fluid and the temperature at the elliptic cylinder to analyze the effect of Grashof number, Gr and Reynolds number, Re on the fluid flow and heat transfer in the cavity for the aforementioned Prandtl number, Pr. In view of obtained results, the following findings have been summarized: a. Grashof number, Gr has a great significant effect on the streamlines and isotherms at the three convective regimes, Buoyancy-induced vorex in the streamlines increased and thermal layer near the heated surface become thin and concentrated with increasing Reynolds number, Re. | 2,137.8 | 2017-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Ozone Layer Evolution in the Early 20th Century
: The ozone layer is well observed since the 1930s from the ground and, since the 1980s, by satellite-based instruments. The evolution of ozone in the past is important because of its dramatic influence on the biosphere and humans but has not been known for most of the time, except for some measurements of near-surface ozone since the end of the 19th century. This gap can be filled by either modeling or paleo reconstructions. Here, we address ozone layer evolution during the early 20th century. This period was very interesting due to a simultaneous increase in solar and anthropogenic activity, as well as an observed but not explained substantial global warming. For the study, we exploited the chemistry-climate model SOCOL-MPIOM driven by all known anthropogenic and natural forcing agents, as well as their combinations. We obtain a significant global scale increase in the total column ozone by up to 12 Dobson Units and an enhancement of about 20% of the near-surface ozone over the Northern Hemisphere. We conclude that the total column ozone changes during this period were mainly driven by enhanced solar ultra violet (UV) radiation, while near-surface ozone followed the evolution of anthropogenic ozone precursors. This finding can be used to constrain the solar forcing magnitude.
Introduction
The state of the ozone layer has recently attracted growing attention in connection with the profound reduction of the global total ozone content in the 1980s, and an understanding of the role of the ozone layer not only as a defender of life on Earth from the damaging effects of hard solar ultraviolet radiation, but also as a factor affecting the climate and biosphere in general [1,2]. The evaluation of the ozone layer is well covered since the 1930s from the ground and, since the 1980s, by space-based instruments [3]. However, predicting future ozone behavior requires an understanding of its evolution in the past, when the combination of anthropogenic and natural factors affecting the ozone layer was different from the present day. Information about the state of the ozone layer in the past has not been available for most of the time, except for some measurements of near-surface ozone at the end of the 19th century [4][5][6]. This gap can be filled either by modeling or reconstructions from different proxies. The modeling efforts are mostly aimed at understanding the ozone changes between short periods during the preindustrial and present times. These changes were driven by a strong influence of manmade halogen containing ozone-depleting substances (hODS) on stratospheric ozone and enhanced anthropogenic emissions of tropospheric ozone precursors [7][8][9][10][11][12][13][14].
The continuous evolution of the ozone layer from the preindustrial to the present, including the first half of the 20th century, has been studied with numerical [15] and statistical [16] models. The global and annual mean total column ozone (TOC) increase during the 1900 to 1950 period simulated in [15]
We use the free running model version prescribing only the quasi-biannual oscillation in tropical zonal wind, which is not reproduced at the applied vertical resolution. The solar radiation forcing was prescribed according to a reconstruction [28] in six spectral intervals of our radiation code as follows: 180 to 250 nm, 250 to 440 nm, 440 to 660 nm, 660 to 1190 nm, 1190 to 2380 nm, and 2380 to 4000 nm. The reconstruction of total solar irradiance (TSI) in [28] gave a significant increase of~1 W/m 2 per decade for the period from 1900 to 1950. This scenario gives a much larger solar irradiance forcing than the other available reconstructions [17] due to different assumption about the temporal variability of solar irradiance from the quiet Sun. The part of the solar heating rates missed in the ECHAM5 radiation code [29], and the photolysis rates, are calculated from the same solar irradiance reconstruction. Daily ionization rates by different precipitating energetic particles, as well as reactive nitrogen influx from the auroral regions, are prescribed according to recommendations for Coupled Model Intercomparison Project Phase 6 (CMIP6) [30]. The evolution of greenhouse gases, ozone-depleting substances, aerosol properties, and tropospheric ozone precursor emissions (CO and NO x ) are prescribed following [31]. The applied forcing is illustrated in [19].
With the CCM SOCOL3-MPIOM, we carried out seven ten-member ensemble model simulations covering the 1851 to 1940 period. The first experiment (referred hereafter as ALL) included all available observed and reconstructed forcing agents. To investigate the contributions of all considered forcings, we either fixed them at 1851 values or excluded them completely. For the second simulation, we eliminated the energetic particle precipitation (noEPP). The third experiment was driven by the same forcing as in ALL, but the solar irradiance in the 180 to 250 nm band, extra heating, and photolysis rates were fixed at 1851 values. This experiment, named fixUV (fixed solar ultraviolet), was designed to eliminate all forcings responsible for the initiation of the top-down mechanism [32]. For the fourth experiment (fixVIS/IR) we kept solar irradiance in the 250 to 4000 nm band at the 1851 level. This experiment helped to elucidate the role of a direct influence of solar irradiance on the troposphere and surface. For the fifth simulation (fixGHG), we use well-mixed greenhouses gases (CO 2 , N 2 O, and CH 4 ), ozone-depleting substances, and ozone precursor (NO x and CO) emissions fixed at the 1851 level. The sixth simulation (fixWMGHG) was identical to fixGHG, except that NO x and CO emissions were not fixed. Finally, the last simulation (noVOL) was performed prescribing the stratospheric aerosols at 1851 levels, which is typical for low volcanic activity. All experiments are listed in Table 1. The trend analysis was carried out for the ALL experiment applying a robust linear trend calculation for the 1910 to 1940 period with the nonparametric Sen-Mann-Kenndall trend significance test using a 90% confidence interval. We concentrated on this period to exclude the potential influence of a powerful tropical volcanic eruption in the 1902.
Analysis of the Ozone Layer Evolution Drivers
The evolution of the ozone layer is driven by a multitude of factors such as atmospheric circulation, transport, temperature, and concentration of ozone destroying reactive species [33], which in turn depend on the natural and anthropogenic forcing agents. The relative role of these drivers is difficult to elucidate from observational data, but the application of the model makes it possible.
Active Hydrogen Oxides
The active hydrogen oxides or HO x (OH + HO 2 ) catalytically destroy ozone in the atmosphere. They are more effective in the lower and upper stratosphere [34]. Hydrogen oxides are produced from water vapor via photolysis by solar UV in the Lyman-α line and oxygen Schumann-Runge bands, or by reaction with exited atomic oxygen [33]. The solar activity modulates both factors because exited atomic oxygen is also produced by ozone photolysis. Water vapor depends on atmospheric transport and methane abundance, which are both modulated by natural and anthropogenic activities. The annual and zonal mean HO x trend is illustrated in Figure 1.
The HO x trend is most pronounced (more than 10%) above 50 km and in the tropical troposphere. To identify the drivers of such changes we show the evolution of the HO x mixing ratio in Figure 2 for these two regions. Figure 2a shows that, in the upper atmosphere above 50 km, the influence of solar UV irradiance dominates, because for the fixUV model experiment the HO x mixing ratio does not change with time. For all other experiments, when the solar UV is not fixed, the HO x mixing ratio follows the behavior of solar activity. In the troposphere (Figure 2b), anthropogenic emissions play the most important role.
The HO x increase is explained by enhanced water vapor in the warmer climate [19] and increased tropospheric ozone (Section 3.2). The direct sink of hydroxyl caused by enhanced methane and CO abundances is less important in the free troposphere, however, it almost completely eliminates the HO x increase over the Northern Hemisphere, where the intensification of anthropogenic carbon monoxide emissions is most pronounced.
Atmosphere 2020, 11, 169 4 of 13 from water vapor via photolysis by solar UV in the Lyman-α line and oxygen Schumann-Runge bands, or by reaction with exited atomic oxygen [33]. The solar activity modulates both factors because exited atomic oxygen is also produced by ozone photolysis. Water vapor depends on atmospheric transport and methane abundance, which are both modulated by natural and anthropogenic activities. The annual and zonal mean HOx trend is illustrated in Figure 1. The HOx trend is most pronounced (more than 10%) above 50 km and in the tropical troposphere. To identify the drivers of such changes we show the evolution of the HOx mixing ratio in Figure 2 for these two regions.
(a) 0°, 75 km (b) 0°, 10 km For all other experiments, when the solar UV is not fixed, the HOx mixing ratio follows the behavior of solar activity. In the troposphere (Figure 2b), anthropogenic emissions play the most important role. The HOx increase is explained by enhanced water vapor in the warmer climate [19] and increased tropospheric ozone (Subsection 3.2). The direct sink of hydroxyl caused by enhanced methane and CO abundances is less important in the free troposphere, however, it almost completely eliminates the HOx increase over the Northern Hemisphere, where the intensification of anthropogenic carbon monoxide emissions is most pronounced.
Nitrogen Oxides
Nitrogen oxides or NOy (N + NO + NO2 + HNO3 + HNO4 + 2*N2O5) participate in catalytic ozone loss in the atmosphere. They are more effective in the middle stratosphere [34]. Nitrogen oxides are produced mostly from the N2O reaction with exited atomic oxygen [33]. The precipitating energetic particles also produce NOy over high latitudes during the winter season [35]. Solar activity modulates both factors because the concentration of exited atomic oxygen depends on ozone photolysis, and energetic particle precipitation depends on the solar wind. The main stratospheric loss of NOy is the cannibalistic reaction N + NO = N2 + O [33] driven by NO photolysis. The tropospheric NOy level strongly depends on anthropogenic activity [10]. The annual and zonal mean NOy trend is illustrated
Nitrogen Oxides
Nitrogen oxides or NO y (N + NO + NO 2 + HNO 3 + HNO 4 + 2*N 2 O 5 ) participate in catalytic ozone loss in the atmosphere. They are more effective in the middle stratosphere [34]. Nitrogen oxides are produced mostly from the N 2 O reaction with exited atomic oxygen [33]. The precipitating energetic particles also produce NO y over high latitudes during the winter season [35]. Solar activity modulates both factors because the concentration of exited atomic oxygen depends on ozone photolysis, and energetic particle precipitation depends on the solar wind. The main stratospheric loss of NO y is the cannibalistic reaction N + NO = N 2 + O [33] driven by NO photolysis. The tropospheric NO y level strongly depends on anthropogenic activity [10]. The annual and zonal mean NO y trend is illustrated in Figure 3. The NOy trend is most pronounced in the stratosphere, free northern troposphere, and polar mesosphere. The evolution of the NOy mixing ratio is illustrated in Figure 4 for two above-mentioned regions to identify the responsible forcing. demonstrates that the influence of solar UV irradiance dominates in the stratosphere, because without the UV forcing the NOy mixing ratio only changes slightly with time. When the solar UV forcing is switched on, the behavior of the NOy mixing ratio resembles the solar activity evolution due to the NO photolysis modulation by the solar activity. In the troposphere, the anthropogenic emissions of NO and NO2 (NOx) play the most important role leading to the substantial (by up to 90 pptv) NOy increase. The results of the model run with fixed NOx emissions, shown by the green line in Figure 4b, demonstrate the absence of detectable NOy evolution when the anthropogenic emissions of NO and NO2 (NOx) are fixed. Weak changes in NOy after 1930 are related to a slower increase in anthropogenic NOx emissions [31]. The NOy increase in the mesosphere is fully defined by the increased intensity of energetic particle precipitation caused by stronger solar and geomagnetic activity (not shown). The NO y trend is most pronounced in the stratosphere, free northern troposphere, and polar mesosphere. The evolution of the NO y mixing ratio is illustrated in Figure 4 for two above-mentioned regions to identify the responsible forcing. The NOy trend is most pronounced in the stratosphere, free northern troposphere, and polar mesosphere. The evolution of the NOy mixing ratio is illustrated in Figure 4 for two above-mentioned regions to identify the responsible forcing. demonstrates that the influence of solar UV irradiance dominates in the stratosphere, because without the UV forcing the NOy mixing ratio only changes slightly with time. When the solar UV forcing is switched on, the behavior of the NOy mixing ratio resembles the solar activity evolution due to the NO photolysis modulation by the solar activity. In the troposphere, the anthropogenic emissions of NO and NO2 (NOx) play the most important role leading to the substantial (by up to 90 pptv) NOy increase. The results of the model run with fixed NOx emissions, shown by the green line in Figure 4b, demonstrate the absence of detectable NOy evolution when the anthropogenic emissions of NO and NO2 (NOx) are fixed. Weak changes in NOy after 1930 are related to a slower increase in anthropogenic NOx emissions [31]. The NOy increase in the mesosphere is fully defined by the increased intensity of energetic particle precipitation caused by stronger solar and geomagnetic activity (not shown). demonstrates that the influence of solar UV irradiance dominates in the stratosphere, because without the UV forcing the NO y mixing ratio only changes slightly with time. When the solar UV forcing is switched on, the behavior of the NO y mixing ratio resembles the solar activity evolution due to the NO photolysis modulation by the solar activity. In the troposphere, the anthropogenic emissions of NO and NO 2 (NO x ) play the most important role leading to the substantial (by up to 90 pptv) NO y increase. The results of the model run with fixed NOx emissions, shown by the green line in Figure 4b, demonstrate the absence of detectable NO y evolution when the anthropogenic emissions of NO and NO 2 (NO x ) are fixed. Weak changes in NO y after 1930 are related to a slower increase in anthropogenic NOx emissions [31]. The NO y increase in the mesosphere is fully defined by the Atmosphere 2020, 11, 169 6 of 13 increased intensity of energetic particle precipitation caused by stronger solar and geomagnetic activity (not shown).
Temperature
Temperature is important for the processes regulating ozone balance, because kinetic reaction rates are temperature dependent [33]. Atmospheric temperature depends on a multitude of physical and chemical processes driven by both natural and anthropogenic factors. Figure 5 demonstrates temperature changes during the 1910 to 1940 period.
Temperature
Temperature is important for the processes regulating ozone balance, because kinetic reaction rates are temperature dependent [33]. Atmospheric temperature depends on a multitude of physical and chemical processes driven by both natural and anthropogenic factors. Figure 5 demonstrates temperature changes during the 1910 to 1940 period. A warming trend is visible in almost the entire atmosphere but is small and not statistically significant in the middle stratosphere. The troposphere becomes warmer, in 1940, by up to 0.6 K as compared with in 1910. The obtained surface warming was described in detail by [19]. They obtained about 0.4 K global mean warming defined mostly by well-mixed greenhouse gases (about 50%) and solar irradiance in the visible and near infrared parts of the spectrum (35%). Some contribution (about 15%) comes from the tropospheric ozone increase caused by ozone precursor emissions. Stronger warming (1 to 2 K) appears in the upper stratosphere and mesosphere. Two warming spots in the upper stratosphere over the northern and southern tropics have the same origin as in the mesosphere, but of a smaller magnitude. Over the equator the temperature changes are smaller and have only marginal significance because the solar UV forcing is less efficient in this area and does not completely dominate over the greenhouse gas forcing. Figure 6 illustrates the contribution of different factors to the temperature evolution during the period considered. Warming in the mesosphere, for the case with all drivers switched on (black line), is formed by competition between solar UV irradiance heating and cooling by well-mixed greenhouse gases with a small contribution from solar irradiance in the visible spectral region (light blue line in Figure 6a). When the solar UV irradiance is fixed (magenta line in Figure 6a), greenhouse gases cools mesosphere down by up to 2 K. In the absence of greenhouse gas changes (green and orange curves in Figure 6a), the heating from the absorption of enhanced solar UV irradiance leads to a warming of the mesosphere by up to 3 K. In addition, variable solar UV irradiance effects are visible in decadal scale variability with the magnitude of 0.75 K from the solar activity cycle. A warming trend is visible in almost the entire atmosphere but is small and not statistically significant in the middle stratosphere. The troposphere becomes warmer, in 1940, by up to 0.6 K as compared with in 1910. The obtained surface warming was described in detail by [19]. They obtained about 0.4 K global mean warming defined mostly by well-mixed greenhouse gases (about 50%) and solar irradiance in the visible and near infrared parts of the spectrum (35%). Some contribution (about 15%) comes from the tropospheric ozone increase caused by ozone precursor emissions. Stronger warming (1 to 2 K) appears in the upper stratosphere and mesosphere. Two warming spots in the upper stratosphere over the northern and southern tropics have the same origin as in the mesosphere, but of a smaller magnitude. Over the equator the temperature changes are smaller and have only marginal significance because the solar UV forcing is less efficient in this area and does not completely dominate over the greenhouse gas forcing. Figure 6 illustrates the contribution of different factors to the temperature evolution during the period considered. Warming in the mesosphere, for the case with all drivers switched on (black line), is formed by competition between solar UV irradiance heating and cooling by well-mixed greenhouse gases with a small contribution from solar irradiance in the visible spectral region (light blue line in Figure 6a). When the solar UV irradiance is fixed (magenta line in Figure 6a), greenhouse gases cools mesosphere down by up to 2 K. In the absence of greenhouse gas changes (green and orange curves in Figure 6a), the heating from the absorption of enhanced solar UV irradiance leads to a warming of the mesosphere by up to 3 K. In addition, variable solar UV irradiance effects are visible in decadal scale variability with the magnitude of 0.75 K from the solar activity cycle. In the middle tropical troposphere, the analysis is rather complicated. Except the dominating contribution from tropospheric ozone precursors over well-mixed greenhouse gases (compare orange and green lines in Figure 6c), it is difficult to identify the most important factors.
Analysis of the Ozone Layer Evolution
The ozone changes depend on all drivers considered in Subsection 3.1, as well as on the transport processes related to continuous climate warming during the considered period [19]. Figure 7 demonstrates the ozone changes between 1910 and 1940. The obtained results allow three major areas with different ozone behavior to be identified as follows: the mesosphere, the middle stratosphere, and the troposphere. Ozone depletion is visible in the entire mesosphere, with the maxima in the polar regions of up to 10%. The opposite effect occurs in the middle stratosphere and troposphere, where the ozone concentration increases up to 5% and 20%, respectively. In the lower and upper stratosphere ozone trends are small and statistically insignificant. In the middle tropical troposphere, the analysis is rather complicated. Except the dominating contribution from tropospheric ozone precursors over well-mixed greenhouse gases (compare orange and green lines in Figure 6c), it is difficult to identify the most important factors.
Analysis of the Ozone Layer Evolution
The ozone changes depend on all drivers considered in Section 3.1, as well as on the transport processes related to continuous climate warming during the considered period [19]. Figure 7 demonstrates the ozone changes between 1910 and 1940. The obtained results allow three major areas with different ozone behavior to be identified as follows: the mesosphere, the middle stratosphere, and the troposphere. Ozone depletion is visible in the entire mesosphere, with the maxima in the polar regions of up to 10%. The opposite effect occurs in the middle stratosphere and troposphere, where the ozone concentration increases up to 5% and 20%, respectively. In the lower and upper stratosphere ozone trends are small and statistically insignificant. The time evolution of the annual zonal mean ozone mixing ratio and the contribution of different forcings for key altitude and latitude areas are shown in Figure 8. In the southern polar upper mesosphere (Figure 8a), the ozone evolution is reversed relative to solar activity for all experiments but has different magnitudes. The greatest contribution to the negative ozone trend in the mesosphere is related to the energetic particles, which produce more reactive hydrogen and nitrogen oxides during high solar and geomagnetic activity. The other drivers do not significantly affect the ozone evolution in this atmospheric region. The solar cycle is visible even in the absence of UV variability because the energetic particles forcing also has a decadal scale variability. The time evolution of the annual zonal mean ozone mixing ratio and the contribution of different forcings for key altitude and latitude areas are shown in Figure 8. In the southern polar upper mesosphere (Figure 8a), the ozone evolution is reversed relative to solar activity for all experiments but has different magnitudes. The greatest contribution to the negative ozone trend in the mesosphere is related to the energetic particles, which produce more reactive hydrogen and nitrogen oxides during high solar and geomagnetic activity. The other drivers do not significantly affect the ozone evolution in this atmospheric region. The solar cycle is visible even in the absence of UV variability because the energetic particles forcing also has a decadal scale variability. The time evolution of the annual zonal mean ozone mixing ratio and the contribution of different forcings for key altitude and latitude areas are shown in Figure 8. In the southern polar upper mesosphere (Figure 8a), the ozone evolution is reversed relative to solar activity for all experiments but has different magnitudes. The greatest contribution to the negative ozone trend in the mesosphere is related to the energetic particles, which produce more reactive hydrogen and nitrogen oxides during high solar and geomagnetic activity. The other drivers do not significantly affect the ozone evolution in this atmospheric region. The solar cycle is visible even in the absence of UV variability because the energetic particles forcing also has a decadal scale variability. In the tropical mesosphere (Figure 8b) all drivers of ozone evolution can be separated into three groups. For the fixUV case, the ozone mixing ratio steadily decreases with time due to an increase in HO x (Figure 2) caused by an increase in the production of water vapor from enhanced methane emission. The cooling of the mesosphere caused by the increase in well-mixed greenhouse gases (see the discussion of Figure 6a) suppresses the intensity of the ozone destruction cycles and leads to a small ozone increase, however, it cannot compensate for the influence of HO x . The lack of a variable solar UV irradiance also explains the absence of cyclical ozone behavior which is visible for all other cases. For the fixGHG and fixWMGHG cases, the gradual ozone increase is driven mostly by the solar UV changes, which more than compensate for the HO x increase due to the enhanced H 2 O photolysis ( Figure 2). Thus, the ozone evolution of the ALL experiment is formed from the competition between a greenhouse gas induced HO x increase and a solar UV irradiance enhancement.
In the middle stratosphere over southern middle latitudes (Figure 8c), variations in solar UV irradiance play a dominant role that lead to a substantial increase in the ozone mixing ratio and almost constant values in the case when solar UV irradiance is fixed (case fixUV). In the lower troposphere over northern latitudes (Figure 8d), the ozone evolution is driven by tropospheric ozone precursors (fixGHG case) and, to a lesser extent, by the well-mixed greenhouse gases (fixWMGHG case). In the latter case, the ozone increase is related to enhanced methane emissions. The geographical distribution of the changes in annual mean total ozone from 1910 to 1940 driven by all considered forcing agents is illustrated in Figure 9. The simulated total ozone changes are positive and statistically significant all over the globe except in the western Pacific. In the tropical and high latitude belts, the changes are about 6 DU. More pronounced total column ozone trends are found over the middle latitudes in both hemispheres. There, the ozone change from 1910 to 1940.
Atmosphere 2020, 11, 169 9 of 13 In the tropical mesosphere (Figure 8b) all drivers of ozone evolution can be separated into three groups. For the fixUV case, the ozone mixing ratio steadily decreases with time due to an increase in HOx (Figure 2) caused by an increase in the production of water vapor from enhanced methane emission. The cooling of the mesosphere caused by the increase in well-mixed greenhouse gases (see the discussion of Figure 6a) suppresses the intensity of the ozone destruction cycles and leads to a small ozone increase, however, it cannot compensate for the influence of HOx. The lack of a variable solar UV irradiance also explains the absence of cyclical ozone behavior which is visible for all other cases. For the fixGHG and fixWMGHG cases, the gradual ozone increase is driven mostly by the solar UV changes, which more than compensate for the HOx increase due to the enhanced H2O photolysis ( Figure 2). Thus, the ozone evolution of the ALL experiment is formed from the competition between a greenhouse gas induced HOx increase and a solar UV irradiance enhancement.
In the middle stratosphere over southern middle latitudes (Figure 8c), variations in solar UV irradiance play a dominant role that lead to a substantial increase in the ozone mixing ratio and almost constant values in the case when solar UV irradiance is fixed (case fixUV). In the lower troposphere over northern latitudes (Figure 8d), the ozone evolution is driven by tropospheric ozone precursors (fixGHG case) and, to a lesser extent, by the well-mixed greenhouse gases (fixWMGHG case). In the latter case, the ozone increase is related to enhanced methane emissions. The geographical distribution of the changes in annual mean total ozone from 1910 to 1940 driven by all considered forcing agents is illustrated in Figure 9. The simulated total ozone changes are positive and statistically significant all over the globe except in the western Pacific. In the tropical and high latitude belts, the changes are about 6 DU. More pronounced total column ozone trends are found over the middle latitudes in both hemispheres. There, the ozone change from 1910 to 1940. The 1910 to 1940 time period from the run ALL, the area where the trend is significant at the 90% or better level, is marked by color shading and reaches 12 DU (about 4%) over North America, as well as over the northern and southern parts of the Pacific Ocean. Over Europe, the total ozone increase is slightly smaller but still exceeds 10 DU. These areas are typical locations of spring maxima in the total column ozone distribution caused by a spring-time acceleration of the meridional circulation transporting ozone down from its production area. Therefore, these changes in the total column ozone can be largely attributed to an increase in stratospheric production by enhanced solar UV irradiance. The contribution from tropospheric ozone is small because about 90% of the total column ozone is located in the stratosphere. The 1910 to 1940 time period from the run ALL, the area where the trend is significant at the 90% or better level, is marked by color shading and reaches 12 DU (about 4%) over North America, as well as over the northern and southern parts of the Pacific Ocean. Over Europe, the total ozone increase is slightly smaller but still exceeds 10 DU. These areas are typical locations of spring maxima in the total column ozone distribution caused by a spring-time acceleration of the meridional circulation transporting ozone down from its production area. Therefore, these changes in the total column ozone can be largely attributed to an increase in stratospheric production by enhanced solar UV irradiance. The contribution from tropospheric ozone is small because about 90% of the total column ozone is located in the stratosphere.
Discussion
Our results show the importance of an accurate prescription of both natural and anthropogenic forcings to reconstruct past climate and ozone layer trends. The estimate of the total column ozone trend during the first half of the 20th century, obtained by [15] using mostly anthropogenic forcing, does not exceed 1.0 DU. A consideration of the natural forcing by [16] led to the much higher value of almost 3.0 DU. In the present work, using new estimates for the solar forcing from [28], we obtained a three times stronger effect reaching almost 8 DU for the global annual mean value, and 12 DU over the northern and southern middle latitudes. The enhanced magnitude of the total column ozone changes is explained solely by the applied strong solar forcing. Thus, a poor understanding of the solar forcing automatically leads to large uncertainty in the simulated total ozone evolution. However, a high sensitivity of the total column ozone to solar UV forcing can be used to resolve long standing issues about the absolute value of the magnitude of past solar irradiance variability discussed recently by [17]. This problem can potentially be solved by comparing the simulated total ozone behavior with direct measurements or proxy-based reconstructions. Unfortunately, direct comparisons of simulated and observed total column ozone trends is not possible from 1910 to 1940 because the observations are available only from 1926 [36]. The proxy-based reconstructions of total ozone are not available at the moment, but there is some progress in this direction. One possible approach is to retrieve surface UV-B radiation level at the surface from the analysis of UV-B absorbing objects in the plants or spores [37][38][39][40]. However, it is not clear how to separate the influence of stratospheric ozone variations from the spectral solar irradiance variability, which is not well constraint on long-term time scales [17]. The simulated changes in the total column ozone are mainly caused by the ozone evolution in the middle stratosphere driven by steadily growing extraterrestrial solar UV irradiance. The simulated shape of the stratospheric ozone increase, shown in Figure 7, resembles the ozone response to the solar UV irradiance enhancement during the recent period [40] only in the tropical middle stratosphere (between 30 and 40 km). The elevated secondary ozone enhancement in the upper stratosphere over midlatitudes obtained by [40] from observations and model simulations are not visible in our results. The difference can be explained by different circulation regimes in these two periods. A possible influence from the circulation is illustrated in [40] from a comparison of free running and specified dynamics model runs. In the case of a free running model, the upper stratospheric spots of enhanced ozone are much less pronounced in comparison with specified dynamics runs. This difference should be related to different circulation fields, because the treatment of chemical and transport processes is identical in both model versions.
An accurate knowledge of the tropospheric ozone evolution is also important, because it can play an important role in the explanation of the early 20th century warming (ETCW). It was shown in [19] that tropospheric ozone precursors (CO and NO x ) are the third most important factor influencing climate during this period, after greenhouse gases, solar visible, and infrared radiation. Therefore, an underestimation of the CO and NO x emission intensification can explain an underestimation of the ETCW magnitude in many climate models [18]. The simulated annual mean tropospheric ozone mixing ratio in 1910 varies between 15 and 30 ppb over the northern mid-latitudes depending on the location and season, which overestimates 10 to 15 ppb obtained from direct surface ozone measurements at different locations in central Europe [6,10]. It should be noted, however, that these historical measurements probably underestimate ozone mixing ratios due to interference from water vapor and other species [41]. The subsequent ozone increase (Figures 7 and 8d) of about 4 ppb (~15%) during 1910 to 1940, in our experiment, is close to the 11% increase simulated by [10], and 16% obtained from direct ozone measurements at mountain sites [5]. Our simulations of the rate of the tropospheric ozone increase agree with the isotope analysis of air trapped in the ice and snow, as well as with results from the GISS-E2.1 model [41].
The presented analysis can be extended to cover trends of halogenated species and atmospheric dynamics. We do not expect substantial contributions from the halogenated species because of their very small (more than six times as compare with present day) abundance and absence of trends during the considered period. Dynamical changes caused by climate warming could consist of an altered tropopause height, state of the polar vortices, or Brewer-Dobson circulation (BDC) intensity. However, the analysis of their trends is more difficult because the response of dynamical properties has a low signal-to-noise ratio. For example, the lower stratospheric ozone depletion in the tropical area caused by enhanced BDC in a warmer climate, and clearly visible in the simulation of the future climate [42], is not significant in our case (Figure 7). The same can be said about the dipole-like polar temperature changes which characterize polar vortex strengthening ( Figure 5). Probably, these changes should be examined in the future on seasonal or even monthly time scales.
Conclusions
In this study of ozone layer evolution during the early 20th century, we exploited the chemistry-climate model SOCOL-MPIOM driven by all known anthropogenic and natural forcing agents, as well as their combinations. Using results from seven ten-member ensemble runs, we demonstrate the time evolution of the main factors responsible for t ozone production and loss from the ground to the mesopause. We demonstrate that in the mesosphere the ozone mixing ratio trend during the 1910 to 1940 period is negative and driven by energetic particles, incoming solar UV radiation, and greenhouse gases. In the middle stratosphere, the ozone increased from 1910 to 1940 by up to 5%, mostly due to the enhancement of solar UV radiation.
Our calculations emphasize the dominant role of anthropogenic factors in the troposphere, where an increase in CO and NO x emissions leads to an increase in ozone mixing ratios by up to 15%. The general agreement of the increase in tropospheric ozone with previously published estimates allows us to conclude that a climate influence from this forcing is rather well constrained and cannot explain the underprediction of the ETCW magnitude by many climate models [18,19].
We obtained a significant global scale increase in the total column ozone exceeding 12 Dobson Units over northern and southern middle latitudes. We conclude that total column ozone changes during this period were driven mostly by an enhancement in solar UV radiation. Our simulation results can be used to constrain the solar forcing magnitude if past ozone or solar UV-B radiation becomes available. | 8,332.8 | 2020-02-06T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
The Resistance of Heat-Modified Fast Growing Woods Against Decay Fungi
Fast growing woods from plantations forest generally have low quality and require improvement to resist degrading organisms. This study aimed to evaluate the resistance of heat-modified sengon, jabon, mangium, and short rotation teak woods against decay fungi. Heat treatment was applied at two different temperatures (150 °C and 180 °C) and for three different times (0, 2, and 6 hours). The decay resistance test used white rot (Schizophyllum commune Fr) and brown rot (Tyromyces palustris) fungibased on modified SNI 01-7207-2014 standard. The chemical analysis of heat-modified wood used Gas Chromatography-Mass Spectrometry. The results showed that the white rot fungal resistance was significantly affected by the interaction of wood species, temperature and period of heating, while the brown rot fungal resistance was significantly affected by the interaction of wood species and heating temperature. Heating at 180 °C for 6 hours increased the fungal resistance of sengon, jabon and mangium woods. However, the fungal resistance of teak wood improved by heating at 150 °C for 6 hours. The durability improvement of the heat-modified woods were suspected due to the appearance or increase of antifungal substances such as benzoic acid, sinapaldehyde, vanillin and 2-methylantraquinone.
Introduction
The woods supply from plantation/ community forests are expected to overcome the increasing demand of wood mainly by industries. Many woods from community forests that are classified as fast growing species and have low quality, especially in terms of strength, durability, and dimensional stability. Therefore, the use of fast growing woods requires special treatment to fulfil the quality specifications. Indonesia is a tropical country that has high risk of biodeterioration in building wood components, including damage by decay fungi. According to Priadi, decay fungi generally attack wooden components that are exposed to wetting, such as by direct or indirect rainwater [1].
Heat treatment is one of the most widely developed wood modification technologies to improve wood quality. Heat treatment of mahogany at 90, 120, and 150 °C improved dimensional stability and reduced hygroscopicity of wood, but made darker wood color and reduced its mechanical properties [2]. Heat treatment at temperatures of 120, 150, and 180 °C decreased strength, caused discoloration, and decreased the density of sengon, jabon, and mangium woods [3,4]. Heat treatment increased the resistance of spruce, pine, fir, and poplar woods against decay fungi, while the increase in termite resistance was not significant [5]. Heat treatment is easy to do, relatively affordable and environmentally friendly, thus avoiding the use of chemicals to preserve wood [6]. (2) where: ρ = wood density (g cm -3 ), B = the weight of wood (gram), V = the volume of wood (cm 3 ), Δρ = the change of wood density (%), ρ0 = wood density before treatment (g cm -3 ), ρ1 = wood density after treatment (g cm -3 ).
Wood powder sample (40-60 mesh) as much as ±5 g for chemical analyses were prepared from the tested density samples represented the control and heated treatments. Wood powder (100 mg) was extracted with 1 ml of 10 ppm methanol at 45 C for 2 hours. Chemical substance analysis was carried out using GC Agilent Technologies 7890A and MS Agilent Technologies 5975C with four main components, namely oven, front inlet, column, and detector. The sample was injected into the oven with a temperature of 290 C and then forwarded to the front inlet (GC) split mode with an initial temperature of 290 C, a pressure of 13.887 psi, and a flow rate of 33.7 ml minute -1 , for two minutes. The GC system was connected to a Mass Spectrometry (MS) equipped with a fused capillary silica column of 30 mm × 0.25 mm × 0.25 m (length × diameter × film). The components were separated using helium as carrier gas at a constant flow of 1 ml min -1 and flowed into the detector. The difference in mass and conductivity were then defined as the mass spectrum. The interpretation of the GCMS mass spectrum was compared with the component spectra stored in the W10N14.L and wiley7n.l databases. The test results were then analyzed using GCMS Data Analysis software.
The wood resistance against decay fungi was tested based on modified SNI 01-7207-2014 [7]. The wood samples were 5 cm × 2.5 cm × 1.3 cm (W × H × R). White rot fungus (Schizophyllum commune Fr) and brown rot fungus (Tyromyces palustris) were prepared on Potato Dextrose Agar (PDA) media. The composition to produce 1 liter PDA media consisted of 200 g of potato wedges, 20 g of agar, 20 g of dextrose, and 250 mg of the antibiotic chloramphenicol. The PDA media was sterilized in an autoclave at a temperature of 121 C and a pressure of 1.02 atm for 20 minutes. The inoculation of the test fungi was carried out aseptically in laminar air flow follow. Then the incubation was held at room temperature (28 C) until the surface of the PDA media covered with fungal mycelium (±14 days).
The wood samples were oven dried at 103±2 C to obtain a constant weight (W1). Then the wood samples were exposed to the test fungal culture at room temperature (28 C). After 12 weeks of fungal exposure, the wood samples were removed from the fungal culture, and cleaned from the mycelium. Furthermore, the wood samples were oven at 103±2 C to a constant weight and weight (W2). The weight loss of wood due to fungal attack was calculated using Equation 3.
where: WL = wood weight loss (%), W 1 = dry weight of wood before test (g), W 2 = dry weight of wood after test (g).
Data analyses used Microsoft Excel 2013 and SPSS Statistics 17.0 applications. The effects of wood species, temperature and time of heating on wood density and resistance against decay fungi used a Factorial Completely Randomized Design with three factors [8]. Factor A was 4 four wood species (sengon, jabon, mangium, and teak), factor B was 2 heating temperatures (150 C and 180 C), and factor C was 3 heating time (0, 2, and 6 hours). Then the Duncan test was performed when the analysis of variance (ANOVA) showed a significant effect at 95% confidence interval.
Wood density
The results showed that teak wood had the highest density of 0.76±0.06 g cm -3 , followed by the lower density of mangium, sengon and jabon with the values of 0.64±0.05 g cm -3 , 0.35±0.02 g cm -3 , and 0.28±0.02 g cm -3 respectively. Heat treatment generally reduced the density of the four wood species, especially at 180 C (figure 1). The decrease in wood density occurred due to a decrease in the equilibrium moisture content of the wood, the evaporation of wood extractive substances, and the degradation of wood components, especially hemicellulose [9, 10].
The increase of heating time from 2 to 6 hours generally caused the decrease in wood density. This study was in accordance with Guller and Karlinasari et al that the value of wood density decreased with the increase of temperature and time of heating [11,3].
The analysis of variance (ANAVA) at a 95% confidence interval, showed that the interaction between wood species and heating temperature significantly affected the density change of wood. Based on the following Duncan test, the density decrease of heated jabon and mangium woods at 180 C was significantly higher than that of the heated woods at 150 C. The highest density decrease occurred in mangium wood, while the least density change occurred in jabon wood. This indicated the volume shrinkage of mangium wood was higher than that of the other tested woods. Idris et al stated that mangium wood has a shrinkage value from wet to oven dry of 2.68% and 7.4% in radial and tangential directions respectively [12].
Chemical analyses
The chemical analysis using GCMS (table 1) showed the appearance and increase of some antifungal compounds in the heated four woods species. Esteves et al reported that heat treatment caused the formation of new extractable compounds that occur due to hemicellulose degradation [13]. Some extractives disappeared from the wood, such as glycerol, oleic acid, linoleic acid, and b-sitosterol. The new compounds formed in heated Eucalyptus globulus wood were monosaccharides, hemicellulose dehydration products, and lignin derivative compounds such as syringaldehyde, syringic acid, and sinapaldehyde. According to Candelier et al the decrease in extractive content can be caused by the evaporation of low molecular weight compounds produced during heat treatment [14]. Synapaldehyde appeared in the heated sengon, jabon, mangium and teak, while vanillin was found in heated jabon and mangium woods. The presence of sinapaldehyde and vanillin could affect the durability of the heated woods compared to the unheated ones. Benzoic acid only appeared in heated jabon wood. The proportion of tectoquinone (2-methylanthraquinone) in teak wood increased after heat treatment. The percentage increase of compound in heated wood could be caused by the degradation of other components in the wood. According to Xue et al furfuraldehyde is a hemicellulose decomposition product whose concentration increased rapidly with increasing heating temperature of poplar wood [15].
Some substances in the tested woods decreased and increased after heat treatment. The content of vanillin in mangium wood increased after heat treatment. Syringaldehyde and vanillin are phenolic compounds belonging to the aldehyde group. This is in line with Xue et al that increasing temperature up to 180 C increased aldehyde production [15]. In addition, Bourgois et al stated that the chemical changes in heated wood was affected by the heating time and temperature [16].
The resistance against white rot fungi
Wood decay can be indicated by the weight loss of wood. The higher wood weight loss, the lower resistance of wood against fungus. This research showed that heat treatment reduced the weight loss of some woods due to white rot fungus (S. commune) attack (figure 2). The weight loss of heated sengon and teak woods at 150 C and heated sengon and jabon wood at 180 C were less than that of unheated woods (controls). This fact was supported by the chemical analysis (table 1), which found some substances in the heated wood that can protect against decay fungi. For example, the proportion of tectoquinone (2-methylantraquinone) in teak increased after heat treatment. The analysis of variance (ANAVA) at 95% confidence interval showed that the interaction between wood species, temperature, and heating time had a significant effect on the wood weight loss by white rot fungal attack. Duncan's further test showed that the weight loss of mangium wood was not significantly different from that of teak wood, but the both values were significantly lower than that of sengon and jabon woods. The weight loss difference between those woods was related to the differences in the types and quantities of antifungal substances in the wood samples (table 1).
The Duncan's test results also showed that the weight loss values of heated sengon and jabon woods at 180 C for 6 hours were significantly less than that of the unheated samples as can be seen in figure 2. Based on the results of chemical analysis using GCMS, synapaldehyde content appeared in the heated sengon and jabon woods. The content of benzoic acid was also found in heated jabon wood. According to Munib, benzoic acid can inhibit the growth of microorganisms [17]. Kamden et al also stated that chemical components in wood can be toxic to decay fungi and prevent its growth [18]. The wood weight loss could be affected by the toxicity and quantity of substances in wood and the capability of fungi. This research showed that the presence of tecctoquinone caused better resistance of teak wood than sengon and jabon woods. Heat treatment could affect the extractive compounds in teak and mangium woods and affect differently to the attack of fungi. However, the heat treatment at 180 C did not increase the resistance of mangium and teak woods against S. commune fungi.
The resistance against brown rot fungi
Weight loss occurred in all woods fed to brown rot fungi (T. palustris). The weight loss of teak wood was the lowest. Mangium wood was also had lower weight loss than that of sengon and jabon woods (figure 3). Teak wood contained tectoquinone (2-methylantraquinone). Haupt et al stated that tectoquinone was identified as a fungal growth inhibitor compound [19]. This research showed that heat treatment reduced the weight loss of some wood samples, which indicated better wood resistance against brown rot fungus (T. palustris). Heated jabon and mangium woods at 150 C and 180 C had lower weight loss than the control samples. This is presumably due to the changes in hemicellulose during heat treatment. As stated by Boonstra et al that changes in wood (especially hemicellulose) during heat treatment can increase the resistance against brown rot fungus C. puteana [10]. In addition, according to Esteves and Pereira, hemicellulose is the most susceptible wood components to heat degradation [5]. Weiland and Guyonnet reported that heat treatment on pine and beech woods caused the transformation of hemicellulose from hydrophilic and easily digestible to hydrophobic [20].
The analysis of variance at the 95% confidence interval showed that there was a significant effect of interaction between wood species and heating temperature on wood weight loss by brown rot fungus. The Duncan's test resulted that heating at 180 C resulted in lower weight loss values of jabon and mangium woods than that of the control samples. In addition, the weight loss of heated jabon wood at 180 C was significantly lower than that of heated jabon wood at 150 C. The resistance improvement of heated jabon and mangium woods against brown rot fungus was also in accordance with Boonstra et al that heat treatment at 180 C for 6 hours increased wood durability against decay fungi [10]. This improvement in fungal resistance was supported by the chemical analysis. Heated jabon contained some antifungal substance such as benzoic acid, vanillin, and sinapaldehyde, while heated mangium wood contained vanillin, sinapaldehyde, and 2-methylantraquinone (table 1). Sinapaldehyde is a group of aldehydes that act as antifungals [21]. Vanillin is known as a compound that can prevent or slow down the growth of fungi [22]. Based on figures 2 and 3, it can be seen that the value of wood weight loss due to brown rot fungus (T. palustris) attack was higher than that due to white rot fungus (S. commune) attack. According to Highley and Illman, brown rot fungus is one of very destructive organisms on wood [23]. In some previous research were also reported that wood decay by T. palustris was higher than that by S. commune [24,25]. Green and Highley stated that white rot fungi reduced the degree of polymerization (DP) of holocellulose gradually during the decay process, while brown rot fungi was able to rapidly depolymerize holocellulose [26]. In addition, brown rot fungi decreased wood strength faster than white rot fungi which indicated a higher degree of holocellulose depolymerization by brown rot fungi.
Conclusion
The weight loss of wood by white rot fungus (S. commune) was significantly affected by the interaction of wood species, temperature and heating time, while the weight loss of wood by brown rot fungus (T. palustris) was significantly affected by the interaction of wood species and heating temperature. Heated sengon and jabon woods at 180 C for 6 hours were significantly more resistant against white rot fungus attack compared to the control and other treatments. Heated jabon and mangium woods at 180 C for 6 hours were also significantly more resistant against brown rot fungus attack than the control and other treatments. The heat treatment of 150 C for 6 hours slightly increased the resistance of teak wood from white rot fungi and brown rot fungi. The best heat treatment for sengon, jabon and mangium wood was heating at 180 C for 6 hours, while for teak was heating at 150 C for 6 hours.
The resistance increase of heated wood against decay fungi was thought to be related with the appearance or increase of some antifungal substances such as benzoic acid, sinapaldehyde, vanillin, and tectoquinone (2-methylantraquinone). | 3,742.2 | 2021-11-01T00:00:00.000 | [
"Materials Science"
] |
WNK1 Regulates Phosphorylation of Cation-Chloride-coupled Cotransporters via the STE20-related Kinases, SPAK and OSR1*
The WNK1 and WNK4 genes have been found to be mutated in some patients with hyperkalemia and hypertension caused by pseudohypoaldosteronism type II. The clue to the pathophysiology of pseudohypoaldosteronism type II was its striking therapeutic response to thiazide diuretics, which are known to block the sodium chloride cotransporter (NCC). Although this suggests a role for WNK1 in hypertension, the precise molecular mechanisms are largely unknown. Here we have shown that WNK1 phosphorylates and regulates the STE20-related kinases, Ste20-related proline-alanine-rich kinase (SPAK) and oxidative stress response 1 (OSR1). WNK1 was observed to phosphorylate the evolutionary conserved serine residue located outside the kinase domains of SPAK and OSR1, and mutation of the OSR1 serine residue caused enhanced OSR1 kinase activity. In addition, hypotonic stress was shown to activate SPAK and OSR1 and induce phosphorylation of the conserved OSR1 serine residue, suggesting that WNK1 may be an activator of the SPAK and OSR1 kinases. Moreover, SPAK and OSR1 were found to directly phosphorylate the N-terminal regulatory regions of cation-chloride-coupled cotransporters including NKCC1, NKCC2, and NCC. Phosphorylation of NCC was induced by hypotonic stress in cells. These results suggested that WNK1 and SPAK/OSR1 mediate the hypotonic stress signaling pathway to the transporters and may provide insights into the mechanisms by which WNK1 regulates ion balance.
The WNK1 and WNK4 genes have been found to be mutated in some patients with hyperkalemia and hypertension caused by pseudohypoaldosteronism type II. The clue to the pathophysiology of pseudohypoaldosteronism type II was its striking therapeutic response to thiazide diuretics, which are known to block the sodium chloride cotransporter (NCC). Although this suggests a role for WNK1 in hypertension, the precise molecular mechanisms are largely unknown. Here we have shown that WNK1 phosphorylates and regulates the STE20-related kinases, Ste20-related proline-alanine-rich kinase (SPAK) and oxidative stress response 1 (OSR1). WNK1 was observed to phosphorylate the evolutionary conserved serine residue located outside the kinase domains of SPAK and OSR1, and mutation of the OSR1 serine residue caused enhanced OSR1 kinase activity. In addition, hypotonic stress was shown to activate SPAK and OSR1 and induce phosphorylation of the conserved OSR1 serine residue, suggesting that WNK1 may be an activator of the SPAK and OSR1 kinases. Moreover, SPAK and OSR1 were found to directly phosphorylate the N-terminal regulatory regions of cation-chloride-coupled cotransporters including NKCC1, NKCC2, and NCC. Phosphorylation of NCC was induced by hypotonic stress in cells. These results suggested that WNK1 and SPAK/OSR1 mediate the hypotonic stress signaling pathway to the transporters and may provide insights into the mechanisms by which WNK1 regulates ion balance. WNK 2 kinases (with no lysine (K)) comprise a family of novel serine/ threonine protein kinases conserved among multicellular organisms (1,2). The kinase domain of this family is unique in that it lacks the conserved lysine residue previously known to be important for ATP binding in the catalytic site. A conserved lysine in subdomain I of the WNK kinases is thought to be essential for their catalytic activity (1,3). There are four human WNK family members. WNK1 and WNK4 were identified as genes mutated in families of patients with pseudohypoaldosteronism type 2 (PHA II) human hypertension (4). The WNK1 gene mutation consists of a deletion within its first intron, leading to increased expression, whereas mutations in the WNK4 gene are found in the coding sequence near the coiled-coil domains.
PHA II patients are treated by thiazide diuretics, which function as antagonists of the Na-Cl cotransporter (NCC, also known as thiazidesensitive cotransporter (TSC) or Na-Cl transporter (NCCT)), suggesting that the activity of NCC could be potentially involved in the development of PHA II. Previous studies using Xenopus oocytes have showed that wild-type WNK4 inhibits the surface expression and the activity of NCC, whereas one of the disease-causing mutants of WNK4 attenuated this inhibitory effect (5,6). However, comparison of wild-type and mutant WNK4 revealed no differences in NCC surface expression in polarized epithelial cells (MDCK II cells), suggesting that the regulation of intracellular NCC localization by WNK4 might be unrelated to the pathogenesis of PHA II (7). WNK4 has also been reported to inhibit surface expression of the secretory potassium channel (ROMK) and Cl Ϫ base exchanger SCL26A6 (CFEX), in addition to NCC, in Xenopus oocytes (8,9). Furthermore, the disease-causing mutant of WNK4 was shown to increase paracellular chloride permeability in MDCK cells (10,11). In contrast to WNK4, little is known about the functions and regulation of WNK1. WNK1 does not directly affect NCC activity in Xenopus oocytes but has been shown to modulate the inhibitory effects of WNK4 on NCC (6). Although WNK1 activates the MEK5-ERK5 pathway and phosphorylates synaptotagmin, there is no direct evidence to link WNK1 and transporter function (12,13). Moreover, a recent study reported that WNK1 regulates the epithelial sodium channel through glucocorticoid-inducible kinase (SGK1), but the mechanisms of SGK1 activation by WNK1 have not been fully elucidated (14,15).
NCC contains 12 transmembrane domains and is closely related to the Na-K-2Cl cotransporters, NKCC1 and NKCC2 (16 -18). NCC and NKCC2 are expressed in the kidney and function in renal salt reabsorption, whereas NKCC1 is expressed ubiquitously and plays a key role in epithelial salt secretion and cell volume regulation. NKCC1 cotransport activity is controlled by the phosphorylation/dephosphorylation of several threonine and serine residues in response to decreases in cell volume or intracellular [Cl]. Three of the phosphoacceptors in the N terminus of NKCC1 have been identified, and the amino acid sequences surrounding these residues are highly conserved among the members of the cation-chloride-coupled cotransporter family, suggesting that phospho-regulatory mechanisms are conserved among these cotransporters (19). Although several protein kinases, such as SGK1 and c-Jun N-terminal kinase, have been proposed as candidates for the activators of NKCC1, there is no evidence showing that any kinase directly phosphorylates NKCC1 in vivo (20,21). It has been previously reported that the STE20-related kinases, SPAK (also called PASK (proline-alanine-rich Ste-20-related kinase)) and OSR1, bind to the N-terminal regions of the cation-chloride cotransporters KCC3, NKCC1, and NKCC2 (22). Moreover, WNK4 has been identified as a putative SPAK-binding protein by yeast two-hybrid screening (23). Expression of a dominant-negative form of SPAK decreased cotransport activity and phosphorylation of NKCC1 (24). Therefore, SPAK is thought to play an important role in the regulation of NKCC1.
In this study, we have identified SPAK as a WNK1-binding protein and provided evidence that WNK1 acts as a direct activator of SPAK and OSR1. Moreover, we have shown that SPAK and OSR1 directly phosphorylate the N-terminal regulatory regions of NKCC1, NKCC2, and NCC. These results have raised the possibility that WNK1 regulates the activities of a number of transporters through SPAK/OSR1 and that this regulation contributes to the pathogenesis of hypertension.
Yeast Two-hybrid Screening and MS/MS Analysis-Full-length human WNK1 was fused to the GAL4 DNA-binding domain, and yeast two-hybrid screening was performed as described (26). LC-MS/MS analysis was performed as described previously (27). Briefly, FLAG-WNK1 was expressed in HEK293 cells and immunoprecipitated by anti-FLAG antibody. The immunocomplexes were eluted with a FLAG peptide and then digested with Achromobacter protease I, and the resulting peptides were analyzed using a nanoscale LC-MS/MS system.
Antibodies-Antibody to WNK1 was generated with a peptide corresponding to the N-terminal 18 amino acids of human WNK1. Anti-SPAK/OSR1 antibody was prepared by immunizing rabbits with a keyhole limpet hemocyanin-conjugated synthetic peptide (RAKKVR-RVPGSSG, amino acids 362-374 of human SPAK and amino acids 314 -326 of human OSR1). Anti-phospho OSR1 polyclonal antibody was produced in rabbit by immunizing with a keyhole limpet hemocyanin-conjugated synthetic phosphopeptide corresponding to residues 319 -332 of OSR1 (RRVPGS (pS) GRLHKTE). The serum was affinitypurified with phosphopeptide-and the non-phosphopeptide-conjugated cellulose. Monoclonal antibodies against FLAG and T7 were purchased from Sigma and Novagen, respectively.
Immunoprecipitation and Immunoblotting-HEK293 and MDCK cells were cultured in Dulbecco's modified Eagle's medium with standard supplements. HEK293 cells were transfected with the indicated plasmids by the calcium phosphate precipitation method at 50 -80% confluence. After 24 h after transfection, cells were lysed in 1% Triton X-100 lysis buffer (50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1 mM EGTA, 1 mM EDTA, 1% Triton X-100, 1 mM orthovanadate, 50 mM sodium fluoride, 1 mM phenylmethylsulfonyl fluoride, 1 g/ml aprotinin, 1 mM dithiothreitol, 0.27 M sucrose). Protein complexes were immunoprecipitated with the indicated antibodies according to standard procedures. Isolated protein complexes were separated by SDS-PAGE and transferred to polyvinylidene fluoride membranes (Hybond-P, Amersham Biosciences). Blots were probed with the indicated antibodies, and bound antibodies were visualized using horseradish peroxidase-conjugated secondary antibodies (Amersham Biosciences) and Western Lightning chemiluminescence reagent Plus (PerkinElmer Life Sciences) according to standard procedures. For 32 P labeling, transfected cells were incubated for 6 h with [ 32 P]phosphate (1 mCi/ml) and then lysed as described above.
Expression of GST-tagged Fusion Proteins in Escherichia coli-The pGEX constructs were transformed into E. coli BL21 cells, and a 0.5-liter culture was grown at 37°C to an A 600 of 0.8. Isopropyl-D-galactosidase was added to final 0.2 mM to induce protein expression, and the cells were cultured for another 16 h at 20°C. Cells were harvested by centrifugation and lysed by freeze-thawing and sonication in 1% Triton X-100 lysis buffer. Glutathione S-transferase (GST)-tagged proteins were puri-FIGURE 1. Association of WNK1 with SPAK and OSR1. SPAK and OSR1 were immunoprecipitated (IP) from 0.4 mg of lysates prepared from HEK293 cells (left) or MDCK cells (right) with 2 g of SPAK/ OSR1 antibody, fractionated by SDS-PAGE, and immunoblotted (IB) with the indicated antibodies. WNK1 was immunoprecipitated from 1 mg of HEK293 lysates with 10 g of the WNK1 antibody (middle). As a control, immunoprecipitations were also performed in parallel experiments with rabbit IgG (Chemicon International).
fied from the lysates using glutathione-Sepharose and eluted from the resin in 10 mM glutathione.
RESULTS
Identification of the STE20-related Kinases SPAK and OSR1 as WNK1-associated Molecules-To identify a protein(s) that physically associates with WNK1, we employed two strategies: yeast two-hybrid screening and FLAG tag immunoprecipitation assays coupled with LC-MS/MS analysis. Several positive clones and putative binding proteins were identified, including STE20-like kinase SPAK. As SPAK was identified by both approaches, we analyzed it further. To investigate whether endogenous SPAK is associated with WNK1 in living cells, we generated antibodies to a peptide of SPAK (the sequence is a 100% match of the corresponding sequence of OSR1, a kinase closely related to SPAK) and a peptide of human WNK1. The anti-SPAK/OSR1 antibody reacted with three bands of 62, 60, and 58 kDa in HEK293 cell extracts and with two bands of 62 and 58 kDa in MDCK cell extracts (Fig. 1). The specificity of anti-SPAK/OSR1 antibody was confirmed by the competition mOSR1) were co-transfected with various WNK1 and WNK4 mutants as indicated. Protein complexes were co-immunoprecipitated (IP) using either T7 antibody (␣T7) or FLAG antibody (␣FLAG) and then immunoblotted (IB) with ␣T7 or ␣FLAG. hWNK, human WNK. DECEMBER 30, 2005 • VOLUME 280 • NUMBER 52 experiments using the antigen peptide used for immunization (Supplemental data, Fig. S1). The anti-WNK1 antibody reacted with a band of ϳ250 kDa in several cell lines (Fig. 1). We subjected a lysate from HEK293 cells to immunoprecipitation with anti-WNK1 antibody and then immunoblotted the precipitates with anti-SPAK/OSR1 antibody. SPAK and OSR1 were found to coprecipitate with WNK1 ( Fig. 1, middle). WNK1 was also detected with immunoprecipitates of SPAK and OSR1 from HEK293 cells or MDCK cells, indicating that these form an endogenous complex in living cells (Fig. 1, left and right).
NCC Phosphorylation by WNK1 and SPAK/OSR1
SPAK and OSR1 bind to the cation-chloride transporters KCC3, NKCC1, and NKCC2 through a putative binding motif (R/K)FX(V/I) within the N-terminal tails of the transporters (22). In a yeast twohybrid screen, SPAK was found to bind to WNK4, which contains a putative binding motif (23). A search of the amino acid sequences of human WNK1 revealed the presence of four putative SPAK-binding motifs ( Fig. 2A). To further investigate the role of these motifs in WNK1 binding to STE20-related kinases, we co-expressed various forms of WNK1 with SPAK or OSR1 and performed co-immunoprecipitation experiments. Wild-type WNK1 was found to bind strongly to rat SPAK and mouse OSR1. WNK1(F1258A/F1869A) or WNK1(F1946A/ F1958A), in which we mutated two of four SPAK-binding motifs by replacing the Phe residues with Ala, moderately deceased binding to SPAK/OSR1, and the additional mutations to the binding motifs of WNK1 showed gradually weak binding to SPAK/OSR1 depending on the number of mutations (Fig. 2B). These results suggested that WNK1 can associate with SPAK/OSR1 through four putative binding motifs.
WNK1 Phosphorylates the Evolutionary Conserved Serine Residues of SPAK and OSR1-To achieve specific and efficient phosphorylation of their substrates, many Ser/Thr protein kinases interact with the substrate via sites distinct from the phosphoacceptor sequence (28,29). We first investigated whether SPAK and OSR1 are substrates of WNK1 kinase. By expression in bacteria, we produced a GST-tagged rat SPA-K(KM) in which the Lys residue within subdomain II was replaced with Met. FLAG-tagged WNK1 was isolated from HEK293 cells and added to GST-SPAK(KM), and the mixture was analyzed by an in vitro kinase assay in the presence of [␥-32 P]ATP. In addition to WNK1 autophosphorylation, we observed that SPAK(KM) was phosphorylated in a timedependent manner (Fig. 3A, Wild type). This phosphorylation was dependent on the kinase activity of WNK1 since phosphorylation was barely detectable when a kinase-dead form of WNK1 was used (Fig. 3A, D368A). To prove that SPAK is directly phosphorylated by WNK1 and not by putative kinases complexed with WNK1, we isolated several bacterially expressed WNK1 fragments tagged with GST. In vitro kinase analysis showed that the purified wild-type WNK1-(1-665) directly phosphorylated SPAK, whereas three forms of kinase-dead WNK1, K233M, D368A, or S382A, did not (Fig. 3B). Thus, our results indicated that the STE20-like kinase SPAK is a direct substrate of WNK1.
We next performed in vitro kinase assays using several deletion mutants of SPAK (Fig. 4A). GST-SPAK-(348 -553) and GST-SPAK-(369 -553), but not GST-SPAK-(400 -553), were phosphorylated by WNK1, indicating that the WNK1 phosphorylation site(s) is located in the C-terminal regulatory region (369 -400) of SPAK (Fig. 4B). Fulllength GST-SPAK and GST-SPAK-(1-399) were also phosphorylated by WNK1, but the phosphorylation of these proteins is weaker than that of the C-terminal regulatory domain alone (Fig. 4B). It seems likely that the WNK1 phosphorylation site of SPAK is covered by the N-terminal region of SPAK including the kinase domain. There are four Ser/Thr residues, Ser-379, Ser-380, Thr-386, and Ser-394, within the 369 -400 fragment of SPAK (Fig. 4C). To determine which residues are the site(s) of phosphorylation, various SPAK mutants were tested as substrates.
The fragment 348 -553(S380A) was not phosphorylated, indicating that Ser-380 is a major site of WNK1 phosphorylation (Fig. 4B). MS/MS analysis of tryptic peptides generated from phosphorylated recombinant SPAK proteins also indicated that Ser-380 is the site of phosphorylation (data not shown). Two small conserved regions were found in the C-terminal regions of SPAK and OSR1 and were named the PF1 and PF2 domains, respectively (Fig. 4A). Ser-380 of rat SPAK is located within the PF1 domain, and this Ser residue was highly conserved in other SPAK-related kinases, OSR1 and Drosophila Fray (Fig. 4C). We mutated the equivalent serine residue (Ser-325) in OSR1 to an Ala residue and examined phosphorylation by the WNK kinases. We found that both FLAG-WNK1 and FLAG-WNK4 were able to phosphorylate the full-length kinase-dead forms of SPAK and OSR1 but that mutation of the Ser residues to Ala of SPAK and OSR1 completely abolished or significantly reduced the phosphorylation (Fig. 4D). These data suggested that WNK1 phosphorylates these specific serine residues in SPAK-related kinases.
Ser-325 mutation of OSR1 Causes Its Activation-To examine the role that Ser phosphorylation of SPAK-related kinases plays in their regulation, we generated several mutants of OSR1. It has been recently reported that the N-terminal regulatory domain of p21-activated pro- tein kinase (PAK) is a physiological substrate of OSR1 (30). Wild-type GST-OSR1 exhibited a small amount of autophosphorylation, detectable by long exposure of the gel, but GST-OSR1KM showed no detectable activity (Fig. 5A). Since mutation of OSR1 Ser-325 to Asp mimics phosphorylation of this site, we generated GST-OSR1(S325D) and tested its activity. GST-OSR1(S325D) showed increased phosphorylation of GST-PAK3-(65-136), relative to wild-type OSR1 (Fig. 5B), indicating that the mutation Ser-325 to Asp of OSR1 causes constitutively activation of OSR1. Surprisingly, the mutation of the same site to Ala or Gly, which was expected to abolish phosphorylation, also resulted in causing constitutively activation of OSR1 (Fig. 5B, GST-OSR1(S325A), and data not shown). To further investigate the mechanisms of OSR1 activation, we examined several truncated forms of OSR1 (Fig. 5C). OSR1-(1-433) and OSR1-(1-344), truncated proteins lacking the PF2 domain, exhibited higher kinase activities than wild-type OSR1. In contrast, OSR1-(1-300), a truncated protein lacking both the PF1 and the PF2 domains, showed no detectable kinase activity (Fig. 5D). These results suggested that the PF1 domain of OSR1 is essential for kinase catalytic activity and that the PF2 domain is involved in regulating the catalytic activity.
SPAK and OSR1 Directly Phosphorylate the N-terminal Tail of Cation-Chloride Cotransporters-The activity of NKCC1 is regulated by phosphorylation/dephosphorylation, and examination of phosphorylation sites on NKCC1 revealed that three Thr residues in the N-terminal region, Thr-184, Thr-189, and Thr-202, are necessary for transport activity (19). NKCC2 and NCC are also members of the family of cationchloride-coupled cotransporters, and the N-terminal regions of both cotransporters are conserved with that of NKCC1 (Fig. 6A). To test the possibility that SPAK and OSR1 are responsible for the phosphorylation of NKCC1, and also regulate NKCC2 and NCC, we prepared GSTtagged N-terminal fragments of each of these transporters. Both FLAG-SPAK and FLAG-OSR1 isolated from HEK293 cells were found to phosphorylate GST-NKCC2-(1-181), NKCC1-(1-289) and NCC-(1-138) (Fig. 6B). These proteins were also phosphorylated by GST-OSR1(S325D) in vitro (Fig. 6C), suggesting that phosphorylation was direct. The intensity of each phosphorylated band is comparable with or much higher than that observed using the GST-PAK3-(65-136) substrate, indicating that these cotransporters are good substrates for SPAK and OSR1. We next investigated the phosphorylation site(s) in the thiazide-sensitive transporter NCC using a series of mutations of the Ser/Thr residues that correspond to Thr-184, Thr-189, and Thr-202 of NKCC1. The T53A, T58A, and S71A mutants of NCC showed slightly reduced phosphorylation when compared with wild-type NCC. Moreover, there was no detectable increase in phosphorylation of the triple mutant, T53A/T58A/S71A (Fig. 6D). These results suggested that at least in vitro, SPAK/OSR1 directly phosphorylates the conserved Ser/ Thr residues within the N-terminal regulatory region of NCC corresponding to shark NKCC1.
WNK1 and SPAK/OSR1 Are Activated by Low Cl Ϫ Hypotonic Stress-Recent reports have shown that activation of shark and human Na-K-Cl cotransporter (NKCC1) by low Cl Ϫ hypotonic stimulation is inhibited in cells expressing a dominant-negative mutant of SPAK and that the phosphorylation state of NKCC1 correlates with that of SPAK (24). Therefore, we tested whether hypotonic and low Cl Ϫ conditions in cells lead to the activation of SPAK/OSR1. HEK293 cells were incubated with isotonic or low Cl Ϫ hypotonic buffer, and endogenous WNK1 was immunoprecipitated and subjected to in vitro kinase assay using GST-SPAK-(348 -553) as a substrate. We found that WNK1 kinase activity increased within 5 min and was sustained for at least 60 min by incubation in low Cl Ϫ hypotonic conditions (Fig. 7A). We next examined the effect of low Cl Ϫ hypotonic stimulation on phosphorylation and activation of SPAK/OSR1. When HEK293 cells transfected with an empty vector were incubated with hypotonic and low Cl Ϫ buffer, SPAK/OSR1 autophosphorylation and kinase activity against GST-PAK3-(65-136) were increased (Fig. 7B). The phosphorylation of Ser-325 in OSR1 also occurred in cells under hypotonic and low Cl Ϫ conditions. These results, together with those obtained in vitro, suggested that WNK1 functions as an activator for SPAK/OSR1 in response to Cl Ϫ hypotonic stress in cells.
To clarify the role of the WNK/SPAK/OSR1 pathway in the phosphorylation of NCC, we performed a 32 P labeling experiment. Because we were unable to detect endogenous NCC in HEK293 cells or other cell lines by immunoblotting and immunostaining, T7-tagged mouse NCC was expressed in HEK293 cells. As shown in Fig. 7C, NCC was highly phosphorylated under low Cl Ϫ hypotonic conditions. This result agreed well with the in vitro phosphorylation data and suggested that activation of the WNK1/SPAK/OSR1 pathway leads to the enhanced phosphorylation of NCC in cells.
DISCUSSION
In this study, we identified the STE20-like kinases, SPAK and OSR1, as targets of WNK1. WNK1 phosphorylates SPAK/OSR1 at a Ser residue within the PF1 domain, which is highly conserved among the mammalian SPAK/OSR1, Drosophila Fray, and Caenorhabditis elegans Y59A8B.23 gene products. WNK4 and WNK3 were also able to phosphorylate this residue ( Fig. 2F and data not shown). In addition, C. elegans WNK1 phosphorylated the conserved Ser residue of the C. elegans SPAK/OSR1 homolog in vitro. 3 Thus, phosphorylation of SPAK/ OSR1 by WNK kinase may be a common regulatory mechanism among species.
OSR1 mutants having point mutants in their PF1 domain or trunca-3 T. Moriguchi and H. Shibuya, unpublished data. tion mutants lacking the PF2 domain exhibited higher kinase activities than wild-type OSR1 (Fig. 5). Moreover, a truncated mutant lacking the PF1 domain of OSR1 displayed no detectable kinase activity (Fig. 5D). It has been reported that many STE20-related kinases contain autoinhibitory domains and that removal of these regulatory domains results in a significant increase in kinase activity (31). Therefore, our results suggested that mutation of Ser-325 may cause constitutive activation of kinase activity due to conformational changes resulting from removal of the autoinhibitory PF2 domain rather than by an effect of negative charge. In the case of OSR1, the PF1 domain appeared to play an essential role in kinase catalytic activity, whereas the PF2 domain might be involved in regulating catalytic activity. Mutation of the site in OSR1 that is phosphorylated by WNK1 resulted in enhanced OSR1 kinase activity, indicating that WNK1 plays an important role in the activation of SPAK and OSR1. However, in vitro phosphorylation of recombinant SPAK and OSR1 proteins by WNK1 or co-expression of SPAK and OSR1 with WNK1 in cells resulted in only weak activation of SPAK and OSR1 (data not shown). Therefore, WNK1 might not be the sole activator of SPAK and OSR1. The phosphorylation of multiple sites by several kinases has been shown to be required for the full activation of some kinases. For example, Akt is activated by phos-phorylation on two residues, one in the activation loop of the kinase domain and the other located C-terminal to the catalytic domain (32). Phosphorylation of these sites in Akt is catalyzed by two kinases, 3-phosphoinositide-dependent kinase-1 (PDK1) and another tentatively called PDK2. Further studies will be needed to fully identify the kinase(s) that phosphorylates and activates SPAK/OSR1.
We also demonstrated that SPAK and OSR1 directly phosphorylate not only NKCC1 but also NKCC2 and NCC. These cation-chloridecotransporters contain 12 transmembrane domains that are flanked by hydrophilic N-and C-terminal domains. It has been previously shown that three phosphorylation sites on the N terminus of shark NKCC1, Thr-189, Thr-184, and Thr-202, are necessary for full activation of transport activity (19). The sites of OSR1 phosphorylation in NCC include the three conserved Thr residues within the N-terminal regulatory region of the cation-chloride-coupled cotransporter family (Fig. 5), suggesting that WNK1 and SPAK/OSR1 could contribute to the regulation of transport activity. In fact, this hypothesis is supported by the recent finding that expression of both SPAK and WNK4 with NKCC1 in Xenopus oocytes results in a significant increase in NKCC1 activity (33). NCC, the mammalian thiazide-sensitive Na-Cl transporter, is expressed at the apical membrane of the distal convoluted tubule. Loss-of-function mutations in NCC have been shown to cause Gitelman syndrome, a disease characterized by salt wasting, hypokalemic metabolic alkalosis, and hypocalciuria. These clinical symptoms are the opposite of the symptoms observed in PHA II patients. The mutations in WNK1 associated with PHA II are intron deletions that cause increased expression of WNK1. Our findings supported the hypothesis that WNK1 phosphorylates and activates NCC, and this may provide a good explanation for pathogenesis of PHA II. However, the physiological relevance of these phosphorylation events to hypertension must be further evaluated by examining the regulation of NCC transport activity. In contrast to NCC, NKCC2, the bumetanide-sensitive cotransporter, is expressed in the apical membrane of the thick ascending limb of Henle's loop. Disruption of the NKCC2 gene causes Bartter syndrome, an autosomal recessive disease characterized by metabolic alkalosis, hypokalemia, and hypercalciuria accompanied by a reduction in arterial blood pressure. Thus, it might be possible that activation of NKCC2 could account for hyperkalemia and hypertension in patients harboring WNK1 mutations.
Tissue distribution reveals that WNK1 is widely expressed (1, 2). WNK1-deficient mice exhibit embryonic lethality, which indicates that WNK1 has important functions in many tissues, in addition to the kidney (34). NKCC1, SPAK, and OSR1 are also ubiquitously expressed and have multiple functions, such as regulation of cell volume, modulation of neuron excitability, AP-1-dependent gene expression, and regulation of the actin cytoskeleton (17,30,35). In this study, we identified a signaling pathway consisting of the PHA II disease-associated kinase WNK1 and the STE20-related kinases SPAK and OSR1, which culminates in the phosphorylation of several cotransporters. We hope that these findings will contribute to our understanding of the biological function for WNK1, not only in the pathogenesis of hypertension but also in other processes. FIGURE 7. Activation of WNK1 and SPAK/OSR1 by low Cl ؊ hypotonic stress. A, HEK293 cells were incubated in isotonic buffer (Control) or low Cl Ϫ hypotonic buffer (Hypo, low Cl Ϫ ) for the indicated times. The kinase activity of endogenous WNK1 was measured by an immune complex kinase assay using GST-SPAK-(348 -553) as a substrate. The amount of immunoprecipitated WNK1 was detected by immunoblotting with the WNK1 antibody (immunoprecipitation (IP), ␣WNK1; immunoblotting (IB), ␣WNK1), and the phosphorylated GST-SPAK-(348 -553) was detected by an image analyzer (BAS 2500) ( 32 P). B, HEK293 cells were incubated in low Cl Ϫ hypotonic buffer for the indicated times. Endogenous SPAK/OSR1 was immunoprecipitated with the SPAK/OSR1 antibody, and subjected to an immune complex kinase assay using GST-PAK3-(65-136) as a substrate. The amount of SPAK/OSR1 in each immune complex was determined by immunoblotting (immunoprecipitation, ␣SPAK/OSR1; immunoblotting, ␣SPAK/ OSR1). To monitor the Ser phosphorylation state of OSR1, lysates prepared from transfected cells were subjected to immunoblotting with the phospho-OSR1 antibody. Similar results were obtained in three different experiments. C, phosphorylation of T7-tagged NCC in HEK293cells. HEK293 cells were transfected with T7-NCC, metabolically labeled with [ 32 P]phosphate for 6 h, and then placed with isotonic buffer (Control) or low Cl Ϫ hypotonic buffer for the indicated times prior to lysing. T7-NCC was immunoprecipitated using ␣T7 antibody. | 6,203.4 | 2005-12-30T00:00:00.000 | [
"Medicine",
"Biology"
] |
CFA with binary variables in small samples: a comparison of two methods
Asymptotically optimal correlation structure methods with binary data can break down in small samples. A new correlation structure methodology based on a recently developed odds-ratio (OR) approximation to the tetrachoric correlation coefficient is proposed as an alternative to the LPB approach proposed by Lee et al. (1995). Unweighted least squares (ULS) estimation with robust standard errors and generalized least squares (GLS) estimation methods were compared. Confidence intervals and tests for individual model parameters exhibited the best performance using the OR approach with ULS estimation. The goodness-of-fit chi-square test exhibited the best Type I error control using the LPB approach with ULS estimation.
INTRODUCTION
In the behavioral and social sciences, datasets often consist of binary variables. For example, essentially all test data are binary because multiple choice, true/false, and other question formats are usually coded in terms of whether the answer is correct or not. Many other types of tests require a diagnosis; e.g., classifying someone as depressed, mentally ill, or having a learning disability, also results in binary data. A critical question in such data is whether they represent indicators of underlying latent categorical variables, or, instead, are indicators of underlying continuous latent variables. In medical diagnosis, such as the outcome of an HIV test, the latent attribute is often considered binary, i.e., a person is either HIV positive or HIV negative. With most educational and psychological data, on the other hand, it is typically believed that the latent construct of interest is continuous, and a positive score on a binary indicator simply means that a certain threshold on the latent trait has been exceeded.
When a distinction is made between continuous latent attributes and their observed binary indicators, the Pearson correlations among the binary variables will not accurately represent the correlations among the latent attributes. The oldest measure of a relationship between two dichotomous variables that represent categorized continuous variables is the tetrachoric correlation coefficient (Pearson, 1900). In the population, the tetrachoric correlation is defined simply as a product-moment correlation between two underlying quantitative variables that have a joint bivariate normal distribution. The sample tetrachoric correlation is computed on two dichotomous variables and represents an estimate of the association between the underlying continuous constructs.
The matrix of sample tetrachoric correlations can be used to conduct a factor analysis of binary variables and to fit more general structural equation models (Christoffersson, 1975;Muthén, 1978Muthén, , 1984Muthén, , 1993Lee et al., 1990Lee et al., , 1995Jöreskog, 1994Jöreskog, , 2002Jöreskog, -2005. The approach of Christoffersson (1975) obtains parameter estimates directly by fitting the model to sample proportions using a generalized least squares (GLS) approach based on the asymptotic covariance matrix of sample proportions. This approach has recently been extended and generalized (Maydeu-Olivares and Joe, 2005;Maydeu-Olivares, 2006). Muthén (1978Muthén ( , 1984 proposed a less computationally intensive approach which first estimates sample thresholds and sample tetrachoric correlations, then fits the model to sample tetrachoric correlations using a GLS approach based on the asymptotic covariance matrix of the tetrachoric estimator. Lee et al. (1995) have proposed yet another approach that estimates thresholds and tetrachorics simultaneously (for each pair of variables) rather than sequentially and incorporates continuous variables.
Whether one fits the model to sample frequencies or to sample tetrachorics, this methodology is mathematically and computationally complex. The definition of the tetrachoric correlation itself involves an integral (see below), and requires complex computational algorithms (Kirk, 1973;Brown, 1977;Divgi, 1979). Many approximations to this coefficient have been proposed to reduce the computational intensity at a time when computer time was limited and costly. At least ten simple approximations have been proposed over the years, starting with Pearson (1900) and continuing on with Walker and Lev (1953), Edwards (1957), Lord and Novick (1968, p. 346), Digby (1983), and two more by Becker and Clogg (1988). Even though nowadays computers can handle large tasks, some of these proposed approximations are so good the question naturally arises whether they can be used directly to fit factor analytic and more general correlation structure models. These approximations may be particularly useful in smaller sample sizes, when more computationally intensive approaches may break down. For example, simulation work implies that sample sizes of 100, 250, or even 1000 may be needed at a minimum for these methods, depending on the model and the particular version of the estimation method (Flora and Curran, 2004;Beauducel and Herzberg, 2006;Nussbeck et al., 2006). Yet many researchers have smaller data sets and are faced with understanding their latent structure. Small samples are common in applications where measurements are expensive (e.g., fMRI measurements of absence or presence of activity in multiple brain regions), when specific types of participants are difficult to obtain (e.g., Parkinson's patients, executives, identical twins), or when research volunteers must be monetarily compensated for their participation in a lengthy assessment. When the purpose of the study is to assess the tau-equivalence of a unidimensional scale, a large sample size may not be required to accurately estimate the common factor loading. Bonett and Price (2005) proposed yet another approximation based on the odds ratio (OR) which improves on Becker and Clogg (1988) in terms of accuracy. They also provided asymptotic standard errors for this approximation. Additionally, Bonett and Price (2007) suggested that this methodology could be adapted to correlation structure models if a consistent estimator of the covariance matrix of the new tetrachoric approximation is obtained. In this paper, we develop the technical details for this new correlation structure methodology based on the Bonett and Price (2005) coefficient, and we compare the performance of this odds-ratio methodology (hereafter, OR) against the methodology of Lee et al. (1995, hereafter, LPB). The LPB methodology is available in EQS (Bentler, 2006).
Three simulation studies were conducted to compare the OR and LPB approaches. In Study 1, sample tetrachoric correlations and their standard errors were compared, without any structured model. In Study 2, a confirmatory factor analysis (CFA) model was fit to data using GLS with either the LPB or the OR asymptotic covariance matrix estimator. In Study 3, a CFA model was fit to data using unweighted least squares (ULS) estimation with robust standard errors and test statistics (Satorra and Bentler, 1994) using either the LPB or the OR asymptotic covariance matrix estimator.
CORRELATION STRUCTURE MODELS WITH BINARY VARIABLES
Without loss of generality, assume that each observed variable takes on values 1 or 2. For each pair of binary variables, a 2 × 2 contingency table can be computed, using either sample frequencies or sample probabilities. Table 1 illustrates the notation used in such contingency tables. Here, f ij is the sample frequency, p ij is the sample probability that the pair of variables takes on values (i, j), and the "+" notation is used to indicate marginal sample frequencies and probability. We add 0.5 to each cell in the frequency table before computing sample probabilities. It can be shown that adding 0.5 to each cell frequency of the 2 × 2 table minimizes the bias of the log-transformed odds ratio (Agresti, 2013, p. 617). This small sample correction disappears asymptotically.
Let z = (z 1 , ..., z s ) be an s × 1 vector of observed binary variables. Let y = (y 1 , ..., y s ) be an s × 1 vector of underlying continuous variables, and we assume thaty ∼ N(0, ). The variables z were obtained by categorizing variables y as follows: where a = 1, ..., s. The threshold h a for each variable is related to the probabilities for z a as follows: where (x) is the cumulative distribution function for standard normal distribution. Thus, observed marginal probabilities p 2+ can be used to obtain estimates of thresholds. Without loss of generality, assume that diag( ) = I, since the scale for the underlying continuous variables generally cannot be recovered after categorization has occurred. The off-diagonal elements of are tetrachoric correlations. The tetrachoric correlation ρ ab between y a and y b is related to the probabilities of z a and z b as follows: Thus, observed sample probabilities p 22 from each bivariate contingency table can be used to compute an estimate of the tetrachoric correlation, but the computations involved are complicated. We assume that the continuous latent variables y are generated by a latent variable model. In this study, we hypothesize that the underlying continuous variablesy were generated from a factor model: where is the s × m matrix of factor loadings with many elements fixed to 0, ξ is the m × 1 vector of factors, and ζ is the s × 1 vector of errors. This implies the following covariance structure for : where = cov(ξ ) with diag( ) = I for model identification, = cov(ζ ), and θ is a vector of all model parameters (i.e., factor loadings and factor covariances). The diagonal of (θ) is fixed to be 1 and hence the parameters in are dependent on other parameters and do not need to be directly estimated.
THE OR METHOD
Instead of computing the tetrachoric correlation as defined implicitly by (2), the OR method computes another coefficient of association between z a and z b , defined in the population as where π in the numerator refers to the irrational number (3.1415. . . ),w ab = π 11 π 22 π 12 π 21 , π ij is the population counterpart of p ij in Table 1 corresponding to variables z a and z b , c = 0.5(1 − |π 1+ − π +1 | /5 − (0.5 − π min ) 2 ), and π min is the smallest marginal probability. In the sample, we estimate the odds ratio asŵ ab = (f 11 + 0.5)(f 22 + 0.5) (f 12 + 0.5)(f 21 + 0.5) so that the sample odds ratio is defined even if the frequency table has zero counts. Estimates of cell probabilities are also computed from the 2 × 2 table of frequency counts with the 0.5 additions to obtainĉ and the following tetrachoric estimatê ρ * ab = cos π 1 +ŵˆc ab .
(5) Bonett and Price (2005) found that this approximation to the tetrachoric correlation was more accurate than the existing most accurate approximation of Becker and Clogg (1988). The quality of the approximation in (4) varies as a function of the population tetrachoric correlation and of the population thresholds for the two variables. We have studied the difference between the tetrachoric correlation implicitly defined by (2) and the approximation in (4) using the plotting feature of Mathematica5. It was found that the larger the correlation between the variables, the greater the potential bias was, and the more extreme the thresholds were, as long as they were oppositesigned, the worse the approximation was. Figures 1, 2 illustrate the approximation error of ρ * ab . In Figure 1, the difference (ρ * ab − ρ ab ) is plotted as a function of the tetrachoric correlation ρ ab when thresholds are fixed to −0.8 and 0.3. The approximation gets worse for higher absolute values of the correlation, peaking when the correlation is about 0.9, at which point the OR approximation underestimates the tetrachoric by 0.08. If the threshold −0.8 is replaced with −1.5, the approximation error at this point reaches −0.13. Of course, when thresholds are high and opposite-signed, all existing methods will have trouble because the cell probabilities will be close to zero. Figure 2 plots ρ * ab − ρ ab as a function of one threshold, fixing the other threshold to 0.8 and the tetrachoric correlation to 0.5. The approximation error is minimal for any positive value of the other threshold, and does not exceed 0.08 if the other threshold is less extreme than −1.2. For high negative values of this threshold, however, the approximation error becomes considerable. Again, this is the situation where the standard tetrachoric approaches tend to break down as well. We will provide some empirical evidence on the breakdown of these estimators below. A particular advantage of the OR method is that an estimate of the asymptotic covariance matrixV ρ * of the s(s − 1)/2 vector ρ * can be computed easily. First, the covariance matrix of the vector of log-odds ratios log (w) is computed, using standard results about multinomial distributions (e.g., Agresti, 2013). Then, the asymptotic covariance matrix of the transformation given by (5) is computed using the delta method. In this step,ĉ is treated as a constant, since its variance is small relative to the variance of ρ * (Bonett and Price, 2005). The resulting expressions for elements ofV ρ * are simple compared to the complicated expressions for covariances of the tetrachoric correlations, and can be easily programmed using matrix-based languages such as R, SAS IML, Gauss, or Matlab. Details of the derivation and the typical elements ofV ρ * are given in the Appendix (see Supplementary Material).
In our OR approach, GLS parameter estimates are obtained by minimizing the fitting function: where θ is the vector of parameters from (θ) = + ψ. We note that becauseρ * is consistent for ρ * in (4) but not for ρ, the vectorized version of implicitly defined by (2), the estimator in (6) is not consistent for θ when the model holds. Thus, this estimator should not be used in large sample sizes, but its simplicity may offer advantages at smaller sample sizes. Approximate standard errors for model parameters can be obtained from the roots of the diagonal of (ˆ * V −1 ρ * ˆ * ) −1 , whereˆ * is the matrix of model derivatives evaluated at the OR parameter estimates. An approximation to the model fit statistic can also be computed as T OR = (N − 1)F OR and referred to a chi-square distribution with s * − q degrees of freedom, but the quality of this approximation is not known.
THE LPB METHOD
The LPB method (Lee et al., 1995) was developed to handle any combination of categorical and continuous variables by estimating a correlation matrix that is a mixture of Pearson, polyserial, and polychoric correlations and obtaining an appropriate estimate of its variability. Note that a polychoric correlation between two binary variables is a tetrachoric correlation. A unique feature of the LPB approach is that it estimates sample thresholds and each polychoric correlation simultaneously. For binary variables, the LPB method is asymptotically equivalent to all other existing methods, e.g., Christoffersson (1975), Jöreskog (1994), Muthén (1984). All of these are limited information approaches, estimating each ρ ab from the corresponding 2 × 2 contingency table based on variables z a and z b .
Let i, j = 1, 2 and let f ij be the sample frequencies, as before. We again employ the 0.5 addition to these frequencies to reduce the likelihood of non-convergence. Binary variables only have one finite threshold, but for convenience let us define, for variable z a , h a,1 = −∞, h a,3 = ∞, and h a,2 = h a . Then, estimates of thresholds and tetrachoric correlations are obtained by minimizing the negative log-likelihood: Denote the maximizer of (7) byβ ab = (ĥ a ,ĥ b ,ρ ab ) . Letρ = {ρ ab } be the vector of estimated tetrachoric correlations. The LPB method obtains parameter estimates by minimizing the fitting function: where θ is the vector of parameter estimates from the correlation structure model = (θ). The matrixV ρ is the appropriate submatrix of the covariance matrix of threshold and tetrachoric estimates, computed as a triple productĤ −1ˆ Ĥ −1 , whereĤ is a block-diagonal matrix with blocks of the formĤ ab , consistently estimating H ab = lim N→∞ 1 N ∂ 2 F ab ∂β ab ∂β ab , andˆ is an estimate of the asymptotic covariance matrix of 1 Details can be found in Poon and Lee (1987) and Lee et al. (1995). Standard errors for parameter estimates can be obtained from the roots of the diagonal of (ˆ V −1 ρˆ ) −1 , whereˆ is the matrix of model derivatives evaluated at the LPB parameter estimates. The test statistic T LPB = (N − 1)F LPB is asymptotically chi-square distributed with s * − q degrees of freedom.
ROBUST APPROACHES BASED ON ULS ESTIMATION
The OR and LPB approaches described above involve GLS estimation as the fitting functions (6) and (8) involve inverses of asymptotic covariance matrices of sample estimates of tetrachoric correlations. These weight matrices grow very quickly in size as the number of variables increases and may be very unstable in smaller sample sizes. GLS estimation, although asymptotically efficient, may not perform properly in small samples (Hu et al., 1992;West et al., 1995). Evidence exists that its analogs for categorical data also perform poorly in smaller sample sizes (Muthén, 1993;Flora and Curran, 2004). ULS estimation, which uses a simpler consistent but inefficient estimator, uses corrected standard errors and test statistics (Yang-Wallentin et al., 2010;Savalei, 2014). These ULS methods exist for both continuous and categorical data (Muthén, 1993;Satorra and Bentler, 1994) and have been found to perform well in smaller samples (Yang-Wallentin and Jöreskog, 2001;Savalei and Rhemtulla, 2013). We develop and study ULS estimates with robust standard errors and test statistics for both the OR and the LPB approaches. ULS estimation with robust standard errors is implemented as follows for the OR approach. Saturated estimates of population tetrachoric correlations are obtained according to (5). Estimates of model parameters are obtained by minimizing F LSOR = (ρ * − ρ(θ)) (ρ * − ρ(θ)), and standard errors for these parameter estimates are computed from the roots of the diagonal of the robust covariance matrix (ˆ * ˆ * ) −1ˆ * Ṽ ρ * ˆ * (ˆ * ˆ * ) −1 .
The model test statistic is computed as The correction by k is intended to bring the mean of the distribution of T LSOR closer to that of a chi-square distribution with s * − q degrees of freedom, but because the OR correlations are approximate, this statistic may be a very rough approximation, and its usefulness is to be determined. The robust LPB method is developed similarly. Saturated estimates of population tetrachoric correlations are obtained from (7). Estimates of model parameters are obtained by minimizingF LSLPB = (ρ − ρ(θ)) (ρ − ρ(θ)), and the robust covariance matrix is computed We now describe the results of three simulation studies designed to investigate the performance of GLS and ULS estimation with OR and LPB methods. The goal of Study 1 was to compare the saturated estimates of tetrachoric correlations: the OR approximationρ * and the LPB estimateρ. Study 2 investigated parameter estimates, standard errors, and test statistics obtained from GLS estimation. Finally, Study 3 investigated the performance of ULS estimation with robust standard errors and test statistics. The focus was on small sample performance.
METHOD
Data were generated from a model similar to one used by Lee et al. (1995) to evaluate their method. This is a 2-factor CFA model with 8 variables and 2 factors, with covariance structure (θ) = + ψ, where = λ λ λ λ 0 0 0 0 0 0 0 0 λ λ λ λ , = 1.0 0.5 0.5 1.0 , The factor loadings λ were set to equal either 0.6 or 0.8. With factor loadings of 0.6, the correlations among variables within the same factor are 0.36, and the correlations among variables across different factors are 0.18. With factor loadings of 0.8, the correlations among variables within the same factor are 0.64, and the correlations among variables across different factors are 0.32. The generated continuous data were then categorized to create dichotomous data using a set of eight thresholds. The thresholds were chosen to be either mild or moderate. The mild set of thresholds was set to be (0.5, −0.5, 0.5, −0.5, 0.5, −0.5, 0.5, −0.5). This set of thresholds is relatively homogenous and cuts the continuous distribution very near its center. The moderate set of thresholds was chosen to be (−1, 0.8, −0.6, 0.2, −0.2, 0.6, −0.8, 1). This set of thresholds is more heterogeneous and the cut-off point is often far from center. This set of thresholds also creates some pairings of high opposite-signed thresholds, a difficult situation for most methods to handle. Sample size was set to N = 20, 50, or 100. With continuous data, sample sizes in the 20-40 range were studied by Nevitt and Hancock (2004). Thus, there were a total of 12 conditions in this 2 (λ = 0.6 or 0.8) × 2 (thresholds are mild vs. moderate) × 3 (N = 20, 50, 100) design. This design remained the same across the three studies. Although some SEM simulation studies have used 5000 or more replications per condition, the LPB method is computationally intensive and 500 replications were generated within each condition.
The goal of Study 1 was to examine the correlations and their standard errors produced by the OR method and the LPB method. Saturated model was thus fit to data. The goal of Study 2 was to assess the GLS estimates in both OR and LPB methods. The 2-factor model was fit to data, and GLS estimation was carried out with the weight matrix computed using either the OR or the LPB formulae. The goal of Study 3 was to examine the ULS estimates with robust standard errors and test statistics. The 2-factor model was fit to data using ULS estimation and the standard errors and test statistics were corrected using the asymptotic covariance matrix computed based on either the OR or the LPB formulae.
To compare accuracy of estimated parameters, average estimates of all parameters were computed as well as their empirical standard deviations. Additionally, root mean squared error (RMSE), which is the square root of the average squared deviation of the parameter estimate from its true value, was also computed. This measure may be preferred to the empirical standard deviation measure because it combines bias and efficiency, and is thus an overall measure of the quality of an estimator. The OR method relies on an approximation to the tetrachoric correlation and will produce biased parameter estimates. To compare accuracy of standard errors, estimated standard errors are reported, to be compared to both the empirical standard errors and to the RMSE. To evaluate the performance of the test statistics in Studies 2 and 3, empirical rejection rates are reported.
STUDY 1
The results for Study 1 are presented in Table 2. The four types of generated data are labeled as follows: Condition I represents mild (homogenous) thresholds; Condition II represents moderate (heterogeneous) thresholds; Condition A represents high factor loadings (0.8); and Condition B represents lower factor loadings (0.6). For readability, the results are combined by the size of correlation. In the A conditions, all population correlations were either 0.64 or 0.32. In the B conditions, all population correlations were either 0.36 or 0.18. The LPB method had trouble achieving convergence in some conditions. When fewer than 500 replications converged, the actual number of replications is noted in the last column of the table. The OR method converged for all replications under all conditions. LPB method did not converge in about 4% of the cases at the smallest sample size and with heterogeneous thresholds (the II conditions). For the converged replications, standard error estimates associated with the LPB estimator were sometimes enormous, leading to non-sensical average estimated standard errors. To deal with this problem, estimated standard errors greater than 100 were excluded from that column only. This occurred only at N = 20, and the number of replications that were thus removed is noted in the table. This problem largely went away when the sample size was N = 50 or higher.
Examining average parameter estimates, we find that both the OR and the LPB method underestimate the size of the correlations, and this bias is worse for (a) smaller sample sizes, (b) larger correlations, and (c) heterogeneous thresholds. The worst case is in Condition IIA for N = 20, when the average estimate of the correlation of 0.64 is 0.43 for the OR method and 0.45 for the LPB method. The LPB correlations are slightly closer to the true value but this difference is small. We have reason to believe that this downward bias occurs because of the addition of 0.5 to the frequency tables to remedy zero frequency cells. Without the 0.5 addition, the LPB method is extremely unstable and often cannot proceed with the computations. We advocate this small sample correction, therefore, despite its impact in terms of small sample bias. By N = 100, the average value of the estimated correlations is reasonably close to the true value.
Even though we report empirical standard errors, the comparison of empirical and estimated standard errors is technically only appropriate for the LPB method, because this method produces consistent parameter estimates. However, we find that empirically, the two methods do not differ much in terms of bias, and we proceed with comparing estimated standard errors to both empirical standard errors and to the RMSEs. For the LPB method, the empirical and the estimated standard errors are very close in most cases. However, the estimated standard error is always less than the actual empirical standard error. This is expected as estimated standard errors are based on asymptotic results. This pattern is reversed for the OR method. The estimated standard error for the OR method is always larger than the empirical standard error, which is actually appropriate given the bias. The difference is most pronounced for the largest correlation of 0.64 when thresholds are heterogeneous and sample size is small. The most appropriate measure of the overall quality of the estimator, combining both bias and efficiency, is the RMSE. The average RMSE difference between OR and LPB methods is −0.00004, which is slightly in favor of the OR method but is tiny. The largest difference is in Condition IIB at N = 20, where the difference in RMSEs is −0.01 (0.32 vs. 0.33). The RMSE difference is in favor of OR for smaller correlations. Based on number of converged cases, the RMSE measure of bias and efficiency of parameter estimates, and the quality of estimated standard errors, we conclude that the OR method slightly outperforms the LPB method, and this difference is most pronounced in smaller samples.
STUDY 2
The results for Study 2 are presented in Table 3. For readability, the results are combined by type of parameter: factor correlation or average factor loading. The population factor correlation was 0.5 in all conditions. In the A conditions, all loadings were 0.8, and in the B condition, all loadings were 0.6, so that an average is appropriate. The LPB method failed to converge in all replications for N = 20. Fitting even a small structural model with six parameters to such a sample size may be difficult. Notably, the OR method reaches convergence for the majority of the cases at N = 20. In addition to convergence problems, outlying cases presented more of a problem in this study. Whereas in Study 1, outlying cases were only observed for estimated standard errors, in this study outlying cases were observed for parameter estimates as well, and they occurred for both methods, making it difficult to conduct any meaningful comparisons. Thus, outlying replications were defined as any replication where the absolute value of any parameter estimate exceeded 100. The columns labeled "OR N" and "LPB N" report the number of cases used in the analysis, with the number of excluded outliers in parentheses. The difference is due to non-convergence. For example, in Condition IA, the OR method produced 488 converged cases, of which 2 were outliers, resulting in a total of 486 usable cases. The LPB method generally had more trouble with convergence than the OR method did, with the most pronounced difference occurring when factor loadings were high and thresholds were heterogeneous (Condition IIA). Only 168 cases converged for LPB method in this condition at N = 50, compared to 494 cases for the OR method. Convergence was generally worse for both methods when thresholds were heterogeneous. Examining average estimates of the factor correlation, we find that both methods overestimate its value, more so at the smaller sample sizes, and LPB is more biased than OR in all conditions. By N = 100, the estimates produced by the OR method are reasonable (the average estimated factor correlation is around 0.56-0.59 across the four conditions), but the bias of the LPB method is still substantial, with the average estimate ranging from 0.58 to 0.70. The bias of the LPB estimator is worse for heterogeneous thresholds. The averaged factor loadings are somewhat biased downward for the OR method at N = 20, and LPB is unable to produce any estimates at this sample size. The average factor loadings for higher sample sizes for the OR methods are very reasonable, but for the LPB method they are somewhat biased upward. The surprising conclusion, therefore, is that the OR method seems to be less biased, on average, than the LPB method, despite the theoretical prediction of the opposite pattern. This result illustrates the difference between asymptotic results and small sample behavior.
Because in smaller samples the bias of parameter estimates is substantial, the RMSE and the empirical standard error often differ significantly. It is thus unclear how to evaluate the performance of the estimated standard errors. However, comparing them to either the empirical standard errors or to the RMSE leads to similar conclusions: the estimated standard error is severely downward biased for both methods at smaller sample sizes. The empirical standard error is huge especially for factor correlations at N = 20 (OR only), and this is not reflected in the estimated standard error. The difference is substantial for factor loadings as well: it is in the magnitude of 0.1 for homogenous thresholds and 0.2 for heterogeneous thresholds. However, at N = 50, and when thresholds are homogenous, the OR method produces more comparable empirical and estimated standard errors. The LPB method still exhibits substantial bias. For heterogeneous thresholds, both methods require at least N = 100 before the estimated standard errors are reasonably similar to empirical standard errors. The difference in the RMSEs is in favor of the OR method in 14 out of 16 comparisons, and this difference is more pronounced for factor loadings. The OR method thus appears to be superior both in terms of convergence rates and the overall quality, using the bias/efficiency RMSE measure. Table 3 also presents the estimated coverage probabilities for the 95% confidence intervals of the two model parameters. The estimated coverage probabilities for the OR and LPB approaches are far below the nominal 0.95 level and neither confidence interval approach can be recommended with GLS estimation. Table 4 reports the rejection rates of the goodness-of-fit test statistics using the OR and LPB approaches with GLS estimation. Good performance is not expected here, as sample sizes are too small to have reached convergence to chi-square for the LPB statistic, and the OR statistic is not chi-square distributed because the OR estimator is not consistent.
The LPB statistic rejects too many models across all sample sizes and conditions. It therefore cannot be used to evaluate model fit in such small samples. The OR statistic performs poorly at N = 20, over-accepting models. At larger sample sizes, it performs nearly optimally for higher factor loadings (the A conditions), and over-rejects models for lower factor loadings (the B conditions), though not nearly as much as the corresponding LPB statistic. The goodness-of-fit test using GLS estimation performs better using the OR approach than the LPB approach.
STUDY 3
The results for Study 3 are presented in Table 5. The format of presentation is the same as for Study 2. The most noticeable difference as compared to Study 2 is that ULS estimation has led to drastically fewer convergence problems as compared to GLS estimation. Convergence is still worse for heterogeneous thresholds, but at least 85% of cases converged in all conditions even at the smallest sample size. There is generally no difference in convergence rates between OR and LPB methods, except at the smallest sample size of N = 20 for conditions with heterogeneous thresholds, when the LPB method produces quite a few more non-convergent cases. We implemented the same method of outlier deletion based on parameter estimates as in Study 2. Interestingly, the number of outlying cases that had to be excluded is somewhat more for ULS estimation than for the GLS estimation; it may be that cases that failed to converge under GLS are more likely to produce poor parameter estimates under LS. Even though ULS estimation is used in both approaches, they still differ because a different saturated estimator of the tetrachoric correlation was used in optimization. For small sample sizes (100 or less) ULS estimation is better than GLS estimation: the average ULS parameter estimates appear to be much more accurate than the GLS estimates from Study 2. There is not much difference across methods or across conditions in the estimates of the factor correlation. Interestingly, the factor correlation is almost always overestimated. The OR method does somewhat better, producing averages closer to the true value of 0.5. The average factor loading is again underestimated, but the bias is considerably less. Here, the OR method does better with higher factor loadings (the A conditions), while the LPB method does better with lower factor loadings (the B conditions).
Estimated robust standard errors with ULS estimation are much more similar to actual empirical standard errors than for the GLS estimates in Study 3. With ULS estimation, the OR method tends to match empirical and estimated standard errors a bit better for the factor correlation, while the LPB method does a bit better with factor loadings, excluding some cases where at N = 20 this method still produces very large standard errors. Interestingly, empirical standard errors across methods are nearly identical for the A conditions (higher factor loadings), but the LPB method is slightly more efficient in the B conditions (lower factor loadings). Returning to the RMSE as a global measure of estimator quality, we find that the differences in RMSEs are in favor of the LPB method in 19 out of 24 conditions; however, the largest difference in RMSEs is 0.016, and the average is 0.004, so that the advantage is minimal. Table 5 also presents the estimated coverage probabilities for 95% confidence intervals of the model parameters. The Type I error rates for a test that the parameter value equals zero is (100-Cov)/100 where Cov is the estimated coverage probability. The OR approach has estimated coverage probabilities that are closer to 0.95 and Type I error rates that are closer to 0.05 than the LPB approach. (2) "Phi" refers to the factor correlation (always 0.5). "L" refers to the factor loading (0.8 or 0.6). Conditions I and II correspond to factor loadings of 0.8 and 0.6, respectively; Conditions A and B correspond to mild and moderate thresholds, respectively. "Mean," "Est SE," "Emp SE," "RMSE," and "Cov" refer to the average estimated correlation, average estimated standard error, empirical standard error of estimates, the root mean squared error, and coverage of 95% CIs. "OR N" and "LPB N" refer to the number of converged cases with no outliers, used in all of the computations. In parentheses is the number of outlying cases (p > 100).
Conditions with *, **, and *** had additional 11, 1, and 10 outliers removed, respectively, when computing the average estimated SEs only, for the LPB method. Table 6 reports the rejection rates for the robust goodnessof-fit test statistics for both methods. These are Satorra-Bentler scaled chi-square statistics (Satorra and Bentler, 1994), which rely on the estimated asymptotic covariance matrix of sample correlations but do not require its inverse. Neither of these statistics is chi-square distributed, and both are approximations. The LPB statistic has mean that is equal that of a chi-square variate, while the OR scaled statistic is incorrect even in the mean because the original OR saturated estimator is biased. The ULS test statistic based on the OR method over-accepts models in nearly all conditions. The LPB robust statistic performs quite well, except in the A conditions (lower factor loadings and heterogeneous thresholds), where it over-rejects models.
Lastly, we briefly compare the results of Study 2 and Study 3. It is often said that GLS estimation is asymptotically efficient while ULS estimation is inefficient. Our results show that the word "asymptotically" is important in the definition of efficiency of the GLS estimator. Not only does the simple ULS estimator have the advantage of greater stability, as captured by high convergence rates, but it also appears to be more efficient in smaller sample studies studied here. The average difference in the RMSEs between the GLS and the ULS estimators is 0.036 for the OR method and 0.058 for the LPB method, so that the ULS estimator actually has less empirical variability around the true parameter values in the sample sizes studied. While these numbers are small, they nonetheless demonstrate that an estimator with best asymptotic properties is not necessarily the best estimator in practice.
DISCUSSION
This paper developed the statistical theory for a new structural modeling methodology based on a recently proposed OR estimator of the tetrachoric correlation (Bonett and Price, 2005), including both GLS and ULS estimation methods. We also extended the Lee et al. (1995) method to ULS estimation with robust corrections to the standard errors and test statistics. The algebra and statistics used to develop these extensions follow directly from Satorra and Bentler (1994).
The new OR methodology is easy to implement. It does not require integration as does the direct tetrachoric estimator and can be easily programmed. Its asymptotic covariance matrix also is easy to compute. The GLS OR approach outperforms the GLS LPB method in all conditions. Perhaps the main advantage of the OR method is that it converges more often than the LPB method, especially when sample size is small and/or there are moderatesize thresholds. Moderate-size opposite-signed thresholds often lead to breakdown of traditional methods. The ULS OR approach is largely equivalent to the ULS LPB approach.
Obviously, larger sample sizes will give more reliable parameter estimates as well as more powerful test results. The corrected test statistic (Satorra and Bentler, 1994) for the ULS LPB method worked well in much smaller samples than have recently been studied or recommended in categorical variable research (e.g., Flora and Curran, 2004;Beauducel and Herzberg, 2006;Nussbeck et al., 2006). Of course, at very small sample sizes the test statistic may not be very useful as it may lack power. However, the power issue notwithstanding, this robust statistic for the LPB approach maintains Type I error remarkably well.
In the conditions studied, there was no detectable greater bias in parameter estimates when the OR methodology was used. Asymptotically, there will be a bias, particularly when the correlations are very large and based on very dissimilar thresholds, as we illustrated with Mathematica5 plots. The values of correlations and thresholds we used in our simulations were chosen to represent more typical values that should show some minimal bias. Evidently, when the sample size is not too large and considered within a structural model based on many correlations with varying potential for bias, such bias is not necessarily visibly propagated to the model's fundamental parameters. Further research is needed to determine the sample size at which the LPB method performs better than the OR methods. Such a determination should, however, be made in a relative sense, since the very conditions that likely will yield problems for the OR methodsuch as extreme but opposite thresholds associated with positively correlated variables-also will cause traditional tetrachoric-based methods to break down. While under some circumstances no method may perform perfectly, we predict relatively favorable success for the OR method in moderate sample sizes.
We developed robust least squares approaches both for the OR and LPB methods based on the Satorra and Bentler (1994) methodology, and found that the ULS estimator and the associated robust standard errors were very good. Whether or not an estimator that may be more efficient asymptotically, such as the diagonally weighted least squares (DWLS) estimator, would perform better at such small samples as those studied here remains to be determined in future research. The ULS and the DWLS estimators have been found to perform similarly (Maydeu-Olivares, 2001), and ULS may be preferred (Rhemtulla et al., 2012). We suspect that the stability of ULS in small samples may be more important in practice than any theoretical and asymptotic improvements in efficiency.
In addition to CFA applications, the OR approach is promising in other applications. The OR computations are extremely fast and could have important applications in the exploratory factor analysis of questionnaires with a large number of dichotomous items. Zao (2007) developed accurate methods of constructing confidence intervals for the difference in Pearson correlations computed from the same sample. The Zao confidence interval approach can now be extended to OR tetrachoric approximations using the new results given in the Appendix.
The OR approach has now been implemented in the current version of EQS so that researchers can compare the results of this new method with other methods 1 . Programmers who want to develop OR methods for other SEM packages will now be able to check their results against the EQS results. | 9,688 | 2014-11-19T00:00:00.000 | [
"Mathematics"
] |
The Analysis of Inflation Rate Dynamic in Central and South-Eastern Europe ’ s States in the Context of The EU Accession
In the year 2004 ten states from Central and South-Eastern Europe joined the European Union. The majority of them have registered a significant consumer price increase in the year 2004. The goal of this paper is to examine the fundamental factors that have influenced inflation rate after EU accession and to analyse the causes of the inflation differential in EU member states which acceded in 2004. The impact of EU accession was different in analysed countries, the increasing of inflation rate in accession year being determined by the adoption of the Common Agricultural Policy, the harmonization of the structure and rates of indirect taxes, the introduction of the Common Customs Policy, the free movement of goods, the free movement of capital and the expected inflation. From the analyses we have done, we have remarked that the main cause of inflation differential has been the oil price on the international market, because of the different degree of dependence on oil import of these countries, but also on the different weight of electricity, gases and other fuels in the consumer basket.
Introduction
Following the rapid change of the political systems and the restructuration process of the economies, the countries from Central and Eastern Europe have begun the political and economic integration with the European Union.These countries have expressed their wish to accede to the European Union and realign their economies towards the West.Some of them have managed to attract important sums of foreign direct investments, most of which coming from the member states of the European Union.The European Union has supported this process through the conclusion of the European Agreements, which gave the institutional framework for the future integration, in terms of trade and other economic relations.
The European Council in Copenhagen in June the 22nd and 23rd 1993 agreed that the associated countries from Central and Eastern Europe who wish and satisfy the required political and economic conditions will become members of the European Union.
At the reunion of the European Council in Copenhagen in December 2002, the EU enlargement was decided with ten states, such as: Cyprus, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Malta, Poland, Slovakia and Slovenia.The decision to expand the European Union is an important step towards modelling of the future political, institutional and economical structures of Europe.
From a macroeconomic point of view, the enlargement of the European Union is a "profitable investment", because it has positive effects both on the economies of the new member states, and the European Union as a whole, especially, in the registration of high economic growth rates.
Inflation is extremely important, according to the political agenda of the EU, as can be seen from the main conclusions reached at the Helsinki seminar (1999): Accession countries therefore need to continue to implement monetary policies geared towards achieving and maintaining price stability, and to support this process with prudent fiscal policies and adequate structural reforms" (European Central Bank, 2000).
The paper is organised as follows: Section 2 discusses the evolution of inflation rate in transition to a market economy period, Section 3 estimates the impact of EU accession upon inflation rate in Central and South-Eastern Europe's states; Section 4 analyses the causes of inflation differential in Central and South-Eastern Europe's states; Section 5 presents concluding remark.
The Implications of the Transition to a Market Economy upon the Inflation
The majority of countries in transition from Central and Eastern Europe have struggled with a strong inflationary process, manifesting itself in the first years of transition as corrective inflation, following that persistent imbalance between supply and demand to change it into a structural inflation (Table 1).
The consumer price evolution in the transition economies from Central and Eastern Europe can be divided in four stages (ICEG European Center, 2002).In the first phase (until 1992) almost all of the countries have registered a corrective inflation associated with the liberalization of prices and of trade and the significant depreciation of the exchange rate.But this liberalization was not complete, the weight of goods in the consumer basket whose prices are administered being included between 13% and 24% in the countries from Central and Eastern Europe.Of the prices administered, the liberalization of energy prices for households represents one of the most important tasks which had to be finished until the accession.Reininger (2000) analyses the evolution of the energy price in four acceding countries from Central and Eastern Europe: the Czech Republic, Poland, Hungary and Slovakia in the 1992-1999 period.His results shows us that the energy prices in the candidate countries have reached the level of those in the European Union for industrial consumers, while the prices charged for households were low even in the 1998-1999 period.The author concludes that major adjustments are necessary in order to reach the level of the EU economies.The adjustment of energy prices has had a significant impact upon the consumer price index in the acceding countries, because they hold approximately 15% in the consumer basket (Backé, Fidrmuc, Reininger & Schardax, 2002).
The second period (1992/1993-1998) was marked by the decrease of the inflation rate at moderate levels.In the next stage, between the years 1998-2000, the inflationary process was strongly influenced by the crises in Asia and Russia, through the negative shock of demand (the decrease of external demand) and the positive shock of supply (the decrease of oil prices) which have tempered the inflation rate.In this period, the inflation rate has registered values with one or two numbers.The last price evolution stage which characterized these economies (from the beginning of the year 2001) was marked by a disinflationary process.
The registration of different inflation rates in the 1992-1998 period is due to the existence of macroeconomic imbalances in some economies and of the type of monetary and exchange policies adopted by each country.These countries have adopted, on different transition stages, different monetary policies, depending on the specific characteristics of each country.Despite all of these, we can observe a shift from monetary policy strategies based on the exchange rate, used, generally, at the beginning of the transition, to strategies based more on the inflation targeting, in a more advanced transition stage.
Another cause which has slowed down the disinflationary process was the high fiscal deficit due to the low level of income collection for the state budget and the unchanged maintenance of expenditure.The cover of the fiscal deficit through seigniorage has constituted the main inflationary source in some transition economies.
The inflation rate divergence during the ongoing transition is explained by the economic analysts through differences between the level of economic development and the capacity to uphold the reforms necessary to become a market economy.If in 1990 all the index equal to 1 in Slovakia, in 1991 the majority of the indices has increased with one, two or three; index score of Price liberalisation was 4. In this year inflation rate has increased by 50.4%, in 1992 the inflation rate returning to value in 1990.
In 1989 inflation rate in Slovenia was 1285.3%, while the index score was above 2 for Small scale privatisation and Price liberalisation.In 1990 index of Securities markets & non-bank financial institutions has increased to 2 and index of Price liberalisation has increased from 2.67 to 3.67, imprinted the increase of consumer prices by 551.6%.In 1993 the inflation rate has registered a significant decrease (from 207.3% to 32.9%), due the high values of reform indices.
We remark the positive correlation between reform indices score and inflation rate in the first years of transition.Disinflationary process has begun when reform indices has registered high values, which means that structural reforms have had a significant impact upon the evolution of inflation rate in the transition period.
Therefore, the acceleration of inflation rate in 1991 in all countries is justified by beginning of reforms necessary to become a market economy.
In contrast with the transition countries we have the member states of the European Union, whose inflation rates maintained themselves at relatively low levels, which suggests the differences between a mature market economy and a forming one.
The Inflationary Effects of Accession to the European Union
The years prior the European Union accession were marked by a significant disinflationary process in most of the accession countries, seeing as price stability is one of the requirements to join the EU.The registered progress by the accession countries starting with 2001 were due to the favourable shocks in supplying (the decrease of oil prices) and the deceleration of foods prices, in some countries, but also due to the policies used to combat inflation.This signifies the importance of price stability as statutory objective of the central banks in each accession country.Even if the inflation tempered, it is a major preoccupation for the monetary authorities, the evolution of inflation being an indicator in the convergence evaluation with the euro area.
The statistic data (Table 2) show that the negative aspect of EU accession refers to the accentuated increase of inflation rate in accession year ( 2004), the highest inflation rate registering in Latvia (from 2.9% to 6.2%), Poland (from 0.8% to 3.5%), the Czech Republic (from 0.2% to 2.8%), Lithuania (from -1.1% to 1.2%) and Hungary (from 4.7% to 6.8%).The alignment of some prices to the consumer goods and services and of some taxes, imposed in the context of accession to the level of the old EU countries created objective inflationary pressures in the new member states.
www.ccsen th an es on a global level and a better diversity of financial risks.On this way, the capital account liberalization can lead to economic growth and social welfare (Altar, Albu, Dumitru & Necula, apud Fisher, 1998).The rapid economic growth generated by the inflow of the EU funds created inflationary pressures upon internal demand.This process is reflected by the increasing of the inflation rate in Poland starting with 2007 (Office of the Committee for European Integration Department of Analyses and Strategies, 2008).The statistics data show that the analysed countries have registered significant increases of the real GDP after EU accession, until 2007, being an inflationary factor (Figure 1).6).According to the Eurobarometer 62 from May 2005, European citizens perceive a negative role of the European Union in the development of the inflation rate, which means that the expected inflation has negatively influenced the inflation rate in 2004.Less than 35% from the citizens of the analysed countries consider that the European Union has a positive role upon inflation, the most pessimistic being the Czech (13%) and the Polish (11%).
The inflationary effects of the accession process in the Central and South-Eastern Europe countries have been on a short-run, but the catching process, in order to adopting the euro, has been on a long-run, leading to inflation differential between member states.
The Causes of Inflation Differential in EU Member States from Central and South-Eastern Europe
The inflation rate from the European Union countries does not converge to a common level.Numerous research have analysed the convergence of the inflation rate in the European Union and the causes of differential among them.The inflation differentials between EU countries are generated by five factors.1).Maier (2004) analyses the inflationary consequences of price convergence of tradable goods in the accessing countries in 2004 and the future member states of European Union (Romania, Bulgaria, Turkey).Taking into consideration the prices differential of the tradable goods (they have 40% in consumer basket), the convergence of these prices is a source of inflation differential.When the price differential are hidden, there are the inflation differential.As a result of the convergence process of the price of tradable goods, the inflation in the new member states could be on average by 1.5 -3.5% higher than in the euro area.
The price level convergence towards a common level is a prime source of inflation differential, because the price level in the member states varies from one country to another.The countries where the price level is lower by 20% than the euro area average are exposed to a rate of inflation higher by 1% over the euro area (Horváth & Koprnická, 2008).
2).The real convergence, necessary for adopting the single currency, has a major impact upon the inflationary process, because the reduction of disparities in terms of GDP/capita is accompanied by price increase of services.The inflation differential between countries can be explained based on the Balassa-Samuelson effect, due to the lower development level in the accession countries vis-à-vis the euro area.Depending on the services weight in the consumer basket, the increase of their prices will have a higher or lower impact upon overall inflation.In the year 2009 the weight of services in the consumer basket in the analysed countries was between 25.33% (Lithuania) and 39.33% (Malta).De Grauwe and Skudelny (2000) have estimated on the long-run the effects of the differential of productivity between the tradable sector and non-tradable sector upon inflation rate in EU member states, highlighting that the impact of a productivity shock upon inflation rate can be substantial, meaning an increase of 8% in the inflation differential.
3).Another cause of inflation differential is given by the exchange rate.The impact of the exchange rate is reflected, firstly, on the import prices, then fuels the prices of tradable goods on the internal market and finally, the overall inflation.The biggest influence is exerted by currency fluctuations against the euro, given that the imports from the European Union have a significant share in the international trade of the member countries.The share of imports from European Union in the total of imports is approximately 60-80% in the analysed countries.But the fluctuations of the exchange rate depend on the type of exchange rate arrangement, therefore the inflation rate is not influenced by exchange rate variations in case of the Currency Board (Lithuania) or is influenced very little in case of other conventional fixed peg arrangements (Latvia).In case of floating (the Czech Republic, Poland), the variations of the Czech koruna and of the Polish zloty have a significant influence upon the inflation rate.Honohan and Lane (2003), investigating the causes of divergent inflation rates among EMU member countries in the 1999-2001 period, highlight that, despite the common currency, the exchange rate fluctuations have had a substantial impact on changes in inflation rate and inflation differentials in EMU.This is explained by the different degree of exposure of member states to trade outside the euro area.The divergent inflation rates have EU accession, which signifies the importance of price stability in accession countries.The impact of accession on the inflation rate was both positive (the introduction of the Common Customs Policy and the free movement of goods) and negative (the adoption of the Common Agricultural Policy, the harmonization of the structure and rates of indirect taxes, the free movement of capital and the expected inflation).
The statistical data shows us that inflation rates in European Union states members are not convergent, the causes are the following: the price level convergence, the manifestation of the Balassa-Samuelson effect, the exchange rate, the oil price shocks, the different weight of goods and services in the consumer basket.From the analyses we have done, we can notice that the exchange rate fluctuations, the dependence on oil import and the weight of Housing, water, electricity, gas and other fuels and of Transport in the consumer basket were the main causes of divergent inflation rates in EU member states from Central and South-Eastern Europe.Among the analysed factors, the impact of exchange rate depends on monetary policy strategies in the Member States, which means that monetary strategies heterogeneity explains inflation differentials in the European Union.
Also, the results highlights the diminishment of inflation differentials vis-à-vis the euro area after the accession, with the exception of the Baltic States.The explanation is given by the intensifying of the Balassa-Samuelson effect after accession and impossibility of appreciation of the national currency in these countries, the impact being only on the inflation rate.
Table 1 .
The average inflation rate in Central and Eastern European countries(%, 1989-1999) Source: European Bank for Reconstruction and Development, http://www.ebrd.com/pages/research/economics/data/macro.shtml#macro (Staehr, 2003)for Reconstruction and Development calculates nine reform indices: Large scale privatisation, Small scale privatisation, Enterprise restructuring, Price liberalisation, Trade & Forex system, Competition Policy, Banking reform & interest rate liberalisation, Securities markets & non-bank financial institutions, Overall infrastructure reform.An index score equal to 1 indicates no reform relative of a "standard" planned economy, while the maximum score 4.3 corresponds to a well-functioning market economy(Staehr, 2003).In the Baltic States, in 1991, inflation rate has registered values with 3 digits (172.2%-224.7%),while in 1992 inflation rate has accelerated to approximately 1000%.Regarding reform indices, in 1991 in Estonia majority of indices has registered the value of 1, while in Latvia and Lithuania only Price liberalisation indicated a index score equal to 2.67, rest of indices being equal to 1.In 1992 majority of indices scores has increased and in 1993 index of Price liberalisation was 4.33 in Estonia, Latvia and 4 in Lithuania.In this year inflation rate has decreased significant, following a disinflationary trend in the next years.Hungary has registered in 1989 the index score of Price liberalisation of 2.67, Trade & Forex system of 2, while the inflation rate was 17%.The highest value of inflation rate was in 1991 (35%); in this year, index score of Price liberalisation has indicated 4.33, Trade & Forex system has indicated 4.
In Poland the reform has begun early, in 1989 Small scale privatisation and Price liberalisation has indicated a index score equal to 2, respective 2.33.In 1990 only one of the indices (Securities markets & non-bank financial institutions) has indicated a score equal to 1, justifying the high inflation rate (585.8%) in this year.Disinflationary trend is correlated with the reform index score, these indices being above 2.67 in 1996.Small scale privatisation and Trade & Forex system have registered maxim value -4.33. | 4,142.6 | 2013-01-11T00:00:00.000 | [
"Economics"
] |
Progressive approach to eruption at Campi Flegrei caldera in southern Italy
Unrest at large calderas rarely ends in eruption, encouraging vulnerable communities to perceive emergency warnings of volcanic activity as false alarms. A classic example is the Campi Flegrei caldera in southern Italy, where three episodes of major uplift since 1950 have raised its central district by about 3 m without an eruption. Individual episodes have conventionally been treated as independent events, so that only data from an ongoing episode are considered pertinent to evaluating eruptive potential. An implicit assumption is that the crust relaxes accumulated stress after each episode. Here we apply a new model of elastic-brittle failure to test the alternative view that successive episodes promote a long-term accumulation of stress in the crust. The results provide the first quantitative evidence that Campi Flegrei is evolving towards conditions more favourable to eruption and identify field tests for predictions on how the caldera will behave during future unrest.
L arge calderas with areas of 100 km 2 or more are among the most-populated active volcanoes on Earth. They commonly show episodes of unrest at intervals of B10-10 2 years 1 and, although the minority end in eruption, each raises concern that volcanic activity might be imminent. An outstanding goal therefore remains to distinguish between pre-eruptive and noneruptive episodes.
With an unprecedented 2,000-year record of historical unrest and eruption 2 , Campi Flegrei provides key insights for understanding the dynamic evolution of large calderas. Three episodes of major unrest have occurred since 1950, in April 1950-May 1952, July 1969-July 1972and June 1982-December 1984. The last occasion of such behaviour occurred during the century before the caldera's only historical eruption in 1538 (refs 2,6). The current unrest is consistent with a reactivation of the magmatic system after 412 years and, hence, with an increase in the threat from volcanic activity to the caldera's population of almost 360,000 people, as well as to the three million residents of Naples immediately outside its eastern margin.
The largest ground movements recorded since Roman times have been concentrated near the modern coastal town of Pozzuoli at the centre of the caldera (Fig. 1). They have been dominated by a secular subsidence of c. 1.7 m a century 2 that has been interrupted by at least two extended intervals of net uplift, by about 17 m in c. 1430-1538 (ref. 2) and about 3 m since 1950 (refs 5,7). The pattern of recent uplifts has been radially symmetric, decaying to negligible movements at distances of about 5 km from the centre in Pozzuoli 3,4,8 . The cause of deepseated subsidence has to be confirmed, but the uplift is consistent with an elastic-brittle crust being pressurized at depths of about 2.5-3 km, near the base of the geothermal system (Fig. 1). Pressurization has been attributed to intrusions of magma, fed from a primary magma reservoir 7-9 km below the surface, and to disturbances of the geothermal system [8][9][10][11][12][13][14][15][16] . A sill geometry is preferred for the magma intrusions, because it requires the least overpressure to drive the observed magnitudes of uplift 15 , and inversions of geodetic data for the 1970-1972 and 1982-1984 uplifts yield intruded volumes of 0.02-0.04 km 3 , sill diameters of 4-6 km and mean thicknesses on the order of metres ( Fig. 1) 15,17 . Some 26,000 micro-earthquakes, or volcano-tectonic (VT) events, have been recorded across the central zone of the caldera during the current unrest ( Fig. 1), about 80% of which have been located at depths between 1 and 3 km, and o3% at depths of 4 km or more 3,[18][19][20][21] . More than 98% have had magnitudes of 2.5 or less 18 , indicating the predominance of slip along faults B0.01-0.1 km across, or ten to a hundred times smaller than the dimensions of the deforming crust. The crust therefore contains a distributed population of faults that are much smaller than the dimensions over which deformation has occurred.
To evaluate the potential for eruption, conventional studies have focussed on interpreting the major unrest of 1982-1984 (refs 4,8-16). Implicit assumptions have been that the next unrest will resemble its predecessor and, hence, that the shallow crust and magmatic system at Campi Flegrei has returned to conditions similar to those before 1982. A necessary implication is that the potential for eruption will also be similar to that during 1982-1984. However, recent measurements from a pilot borehole for the Campi Flegrei Deep Drilling Project suggest that stress has instead been accumulating in the crust 22 . Successive episodes of uplift may thus be driving the crust towards a critical stress for bulk failure and, hence, to a greater potential for eruption than previously assumed.
We here propose that the whole sequence of unrest since 1950 belongs to a single, long-term evolutionary sequence of accumulating stress and crustal damage. We apply a new model of elastic-brittle rock behaviour 23,24 to demonstrate that the increasing levels of VT seismicity associated with successive uplifts reflect changes in how the crust accommodates the strain energy supplied by magmatic intrusions. In particular, the behaviour follows the trend expected as the dominant factor controlling deformation changes from the elastic storage of strain energy to the release of that energy by faulting. Continuation of the trend will favour bulk failure in the crust and, hence, a greater potential for eruption than during previous emergencies. The results emphasize the importance of incorporating rock-physics criteria into strategies for evaluating the potential for eruption, especially at volcanoes that have yet to establish an open pathway for magma to reach the surface. They also highlight the need to raise awareness among vulnerable communities that a lack of eruption during recent emergencies cannot be used to infer that an eruption is also unlikely during a future crisis.
Results
Unifying episodes of unrest. After correction for secular subsidence 2,15 , the three major unrests at Campi Flegrei since 1950 have been characterized by initial uplifts for 2-3 years at mean rates of 0.3-0.6 m per year at the Serapeo in Pozzuoli, followed by minor corrected subsidence and subsequent recovery over 10-33 years (Fig. 2). The total corrected uplift at Serapeo has been c. 4 m (Fig. 2).
Rapid uplift occurs when the crust is extended over a newly intruded sill. We thus view the post-1950 unrest as equivalent to a total of 6-7 years of rapid uplift under increasing differential stress during intrusions, interrupted by decadal intervals of approximate stasis (Fig. 2). As a result, we expect the combined episodes of uplift to show the VT-deformation behaviour of an elastic crust with a large number of small faults 23,24 (Fig. 2).
Regimes of deformation. The ideal sequence of behaviour starts from lithostatic equilibrium. Initial deformation is elastic, for which strain is accommodated by deformation of unbroken rock around faults (Fig. 3). As the total strain increases, the crust's behaviour becomes quasi-elastic, for which most deformation is elastic, but a small proportion is accommodated inelastically by fault movement (which is recorded as VT seismicity). The proportion of faulting increases until it becomes the only mechanism for accommodating additional strain. At this stage, the strain stored elastically remains constant and additional deformation is controlled inelastically by fault movement alone 23,24 (Fig. 3; see equations (2)-(4) in the Methods section). In addition, the rock between faults is expected to become increasingly damaged, with a greater linkage in the inelastic regime among cracks much smaller than the faults themselves 25 . The sequence finishes with bulk failure and the potential escape of magma through a newly propagating fracture. The stored strain can then be released as the crust relaxes elastically around the newly opened fracture, as well as around the pressure source 26 that caused the precursory deformation.
The quasi-elastic and inelastic regimes are described by exponential and linear trends between inelastic and total deformation 23,24 (see equations (3) and (4) in the Methods section). The total number SN of VT events is a natural proxy for total inelastic deformation (not only vertical deformation), whereas the ratio Dh/R of maximum uplift to the horizontal radius of ground uplift is a field measure proportional to total deformation. In terms of field parameters, the exponential trend for the quasi-elastic regime becomes 23,24 where (SN) 0 denotes the number of VT events at the start of quasi-elastic behaviour and l ch is a characteristic displacement. Equation (1) uses the number of VT events to measure the amount of damage in the crust caused by an increase in differential stress, regardless of the source of stress. In extension, Dh/l ch ¼ S d /s T , the ratio of differential stress to tensile strength, which has a maximum value of 4 or 5.6 for eventual bulk failure in tension or in mixed tension and shear [27][28][29] . Here S d refers to the accumulated differential stress in the crust after stress relaxation due to fault movement has been taken into account. Among large calderas, equation (1) has been tested 24 at Rabaul, in Papua New Guinea, where a caldera-wide uplift of 2.3 m near its centre occurred for 23 years before an intra-caldera eruption in 1994. The uplift changed from quasielastic to inelastic when Dh/l ch ¼ 4 ( Fig. 3), with the quasi-elastic regime accounting for about 80% of the total sequence 24 . Similar behaviour has been observed at stratovolcanoes, but over shorter timescales of B0.1-1 year. For example, the quasi-elastic regime has continued for 80% or more of total sequences with durations of several months before flank eruptions at the frequently erupting volcanoes Kilauea 23,30 and Etna 31 , but for as little as 40% of the total 3-month sequences before the 2011 eruption of El Hierro in the Canary Islands 32,33 , which occurred after a repose interval of more than 200 years (Fig. 4).
The repeated similarity of VT-uplift trends for different volcanoes is remarkable. It reveals a fundamental similarity in the process of damage accumulation in the crust, regardless of site-specific structures and order-of-magnitude differences in ARTICLE dimensions and process timescales, and supports our hypothesis that bulk deformation at volcanoes can be approximated to that of a crust with a large and distributed population of small discontinuities.
Regimes of deformation at Campi Flegrei. The combined corrected uplift at Campi Flegrei (with intervals of stasis removed) also follows the classic elastic-brittle sequence for deformation in extension (Fig. 5). The crust behaves elastically for Dho1.75 m and, after a short transition, becomes quasi-elastic for Dh42.3 m with l ch ¼ 1 m (Fig. 5). The current corrected uplift of about 4.2 m gives Dh/l ch E4.2, which suggests that the crust is now approaching the transition from quasi-elastic to inelastic deformation (Fig. 5). Virtually the same VT-uplift trend appears when using uplift uncorrected for secular subsidence (Fig. 5). Background subsidence since 1950 has thus not had a significant effect on events differential stress accumulation in the shallow crust. The VT-uplift trend is similar to that observed at Rabaul and supports our view that the entire sequence of unrest since 1950 reflects a long-term accumulation of stress in the crust (Fig. 5). This interpretation is reinforced by the remaining interval of significant VT seismicity between 1972 and 1982 (Fig. 2), which was characterized by a gradual decay in VT event rate from 200 to 300 events per month and a minor corrected, ground subsidence and recovery of about 5% of the total uplift. This was followed by a new 30-month episode of corrected uplift that, for its first 8 months until March 1983, raised the ground at Pozzuoli by 0.4 m without significant seismicity. When VT events again occurred, they accelerated to rates of about 300-500 events per month in o3 months (Fig. 2).
The VT decay with minor ground movement resembles an extended aftershock sequence, in which fracturing and fault slip relax stresses in the surrounding rock under a constant bulk strain 34 . Before faulting can resume, the surrounding rock must be re-stressed elastically until the local stresses have returned to their values before relaxation 35 . Renewed uplift will thus occur without VT events until the stress necessary for continued faulting has been regained. From equation (1) The increase in differential stress during elastic recovery is proportional to the accompanying uplift; it is also numerically equivalent to the stress previously lost by seismic relaxation. To a first approximation, stress and uplift change in proportion when behaviour is quasi-elastic 23 , so that the fraction of total stress lost during relaxation is approximately the ratio of uplift during elastic recovery to total uplift before relaxation, that is 0.4 out of 2.5 m or 16%. This value is consistent with independent estimates of the proportion of energy lost by seismicity during 1972-1982. The proportion of total stress relaxed by seismicity is B(E s /E T ) 1/2 , where E s and E T are the seismic energy released and total energy supplied 36 . Extrapolating the analysis of the 1982-1984 unrest 19,20 , the seismic energy lost during 1972-1982 is B10 13 J, whereas the total energy supplied until 1972 is BpR 2 Zrg(Dh/3) B10 15 J, where the radius R and thickness Z of the deforming crust are 5 and 3 km, respectively, the mean crustal density r is 2,200 kg m À 3 , g is gravity, Dh is 2.4 m (for the interval 1950-1972) and Dh/3 is the mean uplift across the crust approximated to a cone. The estimated stress relaxation is thus B(10 13 /10 15 ) 1/2 or 10%.
For comparison, the seismic energy released since 1982 is c. 5 Â 10 13 J (Fig. 6) or about 5% of the energy supplied during the (Figs 2 and 6).
Corrected subsidence without seismicity is favoured by a contemporaneous decrease in either or both the differential stress applied to the crust and the pore-fluid pressure within the crust. Differential stress is generated by magma overpressure, which can be decreased by reducing the volume of magma by gas loss on vesiculation or by thermal contraction on solidification. At Campi Flegrei, the magmatic sills causing each episode of unrest have thicknesses of metres. These solidify within years 15 and so are not able to accommodate movements over 16 years. Reductions in differential stress through magmatic action are thus unlikely controls on the corrected subsidence since 1984. The corrected movement, however, can be accommodated by the relaxation of pore pressure in the geothermal system by the diffusion of pressurized fluids 9,16,19,37 . Diffusion is suggested also for the uplift since 2000, because both uplift and subsidence have occurred at similar rates and lengths of time, with variations in the influx of magmatic fluids from depth being a preferred control on the geothermally driven ground movement [37][38][39][40][41][42] .
Viewed as a single sequence, therefore, unrest at Campi Flegrei can be explained by the evolving deformation of an elastic-brittle shallow crust. This first quantitative interpretation of the caldera's long-term behaviour shows that there is no need to require significant non-brittle flow due to viscous 19 or plastic 43 movements at timescales of B10 years. Thus, during the 1969-1972 uplift, the bulk behaviour evolved from elastic to quasi-elastic and may now be close to the next transition from quasi-elastic to inelastic. The VT-uplift trend, in particular, is following that observed at Rabaul before its 1994 eruption and Ground oscillation since 1984 is consistent with a drop (6-7) to lower pore pressure (blue dashed curve) and subsequent recovery (7)(8). Deformation occurs in the elastic (white shading) and quasi-elastic (yellow) regimes, and approaches the inelastic (magenta) regime. (b) The variation of total number of VT events with combined corrected uplift during the rapid uplifts of 1950-1952, 1969-1972 and 1982-1983 (1i þ 2i þ 3i in Fig. 2) shows a short transition from elastic to quasi-elastic behaviour. In the quasi-elastic regime, the VT event number increases as SN ¼ 295 exp (h/l ch ) and l ch ¼ 1 m (r 2 ¼ 0.99). A return to the main VT-uplift trend (blue solid curve) may coincide with the emergence of inelastic crustal deformation. suggests that long-term stress accumulation may be a general feature of unrest at large calderas.
Discussion
Our interpretation predicts that if the current uplift continues to a corrected value of about 4.5 m at Pozzuoli, the crust in Campi Flegrei will have returned to the stress conditions that prevailed in 1984 at the end of the last major uplift (Fig. 6). We would then expect any additional uplift to continue the VT-deformation trend interrupted in 1984 and, hence, to be accompanied by a significant increase in VT seismicity, regardless of the specific mechanism that is increasing the applied differential stress. Should the rate of uplift also return to the rapid values of 1982-1984, we would further expect the onset of VT event rates as high as 800-1,000 per month. Rapid uplift, however, is not essential. At Rabaul, for example, the approach to eruption was preceded by 2 years at a maximum recorded uplift rate of about 0.15 m per year, which was about three times smaller than the peak rates that had been registered 10 years previously 24 . A return to the long-term VT-deformation trend at Campi Flegrei may thus occur at uplift rates and VT event rates slower than observed during previous emergencies. The indirect stress ratio Dh/l ch suggests that the differential stress accumulated in Campi Flegrei's crust is about four times its tensile strength (Fig. 5) and so is approaching the transition from quasi-elastic to inelastic deformation regimes. An increase in linkage among small-scale cracks between faults is also expected to occur at the transition to inelastic behaviour. This would favour an increase in bulk permeability and, hence, a faster escape of fluids from the geothermal system, which is consistent with the onset of corrected subsidence in 1984. A return to the long-term VT-deformation trend may therefore be characterized by inelastic behaviour under a constant maintained stress, for which increases in total deformation are determined by additional fault movement (Fig. 3). Such a transition would be associated with VT event rates increasing in proportion to the rate of uplift.
The few field data available for large calderas and stratovolcanoes suggest that the quasi-elastic regime contributes between 40 and 80% of the total precursory deformation (Fig. 4). Assuming this range, a corrected uplift of 4.2 m at the end of quasi-elastic behaviour at Campi Flegrei (Fig. 5) indicates that the inelastic regime may continue until reaching a total corrected uplift of between 5 and 10 m before an eruption can be expected. A transitional value of 4 for Dh/l ch assumes that bulk failure occurs in tension. The value increases towards 5.6 as the failure mechanism involves tension with an increasing component of shear [27][28][29] . Increasing shear could thus raise the transitional uplift by some 25% and, hence, yield a total corrected uplift of between 6.25 and 12.5 m before an eruption.
The estimated limits on total uplift are smaller than the 17 m of caldera-wide uplift inferred to have occurred during the century before the caldera's last eruption in 1538 (refs 2,6). A greater total uplift would be favoured by a larger uplift before the transition to inelastic behaviour, without necessarily changing the proportion of uplift in the two deformation regimes, or by a greater proportion of uplift in the inelastic regime alone. A larger transitional uplift would be favoured if the pre-1538 intrusions had been required to break connected horizons of rock stronger than those providing resistance today (to increase the uplift required before tensile failure). Otherwise, the difference may indicate that mechanisms for reducing effective bulk rigidity, such as bedding-plane slip 43,44 , become significant as deformation proceeds (to enable greater uplift for a given applied stress); that, at timescales of B10 2 years, non-brittle (and seismically quiet) processes, such as viscous flow 19 , also contribute to deformation (to permit greater uplift than from elastic-brittle behaviour); that additional intervals of fault slip under constant strain reduce the accumulated stress (to enable a greater total uplift before the failure stress is eventually achieved); or that fluid pressure in the hydrothermal system has become large enough to contribute significant uplift.
Although these mechanisms would favour a greater proportion of inelastic deformation at Campi Flegrei than has been recorded elsewhere, none of them guarantees that an uplift of 17 m needs to occur before eruption. The onset of inelastic behaviour thus represents a significant increase in the potential for volcanic activity and provides a new criterion for defining levels of alert. In common with other volcanoes for which few or no precursory data are available from previous eruptions [45][46][47] , expert elicitation is a favoured method for evaluating unrest at Campi Flegrei 48 . The method estimates the probability of an eruption given the occurrence of selected precursory criteria, such as critical rates or amounts of ground uplift. By necessity, the critical values are determined empirically from volcanoes elsewhere and so are not well constrained 49 . However, the VT-deformation trends (Fig. 3) are generic and can be applied in the absence of historical information. The change from the quasi-elastic to inelastic regime therefore complements probabilistic evaluations by providing an objective criterion for increasing alert levels.
At Campi Flegrei itself, an additional obstacle to effective warning is a low public awareness of volcanic hazard compared with the perceived threat from microseismicity 50,51 . The persistent VT seismicity in 1983-1984 damaged buildings throughout Pozzuoli and triggered the evacuation of some 40,000 people 52 . Compared with emergencies since 1950, therefore, a new episode of rapid uplift is likely to present a greater hazard from persistent ground shaking, as well as a significant increase in the potential for eruption. Past experience of rapid uplifts is thus unreliable for perceiving the level of risk during a future emergency. The residents of Campi Flegrei have experienced three episodes of rapid uplift over seven decades without an eruption. This favours the view that rapid uplifts are poor indicators of imminent volcanic activity. Recognizing the long-term evolution in precursory behaviour is essential for moderating misplaced confidence in non-eruptive outcomes and for delivering improved warnings to the public.
Methods
Quantifying regimes of elastic-brittle deformation. The VT event rate is controlled by stresses around the peripheries of faults, where damage zones develop with dimensions much smaller than the faults themselves 23 . The mean differential stress across damage zones S dz ¼ S d þ S tf , where S d is the net applied differential stress and S tf is the mean difference between the stress gained by transfer from adjacent crust relaxing during faulting and the stress lost by creating and opening discontinuities in the damage zones. Increases in S dz are thus limited by increases in either S d or in S tf , corresponding to rates of faulting limited by increases in bulk stress or in local stress transfer. By inspection, therefore, quasi-elastic deformation is associated with bulk-stress faulting and inelastic deformation with stress-transfer faulting.
From thermodynamics, the probability that damage zones fracture is given by exp [ À (S st À S dz )/S ch ], where S st is mean rock strength, S st À S dz is the additional stress required for bulk fracture and the characteristic stress S ch is the maximum equivalent stress available from stochastic fluctuations in atomic configuration. The mean rate of inelastic deformation with supplied differential stress, de in /dS sup , is then 23 where the attempt frequency (de in /dS sup ) af is the frequency with which the stochastic fluctuations in stress attempt to break the damage zones. S sup is the differential stress supplied before taking account of stress drops due to fault movement, whereas S d is the maintained stress after the seismic stress drops have been removed. The value for S ch depends on the style of deformation. Failure in compression is limited by shearing between atoms, but in extension by the tearing of bonds. As reflected by macroscopic properties, S ch in compression depends on temperature and effective confining pressure, for which S ch S* ¼ (3FT þ P c À P p )/3, where T is absolute temperature (K), P c and P p are the confining and pore-fluid pressures, and F is the molecular energy per unit volume per temperature 23 .
In extension S ch defines the tensile strength s T of unbroken rock and is effectively constant for the pressures and temperatures in the crust beneath volcanoes. Equation (2) shows that the rate of inelastic deformation with stress depends on the difference between S d þ S tf and S st . Initial implementations 53 of the model considered the limiting condition for which the stress difference is controlled by a reduction in S st , through processes such as chemically enhanced stress corrosion. These were subsequently generalized 23 to conditions for which the rate of stress drop by faulting balanced the rate of applied stress increase without the need to invoke chemical rock weakening; in this case, the rate of inelastic deformation is determined by increases in S tf .
In the quasi-elastic limit, S d ES sup ¼ Ye, where Y is Young's modulus, and S tf is negligible. Assuming that the stress distribution about the mean is constant and that the total number, SN, of VT events is proportional to inelastic strain (SN ¼ C de in ), integration of equation (2) yields where e st ¼ S st /Y, e ch ¼ S ch /Y and SN st is the number of VT events when S d ES st at the start of the inelastic regime. In this example, the failure strain e st is assumed to be constant, which implies that any weaking processes affect both failure strength and Young's modulus in the same proportion.
In the inelastic limit, S d is held approximately constant, because the mean rate of stress drop by faulting balances the mean rate of stress supplied by the pressure source. Additional increases in total strain are controlled by inelastic deformation alone (de in /deE1), for which where SN in,0 (ZSN st ) is the number of VT events before the start of the inelastic regime.
Field proxies for bulk deformation. Assuming a constant geometry of deformation, common field measures of bulk strain include ground tilt, uplift and horizontal displacement. The preferred choice depends on the form of monitoring network and on which parameter yields the largest variation. Maximum uplift Dh is the chosen parameter at Campi Flegrei, so that Dh ¼ K e and l ch ¼ K e ch , where K is a constant of proportionality. With these substitutions, equation (3) yields equation (1) in the main text.
Data availability. All relevant data are available from the authors. | 6,258 | 2017-05-15T00:00:00.000 | [
"Geology"
] |
Anisotropic monoblock model for computing AC loss in partially coupled Roebel cables
When exposed to time-dependent magnetic fields, REBCO Roebel cables generate AC loss resulting from both magnetic hysteresis and induced inter-strand coupling currents. Until now, the AC loss has been computed in a two-dimensional approximation assuming fully coupled or decoupled strands, and a finite inter-strand resistance could be simulated only with three-dimensional models. In this work, we propose a homogenization procedure that reduces the three-dimensional geometry of the Roebel cable to two dimensions, without ignoring connections between the strands. The homogenized cable consists of two parallel ‘monoblocks’ with an anisotropic resistivity. The proposed model enables computation of AC coupling loss without the need for complex three-dimensional simulations. For experimental validation, a Roebel cable with soldered strands was prepared. The inter-strand resistance was determined by applying a transverse current and measuring the voltage profile. Additionally, the AC magnetization loss of the cable was measured in fields of 1 to 50 mT with frequencies of 1 to 2048 Hz using a calibration-free technique. With the measured inter-strand resistance as input parameter, the monoblock model gives a good estimate for the AC loss, even for conditions in which the coupling loss is dominant.
Introduction
The Roebel cable is a way to make fully transposed cables of REBCO coated conductors [1] (REBCO = rare-earth metal barium copper oxide). Short lengths of REBCO Roebel cable were first demonstrated by Karlsruhe Institute of Technology (KIT) [2] and Industrial Research ltd. (IRL) [3]. IRL later Original Content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI. automatized the cable assembly process and developed methods for quality control [4,5]. Roebel cables have been considered for use in high-field accelerator [6,7] or fusion magnets [8], and also for power applications such as transformers [9,10]. A unique property of REBCO Roebel cables is that the strands are transposed, but not twisted, and the anisotropic properties of coated conductors are retained. This enables magnet designs that exploit these anisotropic properties, such as 'aligned-block' coils, which use the maximum critical current density by aligning the conductor with the magnetic field [11]. The strands in REBCO Roebel cables are usually not insulated in order to create alternative paths for the current in case of defects. A disadvantage is that a time-dependent magnetic field can induce inter-strand coupling currents, which lead to an increase in AC loss [12]. In order to predict the level of AC loss, electromagnetic modelling of Roebel cables is required. Several different approaches are already described in literature: often a cross-section of the Roebel cable is extended to infinity in order to reduce the problem from three to two dimensions [13][14][15][16][17][18]. More advanced three-dimensional models also exist [19][20][21][22]. However, none of these models take into account a finite resistance between the strands, and thus cannot predict coupling losses. A network model developed by van Nugteren et al simulates the three-dimensional cable taking into account coupling between the strands [23]. To our knowledge this is the only numerical model for Roebel cables that can predict AC coupling losses.
In this work, we aim to compute the magnetization AC loss in a Roebel cable with finite inter-strand resistance using a two-dimensional model, and evaluate it with an experiment. An approximation using a homogenization procedure is applied to evaluate the cable geometry in 2D including connections and interconnects between strands. This is done using a 'monoblock' model with anisotropic resistivity. The AC loss predicted by the monoblock model is compared to measurements on a cable with controlled inter-strand resistance. The magnetization AC loss is measured over a wide range of frequencies (1-2048 Hz) and amplitudes up to 50 mT. Due to limitations of the set-up, we have not been able to validate the model for higher amplitudes.
Monoblock model
A Roebel cable is a complex three-dimensional structure. The aim is to simplify the cable to a two-dimensional geometry, without neglecting the influence of finite resistance between the strands. We consider a cable with N s strands and a transposition length ℓ t (see figure 1(a)). The strands have a thickness d s and a width w s . The width of the entire cable is given by W. The first step of the simplification is neglecting the influence of the cross-overs, where the strand go from one cable half to the other. The result is a 'tilted stack model', as shown in figure 1(b). Since the strands move up and down along the cable length, they have an angle with respect to the longitudinal x-direction. As seen from the sketch, this angle is given by: The second step is homogenizing the tilted stack into a uniform 'monoblock' (figure 1(c)). The monoblock is invariant in the x-direction, and thus a two-dimensional electromagnetic calculation in the yz-plane suffices. The monoblock has an anisotropic resistivity: In the direction parallel to the conductor, the block behaves as a superconductor, while perpendicular to the conductor the finite inter-strand resistance causes resistive behaviour. In general, the electric field and current density in the frame of the conductor are related by: where E ∥ , ρ ∥ , J ∥ and E ⊥ , ρ ⊥ , J ⊥ are the electric field, resistivity and current densities in the directions parallel and perpendicular to the conductor, respectively. The conductor frame is rotated with respect to xz-frame by an angle α in clockwise direction. The current densities in both frames are thus related by a rotation matrix: In the same way one can find for the electric field: Substitution of equations (2) and (3) into (4) results in the following E(J) relation in the xz-frame: If ρ ∥ = ρ ⊥ , the diagonal entries of the resistivity matrix are equal and the off-diagonal elements of are zero, and thus Ohm's law is retrieved.The perpendicular resistivity ρ ⊥ is a constant related to the inter-strand resistance, and can be experimentally determined. This will be discussed further in section 4.1. To simulate the behaviour of a superconductor, a non-linear power-law is used for the parallel resistivity: In this equation, J c is the critical current density of the monoblock, E c is the electric field if J ∥ = J c , and n is a nonlinearity index. The value of E c needs to match the critical field used to determine J c from measured IV-curves. We used a conventional value of E c = 10 −4 V/m. Once the current distribution and electric field have been found, the power density can be obtained from the dot product: The first term is related to currents in the plane of the superconducting tape, and will be referred to as hysteresis loss. The second term results from currents between the strands and will be called coupling loss. The loss per cycle is found by integration of the power density over the monoblock cross-section and a full cycle of the magnetic field in time.
Integral formulation
The monoblock model will now be used to compute the AC loss in a time-dependent but spatially uniform magnetic field perpendicular to the x-axis, assuming zero transport current. The current distribution is found numerically using an integral form of Maxwell's equations [24,25]. The advantage of this formulation is that, unlike in a differential form, no boundary conditions are required, and the equations have to be solved only in the conductor volume. This makes the method convenient to implement for simple geometries such as the rectangular monoblock considered here. This section will give a short description of our implementation of the method, which is described in more detail in our previous publication [26]. The vector potential in the x-direction can be expressed as follows: In this equation, A ext is a the vector potential related to the applied field and J(y, z, t) is the current density in the xdirection. The conductor cross-section is divided in rectangular elements numbered i = 1, 2, …, N, each carrying a uniform current density J i (see figure 2). The vector potential can now be written as a sum: where the elements of K are given by is a point in the center of element i. The expression for K can be evaluated by substituting u = y i − y ′ , v = z i − z ′ and repeatedly integrating by parts: As seen from figure 2, the problem is symmetric after a rotation of 180 • around the x-axis. The rotation changes the sign of the magnetic field and thus the following relations hold: By taking advantage of this symmetry, only one half of the cable needs to be simulated. This reduces the number of unknowns by half and improves the computation time by rougly a factor four (see table 1). The matrix K taking into account the symmetry becomes: yi+aj u=yi+bj zi+cj v=zi+dj (16) Using the fact that ∂A/∂t = −E − ∇ϕ and assuming that the gradient of the electric potential ∇ϕ is zero, equation (9) can be rewritten to: This system of ordinary differential equations is numerically integrated to find the current distribution in time. For the simulations in this work, Matlab's built-in solver 'ode15s' was used [27].
3.2.
Evaluation of the right-hand side of equation (17) In order to evaluate the right-hand side of (17), the electric field must be computed from the current distribution. This is done using the anisotropic E(J) relation resulting from the monoblock model (equation (5)). A difficulty is that the monoblock model considers two components of the current, while the numerical approach solves for the x component only.
To overcome this problem, inductive effects in the direction perpendicular to the conductor are neglected. In other words, it is assumed that transfer current between two strands is always homogeneously distributed over the width of the contact surface. The assumption makes it possible to eliminate the perpendicular current. However, it is not valid for very high frequencies at which skin effects influence the distribution of coupling currents. The effect on AC loss at such high frequencies will be discussed in further in section 4.3. From equation (3) we have: By integration over the strand width we find: where y 1 = W 2 − w s and y 2 = W 2 . The first term is just w s J ⊥ under the assumption of uniform current transfer. The third integral is zero because no net current can flow in the vertical direction. Therefore the perpendicular current is described by: By solving equation (18), J z can now be expressed in terms of J x : Now that J z is known, E x can be computed using equation (5): The second term of the right-hand side of equation (17) is the external contribution to the vector potential. The external contribution is chosen to be: A ext (y, z, t) = B 0 sin(ωt) (z cos(β) − y sin(β))x (23) so that: B ext (y, z, t) = ∇ × A ext = B 0 sin(ωt) (cos(β)ŷ + sin(β)ẑ) (24) Thus β is the angle between the applied magnetic field and the y-axis. Note that this choice of A ext respects the rotational symmetry and the invariance along x.
Sample preparation
In order to study the effect of partial coupling more closely, we prepared a Roebel cable whose strands are soldered together. The cable properties are listed in table 2. A 4-mm-wide coated conductor manufactured by SuperPower (SCS4050-AP) was used. As specified by the manufacturer, this wire has a minimum self-field critical current of 109 A and an average of 111 A at 77 K. Six strands of 1.9 mm width were prepared by laser-cutting. A short transposition length of 50 mm was chosen so that three full transpositions could be measured in the limited sample area of the AC loss set-up. The critical current of the separate strands was measured in a liquid nitrogen bath (77 K). The average critical current was 50.2 ± 2.8 A and the n-value was 23.7 ± 1.4. The average critical current per unit width was 27.8 A/mm before and 26.4 A/mm after cutting, a decrease of 5%. The strands were then degreased and pre-soldered with In52Sn48 using rosin flux. The pre-soldered strands were then assembled into a cable, and the cable was once more heated to 170 • C under slight pressure to solder the strands together. The monoblock critical current density was approximated by normalizing the strand critical current to the cross-sectional area which gives J c = 264 A/mm 2 . The field dependency of the critical current was not taken into account.
Inter-strand resistance
Roebel cables, like Rutherford cables, consist of a single layer of transposed strands. Even though the shape of the strands is very different, both cable types are topologically the same. Inter-strand connections in Rutherford cables are commonly described using a network model with two parameters [28][29][30], as shown in figure 3. R c is the resistance at the point where a strand in the lower touches one in the upper layer. This connection occurs twice each transposition length for any pair of strands. A resistor of R a connects adjacent strands and occurs 2N s times in each transposition length. In order to adapt this inter-strand resistance network for the continuum model, we introduce length-averaged values for the resistance between adjacent and non-adjacent strands: Both quantities have units of Ω m. The resulting inter-strand resistance network for a cable of length ℓ is shown in figure 4. The values of ρ a and ρ c were determined by applying a current between strand 3 and strand 6 and recording the voltage profile. This measurement was done at 77 K and therefore the strands were in the superconducting state. Because the applied current of 10 A is much lower than the critical current, the strands can be assumed to be equipotential planes. The strands can therefore be represented by the nodes of an electrical network, as shown in figure 4. By least-squares fitting of the network model to the measured voltage profile, inter-strand resistance values of ρ a = 0.265 µΩ m and ρ c = 1.07 µΩ m were found. The monoblock model considers current transfer between adjacent strands only. These are connected by ρ a and ρ c in parallel, thus the unit-length resistance between adjacent strands is (1/ρ a + 1/ρ c ) −1 . By multiplying this value with the strands width w s , the surface contact resistance is obtained. The perpendicular volume resistivity of the monoblock is found by multiplying this contact resistance with the number of contacts per unit length 1/d s .
AC loss measurement
The AC loss per cycle in a sinusoidal field was measured at 77 K using a calibration-free technique [32]. The applied magnetic field was uniform in space and perpendicular to the wide face of the cable (β = 90 • ). The measurements as well as the calculations using the monoblock model are shown in figure 5. Hysteresis loss is frequency independent according to the critical state model [33], although it can have a slight frequency dependence when a finite steepness of the transition is taken into account [34,35]. Coupling currents are expected to have a stronger frequency dependence of the form ω/(1 + (ωτ ) 2 ) [36], where τ is a decay time constant. To be able to detect the frequency dependent coupling loss, the measurement is done over a frequency range as wide as possible.
At the lowest amplitude of 1 mT, the AC loss increases by an order of magnitude as the frequency goes from 1 Hz to 1 kHz. This is seen in both the measurement and the calculation. Below penetration, the hysteresis losses are proportional to the third power of the magnetic field amplitude, while the coupling losses only increase with the amplitude squared. This explains the lower frequency dependence at higher amplitudes.
There is a reasonable agreement between calculation and measurement for frequencies up to 1 kHz. At higher frequencies, a decrease in AC loss is observed in both measured and predicted AC loss. Such a peak in AC loss of multifilamentary conductors can be explained using skin effect theory [37]. Due to limitations of the set-up, we could obtain only two measurement points at on the right side of the peak. The model could therefore not be adequately validated for these conditions. In figure 6, the different contributions to the total loss can be seen in more detail. At the lowest frequencies (f < 10 Hz), the loss is dominated by hysteresis loss. The hysteresis loss has a very slight frequency dependence due to the finite nvalue [34,35]. The coupling loss increases linearly with frequency and becomes the dominant contribution for frequencies above roughly 100 Hz. At the highest frequencies, the coupling loss is limited by a skin effect [13]. We observe a remarkable drop of the hysteresis loss near 1 kHz. Above this frequency the strands become effectively coupled, which leads to lower hysteresis loss at field amplitudes below penetration [14]. This effect is illustrated in figure 7, which shows the current distribution and magnetic field at frequencies of 100 Hz and 10 kHz. At 10 kHz, coupling currents shield the center from the external field. As a result, very little flux enters the superconductor from the cable center, and hysteresis loss is reduced.
Summary and outlook
The monoblock approximation reduces the three-dimensional cable geometry to much simpler two-dimensional problem. A finite resistance between strands can be introduced into the monoblock model by using an anisotropic resistivity. In this way, induced coupling currents and associated losses can be computed. We have used an integral formulation of Maxwell's equations to numerically solve the monoblock model.
A Roebel cable with soldered strands was prepared to validate the model. The inter-strand resistance was measured by applying a current between opposite strands and recording the voltage profile over all strands. From the measured interstrand resistance, an equivalent perpendicular resistivity of the monoblock of ρ ⊥ = 4.04 µΩm was found. The AC magnetization loss of the cable was measured using a calibrationfree technique in magnetic fields amplitudes ranging from 1 to 50 mT. The measured loss had a reasonable agreement with the monoblock model for frequencies up to 1 kHz. Due to limitations of the set-up, it was not possible to validate the model at higher frequencies, at which the loss characteristic of the sample may be influenced by skin effects.
The approximation with a uniform but anisotropic material may be used as well for coupling loss calculations in other structures involving tilted stacks of coated conductors, such no-insulation racetrack or pancake coils. This will be the topic of a future investigation. | 4,400.2 | 2020-06-03T00:00:00.000 | [
"Physics"
] |
The Local Brewery: A Project for Use in Differential Equations Courses
Abstract We describe a modeling project designed for an ordinary differential equations (ODEs) course using first-order and systems of first-order differential equations to model the fermentation process in beer. The project aims to expose the students to the modeling process by creating and solving a mathematical model and effectively communicating their findings in a technical report. The students are required to produce a simple first-order differential equation and find the solution, given varying initial conditions. The students are also required to analyze a more complex, nonlinear ODE system model of the fermentation process. In dealing with the nonlinear system of equations, we provide the students a Mathematica file to reduce the time spent developing the model, but allow for more time to interpret the model. We also share some perspectives on the implementation of the project, provide alternative implementations, and possible extensions to the project.
INTRODUCTION
This project was offered to freshmen students at the United States Military Academy during the MA255 course, Mathematical Modeling and Introduction to Differential Equations. The course is the second of a two-semester Advanced Mathematics Program, taken by roughly one-quarter of the freshmen cohort class. The course covers such topics as first-order differential equations, second-order equations, the Laplace transform, series solutions to differential equations, systems of first-order differential equations, and numerical methods, such as Euler's method, improved Euler's method, and Runge-Kutta method. In addition to gaining a robust mathematical body of knowledge, three of the main outcomes of the course are for the students to go through the modeling process, to create and solve mathematical models, and to effectively communicate their findings in a technical report. The project was given during the last part of the course and students were given additional class days to complete the project. In total we expected that students should spend 12 hours on the project both in and out of class. The actual project is available upon request via email to the authors.
The scenario we set up for the students was that of a consultant for a new local craft brewery in the local township. The micro-brewery, or craft brewery, is a small commercial beer brewery, usually limited to an output of a certain number of barrels per year. These breweries usually only distribute their product to a small region as opposed to larger breweries, who may distribute nationally or internationally. The 2012 report from the Brewers Association showed that the total market of beer sold in the United States was 200,200,000 barrels, where as total craft beer sales were only 13,200,000 barrels [1]. As an important side note, we recognize that the subject of the project, brewing beer, although interesting to undergraduates, is also a source of contention on college campuses and it can be a sensitive issue talking about alcohol with underage students. However, rather than avoiding the problem, this project can provide an opportunity for teachers to discuss the negative aspects of consuming alcohol. As with many mathematical applications, the mathematics does not see the social implications surrounding the problem; a good problem solver must look at a problem through the lenses of many different disciplines. Discussing the societal impacts of alcohol could be added as part of the project write-up to help students reflect on this aspect of the problem they are solving.
The general format of the project consists in the students developing a simple model, gaining information from this model, and then using a more complex model to extract additional information. This is essentially the iterative mathematical modeling process that we espouse in the Department of Mathematical Sciences at the Academy. The project consisted of four tasks. The first and second tasks use a first-order differential equation in order to estimate the final alcohol by volume (ABV) of a particular beer, given the initial amounts of sugar and yeast. The third task uses a more complicated, nonlinear system of three differential equations, taken from the Gee-Ramirez model of beer fermentation [2]. The final task uses a flavor model for a particular "graduation beer," which relies upon the starting and ending values of the different sugars, taken from the results of the third task.
TASKS 1 AND 2: SUGAR, YEAST, AND ABV
The intent of these first two tasks is to have the students gain insight in to the physical process of fermentation. They must set up the first-order linear differential equation and experiment with varying initial conditions in order to 190 Starling, Povich, and Findlay find their final ABV. The students must also explore the effects of changing the temperature and the initial amounts of sugar on the fermentation process.
For the first task, the students must create a first-order differential equation model from the given word statement: We will assume that the rate of change of the sugar is proportional to the amount of sugar at time t multiplied by a growth constant, r. The growth constant is equal to the ambient temperature (in • F) multiplied by the initial amount of yeast divided by 70, 000.
The growth constant, r, is shown here: From the prompt above, the students should create the first-order differential equation where S(t) is the amount of sugar at time t. The units for the amount of sugar is in mol/m 3 . Units for r is in 1/h and t is in hr. Note that the amount of yeast and the temperature in this model are constants and the negative sign is necessary as the amount of sugar will be decreasing as the yeast molecules eat the sugar. The students have the option to use separation of variables, the integrating factor method, or use technology (Mathematica or another program of their choice) to solve this first-order differential equation. When they are finished, they should have a function that describes the amount of sugar at a given time, t. Once they find the solution to this differential equation, they must determine the estimated ABV of the beer, with given initial amounts of sugar and yeast using the following equation In order to verify their solution, we give the students specific initial conditions with the associated amount of sugar and ABV after a given amount of time has elapsed. We then have the students calculate the value of sugar and final ABV with different initial conditions as well as using a different yeast strain, which ferments at a lower temperature. The temperature initially used is for an ale yeast, which typically ferments at a higher temperature (70 • F), whereas the second yeast is a lager yeast, which typically ferments at a lower temperature (60 • F).
The second task uses the differential equation from task one in order to achieve a beer with 10% ABV; however, we limit the amount of time for fermentation of the beer to exactly seven days and no more than 60 units of sugar. The students must determine the starting amount of yeast and sugar in order to achieve the final ABV. The solution space is infinite, as they can achieve the required ABV with 60 units of sugar and approximately 5.7 units of yeast, but can also achieve this ABV with as little as 36 units of sugar, but must use increasing starting values of yeast. The students must also repeat the process to find feasible values using lager yeast. Finally, we ask the students to limit the values of sugar and lager yeast to 42 and 10, respectively, and determine how much longer they will need to wait past seven days to reach 10% ABV. It should be noted that although the model will allow for a 10% ABV, typical home-brewed beer will only achieve an ABV in the 5-7% range, especially when using lager yeast. Future use of this project can allow for a more reasonable ABV percentage instead of the 10% value.
An implied subtask is to discuss the implications of having large amounts of residual sugars or yeast in order to get the final ABV. For example, if they choose to have 60 units of sugar as their initial conditions, they can expect to have approximately 25 units of sugar when they stop the fermentation process, which will lead to a more sweeter tasting beer. The students should answer the questions: "Is this result acceptable?" and "Does it make sense?" This is one of the three outcomes we mentioned earlier; the ability to communicate their results.
TASKS 3 AND 4: THREE SUGARS AND TASTE FUNCTION
Now that the students have a basic idea of the fermentation process and the effects of changing the yeast and temperature, we expose them to a more complex model that accounts for three types of sugar in addition to the yeast. We also expose them to a fictitious taste model that uses the results from the complex model, where we restrict the total amount of initial sugars and time used to ferment the beer. These two tasks allow the students to explore a nonlinear system of equations as well as optimizing a nonlinear objective function.
Task 3 requires the students to solve a more complicated model that is beyond the scope of the course, but with the help of technology they are able to find a solution. Providing a Mathematica file that allows students to change the initial conditions of three types of sugars and yeast, and can be used to aid in the interpretation of the results without getting stuck in the Mathematica notation. Task 4 uses the results from the third task in order to determine an optimal value of a taste function for a particular beer style. The model used in task 3 is taken from Gee and Ramirez [2], a model that has been used and referred to multiple times in the field of the modeling beer fermentation. The fermentation model accounts for the amount of yeast present in the beer (X(t)), and three different types of fermentable sugars (glucose, G(t), maltose, M(t), and maltotriose, N(t)). All units are in mol/m 3 . Units for µ i are in [1/h] and t is in h. The system of equations is given as where µ i (for i = 1, 2, 3) are growth rates and are defined as Specific parameter values are given in Table 1. Solving the system of equations (1) is more than likely beyond the scope of an introductory differential equations course. Providing a Mathematica file allows the students to find the solution and gain insight to the interactions within the system of equations. In addition to solving the system of differential equations, the students describe the interaction of the three sugars and yeast, as well as explain the effects of the inhibition constants for glucose (K iG ) and maltose (K iM ) on the solution.
An example Mathematica result to provide to the students, having initial conditions of t max = 400, G 0 = 200, M 0 = 120, N 0 = 50, and X 0 = 50, yields the concentrations of yeast and sugars is shown in Figure 1.
Task 4 requires the students to optimize the flavor of the beer using a given "taste" function that uses the differences in the starting and ending values of the sugars. The fermentation time is limited to only 10 days and a total concentration of all three sugars to 200 mol/m 3 . The students use their final values from task 3 to find the optimal taste value and are required to provide the optimal taste value with the associated initial values of sugar and yeast. Although students at this level may not have the tools to conduct nonlinear optimization on the taste function, we expect that they would experiment with the Mathematica file given to them. Taking a quick look at the taste function, one should see that the taste function value increases as the difference between the starting values and ending values of each of the sugars increases, with the coefficient associated with the concentration of maltose being the largest. The taste function is also penalized for having too high of a starting concentration of yeast or maltose. The intent was for the students to experiment with the results of this taste value by changing the initial values of sugar and yeast, in order to maximize the function. Although the idea of a taste function being derived from the initial amounts of three types of sugar is a bit fictitious, it provides a vehicle that allows the students to explore different solutions to their initial values. The actual varying tastes in beer stem from the use of different malts, hops, and yeast strains, in addition to the fermentation temperature and the amount of residual sugars.
PROJECT EXTENSIONS
For our purposes, this project fits well within the course curriculum. Due to student and course time constraints, the project was limited in some areas and there were things that we did not implement. A best-case scenario would allow for the time to describe the chemical interactions that occur within the fermentation of the beer and allow the students to create the models themselves. The models used in tasks 1, 2, and 4 were created by the authors to meet our desired student outcomes for the project; however, they can be adjusted as needed to achieve different learning outcomes. Modeling is an iterative process; simple models are modified based on initial results to capture additional physical realities. We use simplifying assumptions to make a problem more mathematically tractable, but in doing so, we remove aspects of the physical world that impact the information that we extract from our model. This is something that we sought to emphasize in our project. The following examples are a few ways to extend or modify the project to allow it to have a broader use.
1. A more accurate model for the growth of yeast during the process could be integrated in order to emphasize the biological application within the chemical process. A more detailed model for the amount of yeast at time t could be implemented, which brings into play more of the active biological processes. Given that yeast molecules need sugar to grow but also have limitations based on the alcohol in the wort, students could explore and incorporate this into the model. Extending the initial model by adding an interaction term between the yeast and the alcohol level in the wort could be an ideal way to capture this phenomenon. The temperature also plays a role in the process (which we reduced in our project with some simplifying assumptions) but a more detailed model could incorporate the impacts of temperature on the rate of growth for the yeast. 2. The underlying discipline for this project is chemistry. The interaction between the sugars that we are trying to model is a chemical process. Many chemistry departments have beer brewing clubs that utilize this process in a creative way. This project could be tied to or initiated by a chemist discussing the chemical aspects of brewing beer. The chemistry connection would help cast the topic in a more academic context to downplay the social side of the project (e.g., underage drinking). 3. In the last part of the project, we asked the students to use their model to optimize some quantity of interest, in our case, the taste of the beer. The primary goal of this part of the project is to utilize the model to make predictions and determine an "optimum" value for a given quantity of interest that depended on the solution information. There are alternate applications that could be implemented instead of the taste function to meet this goal. Given specific costs for the different ingredients, students could develop a cost function and try to determine the most economical mix to achieve the desire outcome. Also, instead of a taste function, this equation could be tied to the International Bittering Units scale, measuring the amount and types of acids found from applying hops to the beer. 4. One recommendation is to add an additional requirement that investigates the sensitivity of the models we had presented to them. That is measuring how making small changes to the initial values of the sugars and the yeast affects the outputs of the model. This could lead to discussions regarding the limitations of the models we create.
REFLECTIONS
We initially started the students out with the description of the physical phenomenon that they had to translate into a simple, linear, first-order differential equation. The creation of the first model (tasks 1 and 2) should lead the students into some fundamental discussions regarding basic modeling principles. Is the model accurate? How can I improve my model? If my model is only an abstraction of reality, can I still use it to draw some understanding of the real world? Asking these questions, leads to the development (or at least the interpretation and understanding) of the second model (tasks 3 and 4). We had intended for the nonlinear system of equations in task 3 to provide some shock value initially, but we also provided the students the tools to successfully interpret the physical properties of fermentation. Providing the Mathematica file that allowed the students to change the initial values assisted in minimizing the time that students needed to set-up the model, however, the trade-off was the ability to more quickly investigate the relationships between the sugars and yeast.
Ultimately, the students presented their findings in the form of a technical report, where we provided the format for them to use. The format consisted of an abstract, background, facts, assumptions, analysis, discussion, and their conclusion and recommendations.
Based on anecdotal feedback from students, the project was a hit. When asked the question on the course end survey: "The course project was helpful in developing my abilities to solve real world problems and effectively communicate the results," over 78% of students responded that they strongly agreed or agreed with the statement, whereas only 6% disagreed. The project was often mentioned in free-response survey questions as one aspect of the course the students believe should be continued. They found it engaging as it was a realworld problem that many of them may encounter in their futures (most were under 21 years of age) and showed them a use of the mathematics they were learning in class. Using the similar ideas found using the Modeling and Inquiry Problem, mentioned by Mike Huber [3], we wrapped the mathematics around a real-world problem. The student views the problem from a real-world perspective, make a decision and then communicates their rationale for making their decision. Much of the mathematics that the student has been exposed to up until this time in their lives, has been of the type where there is a definite answer; e.g., the square root of 16 is equal to four. When conducting the mathematics in the context of modeling, they must not just provide their answer, but also must identify the assumptions that are being made in the model, and be able to account for them in their solution. This project helped the students see both the utility of mathematical models to solve real-world problems as well as understand some of the complications and pitfalls associated with models.
SUPPLEMENTAL MATERIALS
Supplemental data for this article can be accessed on the publisher's website. | 4,522 | 2016-04-03T00:00:00.000 | [
"Mathematics",
"Engineering",
"Chemistry"
] |
The impact of robotics on development, of hot and cold executive functions, and their role in improving them in special education
Executive function (EF) skills are neurocognitive skills that support the reflective, top-down coordination and control of other brain functions, and there is neural and behavioral evidence for a continuum from more "cool" EF skills activated in emotionally neutral contexts to more "hot" EF skills required for motivationally significant tendency reversal. EF problems are transdiagnostic markers of abnormal development. A neurodevelopmental model follows the path from bad childhood events and stress to disturbance of the development of brain systems supporting reflection and EF skills and an increased risk for general psychopathology traits. Educational robotics is generally concerned with researching the effects of building and programming robots on children's learning and academic accomplishment. We recently discovered that engaging in progressively more difficult robot planning and monitoring (ER-Lab) promotes visual-spatial working memory and response inhibition in early childhood during typical development, and that an ER-Lab can be a viable rehabilitative tool for children with Special Needs. Children with Special Needs (SN) had considerably enhanced inhibition skills, and children with attentional impairment had greater gains in inhibition of motor response tasks than children with a language deficiency. The study's findings and future prospects for how ER-Lab programs could become a strong tool in classrooms with special needs children are highlighted. The key also conclusion was that there was a considerable improvement in visuo-spatial attention as well as a significant effect on robot programming skills. According to research, EF abilities can be developed through scaffolded training and are a promising therapeutic and preventive intervention target. Intervention efficacy can be increased by reducing disruptive bottom-up effects like stress, teaching both hot and cool EF skills, and incorporating a reflective, metacognitive component to facilitate far transfer of trained skills.
Introduction
The complexity of the EFs and SN relationship may be attributable in part to the fact that EFs are a complicated construct that is explained by various theoretical frameworks.Although different multi-componential models define the main basic EF components (Miyake et al., 2000;Friedman and Miyake, 2017), there is agreement on their role as valuable "tools of learning" for academic skills at different grades (Mitsea et al., 2020).The ability to alter information stored in memory is crucial in language acquisition, decoding, and text comprehension.The ability to inhibit prepotent responses, such as the suppression of compelling thoughts or memories, and resist distractor interference, which is selectively attuned to what we choose, thereby removing attention to other interferent stimuli, allows us to focus on relevant information during reading comprehension or problem solving (Stavridis & Doulgeri, 2018).Finally, the ability to quickly switch between tasks, procedures, conceptual sets, or strategies appears to be related to academic learning (Mitsea et al., 2020).These processes, concern three primary core EFs components: working memory, inhibition, and cognitive flexibility.Inhibition, working memory, and, to a lesser extent, cognitive flexibility have all been demonstrated to be reduced in various types of Special Needs (Stavridis, Papageorgiou and Doulgeri, 2017).
Educational Robotics (ER) is a teaching method that requires students to design, assemble, and program robots through play and hands-on activities.In the 1960s, ER was created by combining psycho-pedagogical cognitive development theories and social learning theories (Drigas, Vrettaros et al., 2005).ER develops a learning environment in which students can interact with both peers and robots at the same time.Most ER studies in schools have examined the impact of ER activities on "STEM" (Science, Technology, Engineering, and Mathematics), with a particular emphasis on robot design and assembly (Papageorgiou et al., 2021).Other research has looked into employing ER as an assistive device to help with motor and social communication issues (Drigas & Karyotaki, 2019).Recent research has looked at how robot programming affects cognitive and learning processes as automonitoring, attention, decision-making, problem-solving, and computational thinking (Chaidi et al., 2021).Nonetheless, the majority of the research lacked experimental designs or quantitative outcome measurements, leaving it unclear which cognitive processes may be considerably improved by ER during childhood (Demertzi, Voukelatos et al., 2018).
Cool and Hot Executive function
Executive function (EF) skills are a group of neurocognitive abilities that enable conscious, top-down control of thought, action, and emotion; are required for deliberate reasoning, intentional action, emotion regulation, and complex social functioning; and allow for self-regulated learning and adaptation to changing circumstances (Zelazo, 2015).Early-onset EF issues are a significant component of a wide range of clinical illnesses with childhood or adolescent start, including, Learning disabilities, attention deficit/hyperactivity disorder (ADHD), conduct disorder (CD), autism spectrum disorder (ASD), obsessive-compulsive disorder (OCD) depression, and anxiety (Shi et al., 2019).The ubiquity of EF difficulties across disorders suggests that disruption of EF development may be a common consequence of many different types of developmental perturbation (e.g., genetic/environmental/epigenetic, cognitive/emotional/social), though different types of perturbation may lead to different clusters of symptoms, and different aspects of EF may be implicated in different disorders or within a single disorder.As a result, the presence of EF issues can be considered a transdiagnostic signal of atypical development in general (Beauchaine & Cicchetti, 2019).
According to developmental psychopathology , which takes a developmental systems view of the etiology and life course of atypical behavior, a wide range of variables and their interactions influence biological and psychological development and can result in psychopathology (Drigas, Papoutsi & Skianis, 2021).Childhood adversity is a well-established risk factor for both internalizing and externalizing problems (Humphreys et al., 2019), and evidence suggests that adversity is associated with a general, nonspecific risk for psychopathology (Drigas & Papoutsi, 2021).Furthermore, in a crosssectional study of 2,395 children aged 6 to 12, performance on a battery of EF measures was connected with risk for a latent general psychopathology factor but not specific variables (Martel et al., 2017).
These findings suggest a developmental pathway that leads from (a) adverse childhood experiences (ACEs) and other sources of stress to (b) disruption of the development of neural systems supporting EF skills and then to (c) an increased risk for general psychopathology, including transdiagnostic features of a wide range of clinical conditions (cf. McLaughlin, 2016).The role of EF difficulties in this developmental pathway can be understood as a result of (a) the fundamental role that EF skills play in learning and adaptation across social and nonsocial contexts, (b) the relative plasticity of EF skills over an extended period of time (i.e., from infancy to early adulthood, with periods of greater plasticity occurring in early childhood and the transition to adolescence), and (c) the hierarchical character of both EF talents and the neural systems that support them.EF was historically generated from neuropsychological findings of the repercussions of prefrontal cortex (PFC) injury.Today, EF skills are known to be dependent on increasingly well-understood neural circuits involving PFC and other brain regions and they are typically measured behaviorally as three skills: inhibitory control, working memory, and cognitive flexibility (Mitsea, Drigas & Skianis, 2022).Inhibitory control is the conscious suppression of attention (or other responses) to something (for example, ignoring a distraction or ceasing an impetuous remark).Working memory is remembering information and typically modifying it in some way (for example, remembering two integers and subtracting one from the other).Thinking about a single input in numerous ways, for example, while considering someone else's perspective on a situation, is an example of cognitive flexibility.Recent neuroimaging findings show that these three neurocognitive skills activate partially overlapping brain regions, with common activation areas across tasks including frontoparietal control and dorsal attention networks (Duncan, 2013) (see the sidebar titled Neural Activity Associated with Executive Function Skills).These networks differentiate into adulthood, with differences in between-network connectivity depending on the kind of network (Kastritsi et al., 2019).Lower-order, more specialized networks (e.g., sensorimotor networks) have decreased between-network connection with age, but higher-order networks including the PFC have increased.
There is also significant behavioral and neurological evidence that EF skills range along a spectrum from "hot EF" to "cool EF" (Zelazo & Müller, 2002).Whereas cool EF refers to EF skills assessed in relatively emotionally neutral contexts and relies more on neural networks involving lateral parts of the PFC, hot EF refers to EF skills needed in motivationally significant situations and relies more on neural networks involving ventral and medial parts of the PFC.Nejati et al. 2018, point to the role of the hot-cool continuum in EF.
Cool EF can be tested in a variety of ways, including very random or decontextualized activities in a laboratory or clinic.Inhibitory control, working memory, and cognitive flexibility are all examples of cool EF.The dimensional change card sort (DCCS) is an example (see the sidebar titled The Dimensional Change Card Sort: A Measure of Executive Function Skills), a rule use task that requires all three EF skills in early childhood but increasingly serves as a measure of cognitive flexibility as inhibitory control and working memory develop (i.e., these demands become trivial as inhibitory control and working memory develop) (Drigas & Karyotaki, 2016).When tasks like the DCCS and tests of inhibitory control and working memory are utilized in motivationally meaningful circumstances, they can become relatively hot.However, it is the specific requirement of flexibly reappraising whether to approach or avoid a salient stimulus that makes EF hot and involves neuronal networks involving the more ventral and medial parts of PFC (Allan & Lonigan, 2014).
Many measures that require changing the value of specific stimulus-reward connections have been discovered to be dependent on neural circuits connecting the ventral and medial PFC with mesolimbic regions such as the amygdala and striatum (Happaney et al., 2004).Measures of reversal learning (when a previously rewarded approach-avoidance discrimination must be reversed), delay of gratification and delay discounting (when the value of an immediate reward must be reconsidered relative to a larger delayed reward), and extinction (when a previously rewarded stimulus is no longer rewarded and must now be avoided) are some examples.The Iowa Gambling Task (IGT) (Manes et al., 2002) appears to involve both hot and cold EF skills, in which initially advantageous (greater rewards) alternatives are gradually shown to be detrimental (higher prizes but much higher losses), and vice versa.
Cool EF skills (e.g., working memory) also play a role in the IGT (Manes et al., 2002), and given that cool EF skills engage and modulate hot EF skills, relatively complex hot EF tasks, such as the IGT, may activate both hot and cool EF processes (Moriguchi & Shinahara, 2019).Finally, hot EF plays a role in deliberate emotion regulation, which involves modulating approach-avoidance reactions intentionally, including through reflection and cool EF processes, as seen in decentering, psychological distancing, and related metacognitive practices (Travers-Hill et al., 2017).
Individual differences in cool and hot EF skills measured behaviorally in childhood predict a wide range of developmental outcomes, including school performance and social competence in adolescence; college grade point average and graduation ; and physical health (McClelland et al., 2013).The predictive value of EF is frequently greater than that of IQ, and long-term predictions are seen even when controlling for IQ and childhood SES (Duckworth & Seligman, 2005).It is not unexpected, then, that aberrant development of these skills might result in widespread and pervasive obstacles to brain growth and good adaptation.Evidence from young children suggests that, while poor hot EF is more strongly associated with problem behaviors in school (inattentive and overactive behavior), cool EF is more strongly associated with academic outcomes, including math and reading.Groppe and Elsner (2014) studied 1,657 children aged 6 to 11 years and discovered that cold but not hot EF was connected to fluid intelligence.In contrast, other research has indicated that hot but not cool EF is connected to emotional intelligence (Checa & Fernández-Berrocal, 2019).Both hot and cool EF are related to key aspects of social cognition, such as theory of mind (ToM) (Carlson et al., 2004), but evidence suggests that hot EF is more strongly related to emotion-related social cognition, such as ToM stories involving affect (Wilson et al., 2018) and ToMmental state/emotion recognition (Kouklari et al., 2019).Johnson (2011) proposes that the development of EF skills involves the experience-dependent functional specialization of EF skills and the neural networks that support them.Initially, confirmatory factor analysis of adults' performance on a variety of cool EF measures suggested three correlated latent variables (Miyake et al., 2000), but subsequent research has supported a hierarchical, bifactor model involving a common EF latent variable as well as updating (working memory) and shifting (cognitive flexibility) variables (Friedman & Miyake, 2017).In contrast to adult studies, several studies with young children have found that cool EF measures load onto one or two factors (inhibitory control and working memory and appears to become differentiated by middle childhood or adolescence as a bifactor structure involving common EF and multiple specific factors emerges (Cirino et al., 2018).
In contrast to the delayed differentiation within cool EF, differences between hot and cool EF have been reported rather early in development, at least when hot EF is measured using tasks including the requirement to delay approaching a tempting reward (Willoughby et al., 2011).For example, research with children as young as 2 years old (Bernier et al., 2012) have indicated evidence for hot and cool factors in children's performance on batteries of EF measures.Willoughby et al., (2011) discovered support for hot and cool EF variables in a study of approximately 750 children aged 4-5 years.A recent study of 1,900 children aged 2 to 5 years old from varied socioeconomic backgrounds found support for hot and cool EF variables across numerous direct behavioral measures of each construct (Montroy et al., 2019).However, as with cool EF, there have been inconsistent results (Allan & Lonigan, 2014), which are most likely due to the hot EF measures utilized.It is also possible that as the brain circuitry underlying them is engaged and activated, hot and cool EF become more robustly distinguished.A pattern of age-related differentiation between EF skills and other non-EF cognitive functions has also been discovered during childhood and adolescence, as measured by the NIH Toolbox Cognition Battery (Mungas et al., 2013), and this pattern may reflect use-dependent specialization of neural systems.
Educational robotics for the development of executive skills such as vision, spatial abilities, planning, and problem solving.
Many schools are using educational robotics (ER) as an innovative learning environment that allows students to acquire higher order thinking skills and talents, as well as tackle complicated challenges (Drigas & Karyotaki, 2017).It is a strong and adaptable teaching and learning tool that engages students in robot construction and control activities using particular programming tools.Students in an ER exercise typically work in groups to solve challenging challenges.Students receive quick feedback on their efforts and learn how to deal with difficult circumstances in a real-world environment through iterative design and testing (Drigas & Karyotaki, 2019).
Recent research suggests that the ER can be used to boost transversal skills such as thinking, problem solving, metacognition, programming, and teamwork (La Paglia et al., 2017;Drigas & Koukianakis, 2004;Drigas & Koukianakis, 2006).Programming a robot's behaviors to achieve a goal necessitates the capacity to mentally foresee the action, select the proper technique, and continuously update it.Choose the right process and keep it up to date.Programming small mobile autonomous robots into the physical environment necessitates planning, precision in language use, hypothesis formulation and testing, the capacity to identify action sequences, and a number of other skills that appear to mirror what thinking is all about.Furthermore, working with programmable robots allows children to test the robots' actions in the real world with all of its variables, such as indeterminacies and typical uncertainties of the environment (as opposed to simulation in virtual contexts where everything is still predefined) and their own reasoning strategies.The feedback (both positive and negative) produced by the robot/environment interaction necessitates ongoing adjustment of the programming algorithms (Demertzi, Voukelatos et al., 2018).
Children must apply procedural thinking and understand the logic of instructions in order to construct a successful program.When developing a program, children consider next, before, and until, all of which are components of sequencing-particularly temporal sequencing.Given these features, robotics activities can improve mental processes associated with the cognitive domain of the Executive Functions, such as problem solving, planning, working memory, inhibition, mental flexibility, action initiation, and monitoring (Chaidi et al., 2021).The term "EF" refers to a group of adaptive, goal-directed, top-down mental processes that are required when you need to focus and pay attention and when a spontaneous response would be insufficient (Lytra & Drigas, 2021).Executive functions enable "mentally playing with ideas, thinking before acting, meeting novel, unexpected challenges, resisting temptations, and staying focused" (Scarpa et al., 2006).
Educational Robotics Improves Executive Functions in Children with Special Needs at School
Children with Special Needs (SN) require special educational and instructional procedures due to social, physical, or mental issues.They are a highly diverse population in terms of neurofunctional, behavioral, and sociocognitive characteristics.Children with SN may have sensorial or motor disabilities, Autism Spectrum Disorders, Mild or Severe Intellectual Disabilities, and specific neurodevelopmental disorders such as Attention Deficit Hyperactivity Disorder (ADHD), Specific Learning Disorders, Specific Language Disorders, or other unspecified difficulties (McFarland et al., 2018).Despite this variation, it is now well known that specific cognitive control processes, such as Executive Functions (EFs), are frequently disrupted across developmental disorders and special needs (Mitsea et al., 2020).EFs have been found to be frequently altered in children with socioeconomic disadvantages, Mood Disorders, Attention Deficit Hyperactivity Disorder (ADHD), Autistic Spectrum Disorder (ASD), Language and Learning Disabilities, Down Syndrome (DS), neuromuscular disorders, and Cerebral Palsy (CP) (Di Lieto et al., 2017a; Battini et al., 2018; Peng and Fuchs, 2016; Stavridis & Doulgeri, 2018).The causal relationship between EF impairment and Special Needs is far from linear, as three main scenarios can be proposed: in some cases, a clear EF deficit is part of the "core cognitive difficulties" of a specific SN group; in other cases, only subtle difficulties are found; and finally, it is possible that the clinical or social problem itself causes the EF impairment (Astrea et al., 2016).
Given the predictive role that EFs play in academic achievement, early interventions in children with SN on working memory and inhibition may avert cascade consequences on quality of life, school attendance, and social functioning.Among the new technologies implemented for educational purposes, Educational Robotics (ER) has been used in educational settings with typically developed children to improve problem solving, planning, and computational thinking (Stavridis, Falco & Doulgeri, 2020), basic EFs components (Di Lieto et al., 2017b), and academical learning, particularly in the area of Science, Technology, Engineering, Mathematics and STEM.ER is a learning approach based on the design, assembly, and programming of robots that draws on constructivism and constructionism theories of learning and cognitive development as well as social learning theories (Papageorgiou et al., 2021).
Recently, a growing number of studies have offered ER to SN people in order to provide additional learning and social inclusion chances.Examples of the use of robots in clinical and educational settings have been documented, including learning difficulties, motor disorders, intellectual disabilities, autism and ADHD (Cheng et al., 2018;Bargagna et al., 2018;Stavridis et al., 2022).Indeed, educational robots have been used in the SN population to investigate specific cognitive functions, such as cognitive flexibility in children with ASD or the effect of robot-mediated learning (Krishnaswamy et al., 2014;Drigas & Dourou, 2013).Krishnaswamy's study compared the impact of robotic training on visual motor skills in children with learning impairments and visual motor delays.The findings revealed that children who engaged in ER activities improved their visual-motor skills more than children who followed the standard curriculum.Another study by Conchinha revealed two single instances who improved their learning, language, and integration by participating in ER activities with Lego Mindstorm (Conchinha et al., 2016).Finally, after discovering that intense, challenging, and entertaining ER training (ER-Lab) organized by incremental difficulty improved visuospatial working memory and inhibition in typical preschoolers (Di Lieto et al., 2017b), we validated the ER-Lab in a clinical setting with a group of children with Down Syndrome (Bargagna et al., 2018).The results shown above, suggests that the ER-Lab is a versatile instrument for cognitive improvement in SN and normally developing children; fact, it may be effective for personalizing therapies in neurodevelopmental disorders.The ER-Lab looks to include various qualities to increase the efficacy of the EFs trainings at the same time.ER-Lab activities can be intense, challenging, and adaptable to individual functioning, thus acting in the proximal development zone (Drigas, Vrettaros et al., 2005); it can promote several EF components, either simultaneously or separately, because robot programming requires sequential reasoning before acting by inhibiting impulsive responses, holding and manipulating visuo-spatial and verbal information in memory, and shifting between different commands/rules.ER activities can be carried out in any school setting, creating a group setting and an appealing learning environment, thereby promoting students' interest and motivation, allowing for interventions that focus not only on cognitive empowerment but also on social and emotional inclusion.Finally, the ER-Lab assures the presence of a mentor who may tailor the activity to the specific needs of the subject (Stavridis, Papageorgiou and Doulgeri, 2017).
Last but not least, we emphasize the significance of all digital technologies in the special educational domain and executive functions development, which are very productive and successful, and how they facilitate and improve assessment, intervention, and educational procedures via mobile devices that bring educational activities everywhere [61][62][63], various ICTs applications that are the main supporters of education [64][65][66][67][68][69][70][71][72][73][74][75][76][77][78], and AI, STEM, and ROBOTICS that raise educational procedures to new performance levers [79][80] as well as via friendly games [81][82][83].Additionally, ICTs are being improved and combined with theories and models for cultivating emotional intelligence, mindfulness, and metacognition , accelerates and improves more the educational practices and results, especially in special education treating domain and executive functions development.
Conclusion
A simplified model of the development of psychopathology in childhood and adolescence links ACEs and other sources of toxic stress to the disruption of neural systems supporting reflection and both hot and cool EF skills, and then to an increased risk for general, transdiagnostic features of a wide range of clinical conditions.The role of EF difficulties in this developmental pathway can be explained by (a) the fundamental role of EF skills in learning and adaptation, (b) the hierarchical nature of both EF skills and the neural systems that support them, and (c) the relative plasticity of EF skills over time.According to research, both hot and cool EF skills hold potential as a general target for therapeutic and preventive intervention.Indeed, a growing amount of evidence demonstrates that EF skills can be developed through scaffolded training.It is argued that intervention efficacy can be increased by minimizing disruptive bottom-up forces such as stress, and that skills training with a reflective, metacognitive component can aid promote distant transfer of trained skills.The current study's findings support the usage of robotics-based educational systems to stimulate the use of specific cognitive and attention talents.This study backs up the premise that Educational Robotics activities have an impact on executive functions since they may be used to build higher-level control components including forecasting, planning, and problem solving.Indeed, this study provides quantifiable data for analyzing the effects of a robotics laboratory on children's transversal high-level cognitive abilities.In general, the findings revealed that participation and improvement of logical reasoning ability enable participants to foresee and plan the sequence of activities required to complete a certain behavioral task. | 5,383 | 2023-07-30T00:00:00.000 | [
"Education",
"Psychology",
"Computer Science"
] |
Rationally designed azobenzene photoswitches for efficient two-photon neuronal excitation
Manipulation of neuronal activity using two-photon excitation of azobenzene photoswitches with near-infrared light has been recently demonstrated, but their practical use in neuronal tissue to photostimulate individual neurons with three-dimensional precision has been hampered by firstly, the low efficacy and reliability of NIR-induced azobenzene photoisomerization compared to one-photon excitation, and secondly, the short cis state lifetime of the two-photon responsive azo switches. Here we report the rational design based on theoretical calculations and the synthesis of azobenzene photoswitches endowed with both high two-photon absorption cross section and slow thermal back-isomerization. These compounds provide optimized and sustained two-photon neuronal stimulation both in light-scattering brain tissue and in Caenorhabditis elegans nematodes, displaying photoresponse intensities that are comparable to those achieved under one-photon excitation. This finding opens the way to use both genetically targeted and pharmacologically selective azobenzene photoswitches to dissect intact neuronal circuits in three dimensions.
In the presence of water, we observed two sets of signals in the 13 C NMR corresponding to the glutamate moiety. This is due to an acid-base equilibrium of the amino and carboxylic groups. Supplementary Figure 40 1 H and 13 C NMR spectra of compound 5a-1'.
-NHCO- -NHCO- -NHCO- The difference in energy between the cis and trans isomers (E trans-cis ) and the minimum barrier height for the thermal cistrans isomerization are given (E ‡ cis-trans ). b The excitation energy (E exc ), the oscillator strength of the 1P absorption process (f), and the absorption crosssection of the 2P absorption process (σ 2 , in GM units) are given for the both isomers of each compound. c In all the cases, the lowest-energy barrier height for the thermal cistrans isomerization was found to correspond to an inversion mechanism. The difference in energy between the cis and trans isomers (E trans-cis ) and the barrier height for the thermal cistrans isomerization are given (E ‡ cis-trans ). b The excitation energy (E exc ), the oscillator strength of the 1P absorption process (f), and the absorption cross-section of the 2P absorption process (σ 2 , in GM units) are given for both isomers of each compound. c In all the cases, the lowest-energy barrier height for the thermal cistrans isomerization was found to correspond to an inversion mechanism. d Accurate description of the thermal isomerization barrier height would require inclusion of explicit water molecules in the calculation. e Calculation did not converge. f Calculation did not converge. From the σ 2 value computed in the gas phase and the equations given in reference 4 for the solvent dependence of σ 2 , an estimate of the 2P absorption cross-section in water was made (σ 2 = 100 GM). Figure 10). Table 7. Primers used for Gibson assembly in the construction of pNMSB18.
Supplementary Methods
General procedure for the synthesis of azobenzene-based photoswitches: The preparation of ligands MAG 2P slow and MAG 2P_F slow was achieved via a multistep synthetic sequence (see Figure 2 in the manuscript). In both cases, we took the corresponding aminobenzoic acid (4a for MAG 2P slow and 4b for MAG 2P_F slow ) to form the azobenzene core (5a for MAG 2P slow and 5b for MAG 2P_F slow ), to which the different functional fragments of the target compounds were sequentially introduced: fully protected glutamate derivative 2 5 and maleimide acetic acid 3. 6 In addition, azobenzene model compounds Azo1' and Azo2' were prepared as references for the photochemical characterization of MAG 2P slow and MAG 2P_F slow , respectively.
Materials and methods for the synthesis of azobenzene-based photoswitches:
Commercially available reagents were used as received. Solvents were dried by distillation over the appropriate drying agents. All reactions were monitored by analytical thin-layer chromatography (TLC) using silica gel 60 precoated aluminum plates (0.20 mm thickness). Flash column chromatography was performed using silica gel (230-400 mesh). 1
4-[(4-Acetylaminophenyl)azo]-3-fluorobenzoic acid, 5b-1' (Supplementary
To Green and red fluorescent proteins were simultaneously excited at 488 nm for 343 ms, using bidirectional laser scanning at 400 Hz. Images were recorded with a resolution of 512x512, and with an imaging interval of 4 s. Green and red fluorescence were recorded with two different HyD detectors with a detection range from 500 to 550 nm and from 569 to 648 nm, respectively. Pinhole aperture was set at maximum (600 μm).
Whole field photostimulation flashes were fit to keep imaging interval, and periods lasted in total for 1 min. Photostimulation was done at 256x256 resolution with bidirectional laser scan. One-photon photostimulation was done at 405 nm (0.81 mW μm -2 ), and two-photon stimulation at 780 nm (2.8 mW μm -2 ). Back-photoisomerization was achieved at 514 nm (0.35 mW μm -2 ). Inter-stimulus imaging periods lasted 1.5 min. Intensity and duration of the photostimulation intervals were adjusted to obtain the optimal photoresponse and reproducibility. At the end of each experiment we reconfirmed that the neuron kept its healthy morphology.
Transgenesis was performed according to standard methods for microinjection. 11 To generate MSB104 strain a DNA mix containing 50 ng/ul pNMSB18, 1.5 ng/ul myo-2p:mCherry and 50 ng/ul Plus DNA ladder as carrier was injected into the gonad of GN692 young adult worms. The Primers used for Gibson assembly in the construction of pNMSB18 are shown in Supplementary
Imaging was performed 4 h after compound injection in TRN neurons co-expressing
GluK2-L439C-mCherry and GCaMP6s in a single focused plane. Neurons with healthy morphology and no signs of fluorescent aggregates were selected for photostimulation.
Green and red fluorescent proteins were simultaneously excited at 488 nm and 561 nm for 343 ms, using bidirectional laser scanning at 400 Hz. Images were recorded with a resolution of 512x512 and a digital zoom of 4, with an imaging interval of 660 ms.
Green and red fluorescence were recorded with two different HyD detectors with a detection range from 500 to 550 nm and from 569 to 648 nm, respectively. Pinhole aperture was set at ~500 μm.
Whole field photostimulation flashes were fit to keep imaging interval. Photostimulation was done at 256x256 resolution with bidirectional laser scan, with a digital zoom of 4.
One-photon photostimulation was done at 405 nm (15 μW μm -2 ), and two-photon stimulation at 780 nm (2.8 mW μm -2 ). Back-photoisomerization was achieved at 514 nm (1.2 μW μm -2 ). Intensity and duration of the photostimulation intervals were adjusted to obtain the optimal photoresponse and reproducibility. At the end of each experiment we reconfirmed that the neuron kept its healthy morphology.
Data analysis and statistics:
Amplitude of LiGluR photocurrents were analyzed using IgorPro (Wavemetrics). Displayed whole-cell current traces have been filtered using the infinite impulse response digital filter from IgorPro (low-pass filter with cutoff of 50 Hz).
The drift in current observed during light spectra recordings was corrected where appropriate with the IgorPro (WaveMetrics) software using a custom-made macro for drift correction.
1P and 2P calcium images were acquired with the Live Acquisition 2.1 software (Till Photonics) and stored by the Arivis Browser 2.5.5 (Arivis AG). These images were analyzed with ImageJ and the mean fluorescence value for each cell profile was calculated using the same software. The fluorescence signals were treated to obtain ∆F/F values according to: where F 0 is each cell's average signal for the experiment's baseline and F is the fluorescence signal upon stimulation. The resulting fluorescence ratios were analyzed in OriginLab. To obtain cell-averaged 1P action spectra, ∆F/F values were first normalized with respect to the maximum photoresponse obtained for each cell after perfusion of free glutamate at the end of the experiment. To obtain cell-averaged 2P action spectra, 2P ∆F/F responses were first normalized with respect to the 1P ∆F/F response at 405 nm for the same cell. | 1,800.4 | 2019-02-22T00:00:00.000 | [
"Biology",
"Physics",
"Chemistry"
] |
Optimal and Numerical Solutions for an MHD Micropolar Nanofluid between Rotating Horizontal Parallel Plates
The present analysis deals with flow and heat transfer aspects of a micropolar nanofluid between two horizontal parallel plates in a rotating system. The governing partial differential equations for momentum, energy, micro rotation and nano-particles concentration are presented. Similarity transformations are utilized to convert the system of partial differential equations into system of ordinary differential equations. The reduced equations are solved analytically with the help of optimal homotopy analysis method (OHAM). Analytical solutions for velocity, temperature, micro-rotation and concentration profiles are expressed graphically against various emerging physical parameters. Physical quantities of interest such as skin friction co-efficient, local heat and local mass fluxes are also computed both analytically and numerically through mid-point integration scheme. It is found that both the solutions are in excellent agreement. Local skin friction coefficient is found to be higher for the case of strong concentration i.e. n=0, as compared to the case of weak concentration n=0.50. Influence of strong and weak concentration on Nusselt and Sherwood number appear to be similar in a quantitative sense.
Introduction
The idea of micropolar fluid was introduced by Eringen [1][2]. This idea is a substantial generalization of the classical Navier Stokes model to discuss certain complex fluids. This particular class of fluids consists of rigid, randomly oriented spherical particles with microstructures such as liquid crystals, colloidal fluids, polymeric suspensions, hematological suspensions and animal blood etc. Extensive uses of micropolar fluid theory are given in the books of Lukaszewicz [3] and Eringen [4]. Bhargava et al [5] presented finite element solutions for mixed convective micropolar flow driven by a porous stretching sheet. Later on, Takhar et al [6] highlighted free convection MHD micropolar fluid flow between two porous vertical plates and perceived that velocity decreases with an increase in Hartman number. Stagnation point flow of a micropolar fluid towards a stretching sheet was investigated by Nazar et al [7]. Ziabakhsh et al [8] presented Homotopy analysis solutions of micropolar flow in a porous channel with heat and mass transfer. Similarly, Ishak et al [9] considered magnetohydrodynamic flow of micropolar fluid towards a stagnation point on a vertical surface. Joneidi et al [10] inspected behavior of micropolar flow in a porous channel with high mass transfer. Influence of chemical reaction and thermal radiation on MHD micropolar flow over a vertical moving porous plate in a porous medium with heat generation was discussed by Mohamed et al [11] who found that the translational velocity across the boundary layer and the magnitude of micro rotation at the wall decreased with an increase in magnetic field and Prandtl number. Some noteworthy studies on micropolar fluids with certain physical constraints can be found in [12][13][14][15][16].
In the last few decades nanofluids have proved to be extremely promising heat transfer agents in modern day industry and in numerous engineering applications of global interest like biomedical, optical and electronic fields. Choi [17] initiated the astonishing idea of nanofluids. Later numerous researchers and scientists investigated various real life flow problems under the influence of nanofluids. Buongiorno [18] presented a novel study on describing flow and heat transfer mechanisms of nanofluids. Similarly Nield et al [19] discussed natural convective boundary-layer flow of a nanofluid past a vertical plate and observed that the reduced Nusselt number is a decreasing function of thermophoresis and Brownian motion parameter. Nadeem et al [20] presented the optimized analytical solution for oblique flow of a Casson nanofluid with convective boundary conditions. They found that nanoparticle concentration is an increasing function of stretching parameter and Brownian motion while it is a decreasing function of thermophoresis, Biot number and non-Newtonian (Casson) parameter. Ganji et al [21] conducted a valuable study on heat transfer of Cu-Water nanofluid flow between parallel plates and concluded that heat flux at the surface has a direct relationship with nanoparticle volume fraction. Ganji et al [22] discussed simulation of MHD Cuo-Water nanofluid flow and convective heat transfer under the influence of Lorentz forces. Some notable studies on the topic can be found in [23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41].
The main goal of the present study is to discuss hydromagnetic flow of a micropolar nanofluid between parallel plates. Optimal homotopy analysis method proposed by Liao [24][25] is utilized to obtain graphical results against embedding physical parameters. Numerical values for skin friction, Nusselt and Sherwood numbers are obtained through OHAM and also numerically through mid-point integration scheme [26][27]. It would be obligatory to mention that mid-point integration scheme is used to verify our obtained analytical results. It is found that both the results are in a very good agreement with each other.
Mathematical Formulation
Consider steady incompressible 3D flow of an electrically conducting micropolar nanofluid between two horizontal parallel plates. Both the fluid and the plates rotate together around the yaxis with a constant angular velocity O. The x and y − axes are taken along and perpendicular to the plate respectively while the z − axis is taken normal to the xy plane. The plates are placed at y = 0 and y = h. The lower plate is being stretched by two equal and opposite forces so that the position of the point (0,0,0) remains unchanged. A uniform magnetic flux with density B 0 acts along the y-axis. The upper plate is subject to constant wall suction velocity v 0 (< 0) or constant wall injection velocity v 0 (> 0) respectively as shown in Fig 1. The governing equations of the flow problem can be stated as [5][6][7][8][9][10][11]18] @u @x þ @v @y ¼ 0; ð1Þ The appropriate boundary conditions are where u, v and w are the velocity components along the x, y, zdirections respectively while u = ax shows that lower plate is stretching.ρ is the density, ϑ is the kinematic viscosity, k à 1 is the vortex viscosity, N is the micro rotation velocity, O is the rotation velocity, B 0 is the magnetic field, σ is electrical conductivity, α is thermal diffusivity, k is the thermal conductivity, C p is the specific heat of the fluid, j is the micro-inertia density, v 0 is the suction/injection velocity, T is the temperature of the fluid, T 0 is the temperature at lower plate, T h is the temperature at upper plate. D B Brownian diffusion coefficient, D T is the thermophoresis diffusion coefficient, C is the concentration and n is the boundary parameter. The case n = 0 represents strong concentration, n = 0.5 indicates weak concentration and vanishing of the antisymmetric part of the stress tensor and n = 1 represents turbulent flow. Spin gradient viscosity γ is defined as j: Using transformations Eq (1) is identically satisfied while Eqs. (2)-(7), in dimensionless form are given by with boundary conditions f ð0Þ ¼ 0; f 0 ð0Þ ¼ 1; gð0Þ ¼ 0; yð0Þ ¼ 1; Gð0Þ ¼ Ànf @ð0Þ; ð0Þ ¼ 1; where primes denote differentiation with respect to η. N 1 is the coupling parameter, N 2 is the spin-gradient viscosity parameter, R is the Reynold number, M is the magnetic parameter, Kr is the rotation parameter, Pr is the Prandtl number, Sc is the Schmidt number, N b is the Brownian motion parameter and N t is the thermophoresis parameter which are defined by The skin friction coefficient C f , Nusselt number Nu and Sherwood number Sh are defined as where The dimensionless forms of skin friction coefficient C^f ; Nusselt number Nu and Sherwood number Sh are Method of Solution
Optimal HAM Solution
The governing system of coupled ordinary differential Eqs (10)- (14) is nonlinear and extremely complicated in nature. These equations are solved analytically by optimal homotopy analysis method (OHAM). Following Liao [24][25] we know that f(η), g(η), θ(η), G(η) and ϕ(η) can be expressed by a set of a certain type of exponential base functions in the forms k¼0 a k n;m y k expðÀmyÞ; ð21Þ in which a k m;n ; b k m;n ; c k m;n ; d k m;n and e k m;n are series coefficients. The initial guesses f 0 , g 0 , θ 0 , G 0 and ϕ 0 for f(η), g(η), θ(η), G(η) and ϕ(η) are selected as follows We select the auxiliary linear operators as The above linear operators have the following properties Where C 0 − C 11 are arbitrary constants. The zeroth order homotopic deformation equations can be written as yðZ; 0Þ ¼ y 0 ðZÞ;ŷðZ; 1Þ ¼ yðZÞ; GðZ; 0Þ ¼ G 0 ðZÞ;ĜðZ; 1Þ ¼ GðZÞ; ðZ; 0Þ ¼ 0 ðZÞ;ðZ; 1Þ ¼ ðZÞ; By means of Taylor's series The auxiliary converging parameters are chosen in such a way that the series (50) converges when p = 1. Thus we have The resulting mthorder deformation equations are in which ( The general solutions of Eqs (56)-(60) can be written as we utilize the concept of minimization by defining the average squared residual errors as introduced by [24].
Following Liao [24] where E t m is total averaged squared residual error, δη = 0.5, k = 20. Total and individual averaged squared residual errors are computed using N 1 = N 2 = N 3 = 0.10, R = 0.20, n = 0.50, M = 0.10 = Kr = Nt = Nb = λ, Pr = 1 = Sc. By means of computational software Mathematica we obtained the total and individual average squared residual errors at various order of iterations using highly efficient Mathematica package BVPh2. 0 which can be found at http:// numericaltank.sjtu.edu.cn/BVPh2_0.htm. The basic idea is to minimize the total average squared residuals and determining the corresponding local optimal convergence control parameters. For this purpose, Tables 1 and 2 are prepared for the case of various optimal convergence control parameters. Table 1 gives the minimum value of total averaged squared residual error at several order of iterations while Table 2 is presented for the individual average squared residual error at different orders of approximations using the optimal values from Table 1 at m = 6. It is quite obvious from these two tables that the averaged squared residual errors and total averaged squared residual errors continue decreasing as we increase the order of approximation. So, Optimal Homotopy Analysis Method is an extremely effective tool in obtaining convergent series solutions for highly nonlinear systems of differential equations.
Numerical Solution
Despite OHAM, the governing system of Eqs (10)- (14) is also solved numerically using midpoint integration as a basic scheme and Richardson extrapolation as an enhancement scheme with highly efficient computational software Maple as used by several other authors [26][27][28][29][30] This scheme works by transforming the governing system of nonlinear higher order differential equations into a system of first order ordinary differential equations which are then solved using iterative schemes through midpoint integration as it belongs to the class of higher order Runge-Kutta methods. A mesh size of Δh = 0.001 was set for a convergence criterion of 10 −6 in our computations. Our computed numerical results are in very good agreement with analytical results obtained by optimal homotopy analysis method. We are thus confident that our applied numerical algorithm is up to the mark.
Results and Discussion
The aim here is to discuss the behavior of velocity, temperature, micro rotation and concentration profiles against emerging physical parameters in our flow problem. Fig 14). Influence of Hartman number M and Reynolds number R on the micro rotation profile G(η) is found to be similar from Figs 15 and 16. Both these parameters enhance the micro ration profile G(η) between the parallel plates. (Fig 19), while it decreases with increasing Reynolds number R (Fig 20). Physical quantities of interest such as skin friction co-efficient, Nusselt and Sherwood number are computed through Tables 3-5. Table 3 examines the influence of embedding parameters such as N 1 , N 3 , M, Kr and R on skin friction coefficient. All these parameters have a positive influence on skin friction at the lower plate but it is interesting to mention here that Table 3. Numerical values of skin friction at the wall when N 2 = 1.0. this increasing behavior is more prominent for the case of strong concentration i.e. n = 0.0 when compared to the case of weak concentration n = 0.5. The impact of coupling parameter N 1 , Brownian motion parameter N b , thermophoresis parameter N t , Reynolds number R and Schmidt number Sc on the Nusselt number is depicted through Table 4. For higher values of coupling parameter N 1 , Nusselt number -θ 0 (0) tends to increase while it decreases with an increase in, Brownian motion parameter N b , thermophoresis parameter N t . Similarly, it is also observed that with an increase in Reynolds number R, Nusselt number -θ 0 (0) also increases while it decreases with an increase in Schmidt number Sc for strong n = 0.0 as well as weak concentration n = 0.5. Finally Table 5 depicts that Sherwood number -ϕ 0 (0) responds in an opposite manner against Brownian motion parameter N b , thermophoresis parameter N t . Moreover local mass flux -ϕ 0 (0) decreases with Reynolds number R and it increases with an increase in Schmidt number Sc for strong n = 0.0 as well as weak concentration n = 0.5.
Concluding Remarks
Hydromagnetic flow of micropolar nanofluid between two horizontal parallel plates in a rotating system has been investigated numerically as well as analytically using optimal HAM. The core outcomes of the study can be enlisted as • The effects of coupling parameter N 1 and Hartman number M on velocity profile f 0 (η) are found to be opposite compared to viscosity parameter N 2 and rotation parameter Kr.
• Influence of magnetic field M and coupling parameter N 1 on transverse velocity g(η) are opposing each other. Moreover it is observed that increasing the porosity parameter λ leads to an increase in the transverse velocity g(η).
• It is observed that temperature profile θ(η) enhances for large values of Prandtl number Pr, thermophoresis parameter Nt and Brownian motion parameter Nb for the case of strong concentration (n = 0). • Micro rotation profile G(η) raises with coupling parameter N 1, magnetic field parameter M and Reynolds number R for strong concentration i.e. when n = 0.
• Increasing the thermophoresis parameter Nt and Brownian motion parameter Nb results in a contrary behavior on concentration profile ϕ(η).
• Skin friction at the lower plate tends to increase with magnetic field parameter M, rotation parameter Kr and Reynolds number R. This behavior is found to be more prominent for the case of strong concentration i.e. (n = 0) compared to the weak concentration (n = 0.5).
• Heat flux -θ 0 (0) at the lower plate drops with Increasing thermophoresis parameter Nt and Brownian motion parameter Nb while on the other hand mass flux -ϕ 0 (0) responds in an opposite manner against these two parameters. | 3,488.8 | 2015-06-05T00:00:00.000 | [
"Physics"
] |
Visual observation of optical Floquet-Bloch oscillations
Bloch oscillations, an important transport phenomenon, have extensively been studied in static systems but remain largely unexplored in Floquet systems. Here, we propose a new type of Bloch oscillations, namely the"Floquet-Bloch oscillations,"which refer to rescaled Bloch oscillations with a period of extended least common multiple of the modulation and Bloch periods. We report the first visual observation of such Floquet-Bloch oscillations in femtosecond-laser-written waveguide arrays by using waveguide fluorescence microscopy. These Floquet-Bloch oscillations exhibit exotic properties, such as fractal spectrum and fractional Floquet tunneling. This new transport mechanism offers an intriguing method of wave manipulation, which has significant applications in coherent quantum transport.
INTRODUCTION
As a fundamental phenomenon of coherent quantum motion, Bloch oscillations, the oscillatory motion of a quantum particle with a BO period ɅBO, were first predicted by Bloch and Zener in the context of crystal under a constant electric field (1,2).Nevertheless, Bloch oscillations have never been experimentally observed in natural crystals owing to electron-phonon interactions.
SBOs refer to rescaled BOs with super large oscillation amplitude and period, where the BO period ɅBO (or its integer multiple) is slightly detuned from the modulation period ɅFL, i.e., ɅFL ~ NɅBO.
While BOs in Floquet systems with several specific cases have been investigated, a general phenomenon concerning BOs in Floquet systems and the corresponding experimental observation remain largely elusive.
In this article, we explore the optical Bloch oscillations in Floquet systems and draw two essential conclusions: (1) Floquet lattice with a period ɅFL in a linear tilted potential leading to BOs with a period ɅBO can also be mapped onto another Floquet lattice with a period ɅFBO of the extended least common multiple (LCM) of ɅFL and ɅBO; (2) When ɅFL ≠ NɅBO, the Floquet-Bloch oscillations occur with a FBO period of LCM(ɅFL, ɅBO); when ɅFL = NɅBO, spreading usually occurs.We emphasize that all the above conclusions are tenable for arbitrary Floquet engineering with a rational ratio of ɅBO/ɅFL.Therefore, Floquet-Bloch oscillations are a unified phenomenon of the existing Bloch oscillations, namely super-Bloch oscillations (ɅFL ~ NɅBO) and quasi-Bloch oscillations (ɅBO = NɅFL).
We experimentally verified our prediction in one-dimensional curved waveguide arrays fabricated with femtosecond laser writing technology.With waveguide fluorescence microscopy, we directly visualized the breathing and oscillatory motions of Floquet-Bloch oscillations.We provided a detailed analysis of FBOs, and investigated fractal spectrum and fractional Floquet tunneling.More specifically, we found that the FBO period ɅFBO is the Thomae's function (a fractal spectrum) of the ratio ɅBO/ɅFL, and several peaks of such a fractal spectrum were experimentally confirmed.In addition, the modulation-induced rescaling of the FBO amplitude depends largely on the ratio ɅBO/ɅFL, which refers to fractional Floquet tunneling.By varying the amplitude of harmonic modulation, we experimentally demonstrated that such rescaling of FBO amplitude follows a linear combination of fractional-order Anger and Weber functions.Our demonstration provides a promising method for controlling wave transport in photonics with potential applications in self-imaging, optical communication, and photonic quantum simulations.
Theory of Bloch oscillations in a Floquet lattice
Here, we employ a femtosecond-laser-written waveguide array in a fused silica substrate (Corning 7980) as an experimental platform for visualizing optical BOs in a Floquet systems (39)(40)(41).As depicted in Fig. 1A, we first considered a curved photonic lattice that consists of identical waveguides with waveguide spacing d.In the transverse direction x, the center of each waveguide core varies along the longitudinal direction z by following a combined trajectory according to x0(z) = xBO(z) + xFL(z), where xBO(z) is the circular bending term with a bend radius R (R >> z) and xFL(z) = M(z) is the periodic bending term with a modulation period ɅFL and modulation function M(z) that satisfies M(z) = M(z + ɅFL).In the case of paraxial propagation along the longitudinal direction z, the envelope ψ(x, y, z) of the optical field guided in this photonic lattice at operating wavelength λ is governed by the Schrödinger-type equation: where is the Laplacian operator in the transverse plane, n0 ~ 1.46 is the refractive index of the substrate, k0 = 2πn0/λ is the wave number, and Δn(x, y, z) = n(x, y, z) -n0 is the femtosecond-laser-induced refractive-index increase (Δn > 0) that defines the entire photonic lattice.By considering a reference coordinate frame where the waveguides are straight in the z̃ direction, namely: x = x + x 0 (z) , y = y , and z̃ = z , the paraxial equation in the transformed coordinates can be expressed as with ψ = ψ(x , y , z)exp - Figure 1C displays the cross-sectional microscope image of a fabricated sample.Each waveguide in our sample supports a well-confined fundamental mode, allowing the application of nearest-neighbor tight-binding approximation, so the propagation of guided light can be described by the following set of coupled equations: where am is the amplitude of guided mode |m⟩ in the m th waveguide and c0 is the coupling constant between the nearest-neighbor waveguides.In the absence of force F(z), i.
and the
Houston function
where is the corresponding Floquet dispersion that provides the effective transport properties over a period ɅFBO.Under the single band approximation, the Floquet dispersion is expressed as where Equation (5) implies that there are two possibilities for optical Bloch oscillations in a Floquet system.When ɅFL ≠ NɅBO, a complete cancellation of all orders of diffraction = 0 results in flat Floquet dispersion ε(kx) ≡ 0, indicating that the state experiences a periodic motion and returns to the initial state after propagating a period ɅFBO.We call this phenomenon "Floquet-Bloch oscillations," because it is a combined phenomenon of Floquet engineering and Bloch oscillation.
is in general no longer flat and the state experiences spreading.We emphasize that the above conclusions are valid for an arbitrary modulation function M(z).In this connection, the existing BOs under harmonic modulation, namely QBOs (ɅBO = NɅFL) and SBOs (ɅFL ~ NɅBO), are the specific cases of FBOs.
Visual observation of Bloch oscillations in a Floquet lattice
To illustrate the similarity and difference between Floquet-Bloch oscillations and the existing Bloch oscillations, we employed a harmonic modulation M(z) = Acos(2πz/ɅFL) (see Fig. 1D) with modulation amplitude A. Without loss of generality, we considered four specific scenarios that correspond to typical BOs (A = 0), QBOs (ɅBO/ɅFL = 3), SBO-like oscillations (ɅBO/ɅFL = 4/3), and spreading (ɅBO/ɅFL = 1).The corresponding shifts of the transverse Bloch momentum k x (z) = 1F, where the harmonic modulation contributes a sub-oscillation to the states with Bloch-momentum-oscillation amplitude (2πAk0)/ɅFL.
In the latter three scenarios, we considered the modulation amplitude A = A0ɅFL/ɅBO so that the sub-oscillation amplitude was normalized to (2πA0k0)/ɅBO.To experimentally verify our prediction, we fabricated a set of 90-mm-long samples composed of 31 identical waveguides with a waveguide spacing d = 16 μm.With such a waveguide spacing d, the coupling coefficient between straight waveguides c0 ~ 1.45 cm −1 was experimentally characterized.These waveguides follow the combined trajectories having a bend radius R = 110.8cm (corresponding to ɅBO ~ 30 mm) and the modulation period ɅFL = 10, 22.5, and 30 mm (corresponding to the ratios ɅBO/ɅFL = 3, 4/3, and 1, respectively).With the considered modulation period, A0 = 18 μm was chosen to reduce the associated radiation losses of waveguides.Further details of the fabrication processes are provided in the Supplementary Materials.
Similar to the existing Bloch oscillations, Floquet-Bloch oscillations exhibit a breathing and an oscillatory motion under a single-site excitation and a broad-beam excitation, respectively.In the following experiments, we implemented visible-light excitation (λ = 633 nm) and directly visualized both the breathing modes and oscillating modes of Floquet-Bloch oscillations by using waveguide fluorescence microscopy (see the Supplementary Materials).A coordinate transformation that maps circular arcs into straight lines was applied to digitally process the fluorescence image so that the light evolution could be visualized more intuitively.
First, we focus on the breathing modes under a single-site excitation.The narrow excitation in the real space corresponds to a broad excitation of Bloch modes in the reciprocal space, resulting in strongly diffracting wave packets.To quantify the diffraction of wave packets for the single-site excitation, we define the variance of excitation at the distance z in such a discrete system as The light is initially excited in the central waveguide resulting in a vanishing variance σ 2 (0) = 0, and a rise of the variance indicates that the light experiences broadening.Under the single-site excitation, the experimental results, respective simulations, and extracted variances σ 2 (z) for the scenarios considered in Fig. 1F are summarized in Fig. 2, where Figs. 2 (A to C), (D to F), (G to I), (J to H) corresponding to typical BOs, QBOs, SBO-like oscillations, and spreading, respectively.
Without modulation (A = 0), Figs. 2 (A and B) displays the light evolution that corresponds to typical BOs, where the measured BO period ~30 mm is consistent with its theoretical value ɅBO = Rλ/(n0d).The light first broadens until it propagates half of the BO period and then focuses into the central waveguide again at the BO period, as σ 2 reaches its maximum at z ~ 15 mm and then decreases to zero at z ~ 30 mm (see Fig. 2C).When the modulation is introduced, Bloch oscillations in the Floquet lattice exhibit diverse transport properties as expected, where the ratio ɅBO/ɅFL makes a significant difference.For ɅBO/ɅFL = 3, the FBOs are observed and reduce into QBOs, where the FBO period ɅFBO is equal to the BO period ɅBO (see Figs. 2D and 2E).The QBOs pattern is basically similar to that of typical BOs, except that light experiences additional sub-oscillations, as σ 2 oscillates with dual periods (see Fig. 2F).For ɅBO/ɅFL = 4/3, the FBOs exhibit their similarity to SBOs, where the FBO period ɅFBO ~ 90 mm is much longer than the BO period ɅBO (see Figs. 2G and 2H).In addition to the extended FBO period, we also observed dramatic broadening of the light, as the maximum of σ 2 is far larger than that of typical BOs (see Fig. 2I).For ɅBO/ɅFL = 1, the evolution of light propagating from 0 to ɅFBO/2 cannot be cancelled with that propagating from ɅFBO/2 to ɅFBO.As a result, the condition for FBOs is destroyed and spreading occurs, where light exhibits ballistic spreading and is no longer localized (see Figs. 2J and 2K).The typical discrete diffraction pattern accompanied by oscillations is observed, as σ 2 oscillates around the gray-dashed where B1 is the first-order Bessel function (see Fig. 2L).
Next, we focus on the oscillation modes under a broad-beam excitation.The broad-beam excitation in the real space corresponds to a narrow excitation in the reciprocal space.In this case, the group velocity of beam motion in the lattices can be expressed as Vgroup(z) = −dβ(z)/dkx(z) = 2dc0sin(kx(z)d), and the transverse displacement Δx(z) of beam center is determined by Δx(z) = Here we define the weighted-average position of excitation at the distance z in such a discrete system as The excitation is located at the center of the lattice, i.e., x(0) = 0.During propagation, a rise (drop) of x(z) indicates that the light shifts toward the x (−x) direction.Here, we launched a 7-waveguidewide Gaussian beam at normal incidence to the edge of the substrate.This corresponds to a narrow spectrum centered at kx(0) = 0 in the reciprocal space.Under the broad excitation, the experimental results, respective simulations, and extracted trajectories of the beam x(z) (white dashed lines) for the scenarios considered in Fig. 1F are summarized in Fig. 3, where Fig. 3 (A and B), (C and D), (E and F), and (G and H) correspond to BOs, QBOs, SBOs-like oscillations, and spreading, respectively.Without modulation (A = 0), Fig. 3 (A and B) display the light evolution that corresponds to typical BOs, where the broad beam undergoes a sinusoidal oscillation with the BO period ɅBO.Similar to the breathing motion discussed previously, the oscillating motion exhibits diverse transport properties when the modulation is introduced.For ɅBO/ɅFL = 3, Fig. 3 (C and D) display the light evolution that corresponds to QBOs, where the trajectory of the broad beam follows a doubly oscillating function.The broad beam evolves along the x direction and returns to the initial position after propagating any multiple of the BO period ɅBO ~ 30 mm.For ɅBO/ɅFL = 4/3, Fig. 3 (E and F) display the light evolution that corresponds to SBOs-like oscillations, where the trajectory of the broad beam follows a giant doubly oscillating function with an extended period of ~90 mm.The maximal displacement of the broad beam for SBOs-like oscillations is observed at half of the FBO period, i.e., z ~ 45 mm.For ɅBO/ɅFL = 1, Fig. 3 (G and H) display the light evolution that corresponds to spreading.Although the trajectory of the broad beam follows an oscillating function, beam broadening is observed during propagation.As a result, the beam does not return to the initial state of excitation and the condition for FBOs is destroyed.
For both single-site and broad-beam excitations, the visual observations of fluorescence images and quantitative analyses have excellent agreement with the respective simulation results.
Therefore, our waveguide arrays are capable of accurately revealing BOs in Floquet lattices.
Fractal spectrum and fractional Floquet tunneling
We also made a quantitative analysis of FBOs.Firstly, we studied the dependence of ɅBO/ɅFBO on ɅBO/ɅFL.As shown in Fig. 4A, the theoretically predicted FBO period ɅFBO = LCM(ɅBO, ɅFL) determines that the FBO period spectrum follows the Thomae's function.One may find that the Thomae's function is a fractal structure composed of infinite discrete peaks, where the patterns exhibit self-similarity at increasingly smaller scales.Owing to limited sample lengths, we fabricated a set of samples with ɅBO/ɅFBO ≥ 1/6, fixed ɅBO = 30 mm, and varied ɅFL from 15 to 30 mm.As expected, we experimentally verified several peaks of such a fractal spectrum by fitting the measured and simulated variance σ 2 (z) under single-site excitation (see the Supplementary Materials).This fractal spectrum demonstrates the relationship between FBOs and SBOs.When ɅBO/ɅFL approaches 1, the Thomae's function can be approximated to a continuous linear function, implying that the FBOs reduce into SBOs with a period ɅFBO = ɅFLɅBO/(ɅBO − ɅFL).Moreover, the existence of FBOs is confirmed even for a large ɅBO/ɅFL, which goes far beyond SBOs.In agreement with theoretical prediction, such a spectrum reveals the fractal nature of the FBOs.
Secondly, under single-site excitations, we defined the FBO amplitude as σ 2 (ɅFBO/2) and studied the dependence of FBO amplitude on modulation amplitude A. We found that the harmonic modulation leads to a rescaling of FBO amplitude following the square of a linear combination of the Anger function J v
DISCUSSION
In summary, we report the first visual observation of optical BOs in a Floquet lattice and the investigation of Floquet-Bloch oscillations.In addition to the above-discussed cases with a harmonic modulation, we emphasize that Floquet-Bloch oscillations occur for arbitrary Floquet engineering M(z) and the corresponding experimental results are provided in the Supplementary Materials.The visual observation of Floquet-Bloch oscillations is a key to understanding the underlying transport mechanism, which has a significant impact on both fundamental research and practical applications.For fundamental research, our theoretical and experimental work enables the exploration of a branch of fundamental phenomena involving FBOs, such as the interplay between FBOs and binary lattices (16), non-Hermitian lattices (42), and optical nonlinearity (43).
For practical applications, the demonstrated manipulation of optical waves can be implemented in synthetic dimensions of time (44), frequency (45), and angular momenta (46,47), leading to applications in high-efficiency frequency conversion and signal processing.
ik0
2π z̃x (z)x -ik0 4π ∫ [ z̃x (τ)] τ z̃ and F(z) = -n ∂ z2 0 (z) .The additional term F(z) is determined by the combined trajectory and can be separated into two terms, i.e., F(z) = F BO + F FL , with FBO = n0/R and F FL = -n ∂ z2 M(z).Equation(2) indicates that the propagation of low-power light in the proposed lattice is analogous to the temporal evolution of noninteracting electrons in a periodic potential subject to an electric field, where the spatial coordinate z̃ acts as time t, F(z) plays the role of the electric field force F(t), and the refractive index profile [Δn(x , y , z) + F(z)x ] is related to the sign-reversed driven potential -V(t, x ).As sketched in Fig.1B, the effective potential -(Δn(x , y , z) + F FL x ) refers to a Floquet lattice, and the linear potential gradient FBO gives rise to BOs with a period ɅBO = λR/(n0d).As a result, our proposed scheme provides an experimental realization of optical Bloch oscillations in a Floquet lattice.
1 √N∑
e., for straight waveguide arrays, inserting a Bloch function ψ m,k x = e ik x dm |m m yields the single-band dispersion β(kx) = 2c0cos(kxd) (blue line in Fig. 1E), where β(kx) denotes the quasienergy and kx denotes the transverse Bloch momentum.According to the generalized acceleration theory (29), the presence of force F(z) leads to a shift of the transverse Bloch momentum k x is the reconstructed solution (see the Supplementary Materials).When PɅBO = QɅFL (Q, P are mutually prime integers), the extended least common multiple (LCM) of ɅBO and ɅFL is defined as LCM(ɅBO, ɅFL) = PɅBO = QɅFL, and β[kx(z)] is a z-periodic function with a period ɅFBO = LCM(ɅFL, ɅBO) (see the Supplementary Materials).Consequently, the integral of β[kx(z)] can be expressed as a sum of a linear function and a periodic function, i.e., ∫ β[k x (τ)]dτ z 0 = ε(k x )z + P(z) with P(z) = P(z + ɅFBO).As a result, the entire lattice can be mapped onto another Floquet lattice, since the Houston function can be reduced to Floquet states as order v = ɅFL/ɅBO, which refers to fractional Floquet tunneling (see the Supplementary Materials).Note that the rescaling of FBO amplitude depends largely on ɅBO/ɅFL, which provides a flexible way to manipulate the light.Figure4Bdisplays two examples of such Floquet tunneling, including QBOs (red line, ɅBO/ɅFL = 3) and SBOs-like oscillations (blue line, ɅBO/ɅFL = 4/3).Each curve is normalized to unity at its maximum.For the QBOs, the theoretically predicted FBO amplitude has a characteristic 2cos(π/3)E 1/3 -A/ɅFL.By contrast, the Floquet tunneling for the SBOs-like oscillations exhibits a different behavior, where the FBO amplitude has a characteristic 8J 3A/ɅFL.To verify our prediction, we fabricated two sets of samples with a varied modulation amplitude A and extracted the corresponding variance σ 2 (z) from the measured fluorescence images.As expected, one finds that the measured FBO amplitude has excellent agreement with its theoretical prediction.For the QBOs, with increasing amplitude A the FBO amplitude decreases before it reaches zero, indicating that the introduction of harmonic modulation will not broaden the FBO amplitude compared with the typical BOs (A = 0).For the SBOs-like oscillations, with increasing amplitude A the FBO amplitude first increases to its maximum around A = 22.5 μm and then decreases.The detailed experimental results are provided in the Supplementary Materials.
Fig. 1 .
Fig. 1.Photonic implementation and generalized acceleration theory.(A) Schematic of a onedimensional lattice composed of evanescently coupled waveguides with combined bending trajectory.(B) Schematic of a reduced Floquet lattice in the transformed coordinate frame.(C) Cross-sectional optical microscope image of the fabricated sample.(D) Top-view optical microscope image of the fabricated sample with a harmonic modulation.(E) Representation of F(z) induced wave vector shift according to the generalized acceleration theory.(F) z-dependent shift of the transverse Bloch momentum for several specific cases corresponding to typical BOs (A = 0, blue solid line), QBOs (ɅBO = 3ɅFL, orange dashed line), SBO-like oscillations (3ɅBO = 4ɅFL, red dashed line), and spreading (ɅBO = ɅFL, gray solid line).
Fig. 2 .
Fig. 2. Experimental visualization, simulation, and variance of the breathing modes for single-site excitation.(top) Fluorescence microscopy images of the wave evolution in curved waveguide arrays with a fixed circular bend radius R = 110.8cm (corresponding to ɅBO = 30 mm). (A) A = 0, corresponding to typical BOs; (D) A = 6 μm and ɅFL = 10 mm, corresponding to QBO; (G) A = 13.5 μm and ɅFL = 22.5 mm, corresponding to SBO-like oscillations; (J) A = 18 μm and ɅFL = 30 mm, corresponding to spreading.(middle) Simulated the wave evolution corresponding to those in (top).(bottom) Corresponding variances σ 2 of the measured (top) and simulated (middle) light evolution as a function of the propagation distance z.
Fig. 3 .
Fig. 3. Experimental visualization and simulation of the oscillating modes for broad-beam excitation.(top) Fluorescence microscopy images of the wave evolution in curved waveguides arrays with a fixed circular bend radius R = 110.8cm (corresponding to ɅBO = 30 mm). (A) A = 0, corresponding to typical BOs; (C) A = 6 μm and ɅFL = 10 mm, corresponding to QBOs; (E) A = 13.5 μm and ɅFL = 22.5 mm, corresponding to SBO-like oscillations; (G) A = 18 μm and ɅFL = 30 mm, corresponding to spreading.(bottom) Simulated wave evolution corresponding to those in (top).The trajectories of the beam x(z) extracted from the measured (top) and simulated (bottom) light evolution are marked as white dashed lines.
Fig. 4 .
Fig. 4. Fractal spectrum and fractional Floquet tunneling of FBOs.(A) Theoretical (blue stems) and measured (red dots) ratio ɅBO/ɅFBO as a function of the ratio ɅBO/ɅFL.The inset is a close-up spectrum at a finer scale, which show the property of self-similarity of this spectrum.(B) Normalized theoretical (lines) and measured (dots) FBO amplitude σ 2 (ɅFBO/2) as a function of the ratio A/ɅFL. | 4,956 | 2022-12-29T00:00:00.000 | [
"Physics"
] |
Improved Network and Training Scheme for Cross-Trial Surface Electromyography (sEMG)-Based Gesture Recognition
To enhance the performance of surface electromyography (sEMG)-based gesture recognition, we propose a novel network-agnostic two-stage training scheme, called sEMGPoseMIM, that produces trial-invariant representations to be aligned with corresponding hand movements via cross-modal knowledge distillation. In the first stage, an sEMG encoder is trained via cross-trial mutual information maximization using the sEMG sequences sampled from the same time step but different trials in a contrastive learning manner. In the second stage, the learned sEMG encoder is fine-tuned with the supervision of gesture and hand movements in a knowledge-distillation manner. In addition, we propose a novel network called sEMGXCM as the sEMG encoder. Comprehensive experiments on seven sparse multichannel sEMG databases are conducted to demonstrate the effectiveness of the training scheme sEMGPoseMIM and the network sEMGXCM, which achieves an average improvement of +1.3% on the sparse multichannel sEMG databases compared to the existing methods. Furthermore, the comparison between training sEMGXCM and other existing networks from scratch shows that sEMGXCM outperforms the others by an average of +1.5%.
Introduction
In human-computer interfaces (HCIs), hand movements commonly offer a natural way for users to interact with the computer [1].There are multiple ways to recognize hand gestures, such as vision- [2], WiFi- [3], and radar-based approaches with off-body sensors [4], as well as approaches based on biosignals such as surface electromyography (sEMG) and electroencephalography (EEG) [5,6].Among these approaches, the sEMG-based musclecomputer interface is attracting increasing attention due to its robustness to the deployment environment and its non-invasive nature [7].
With the recent advancement of deep learning techniques, a common method for sEMG-based gesture recognition is to translate the sEMG signals to images and then Convolutional Neural Network (CNN) [5,8,9] is trained for classification.However, these models only capture the spatial information of sEMG signals without considering the temporal information.To address this issue, recurrent neural networks (RNNs) [10] and the hybrid CNN-RNN [11,12] are adopted to extract both spatial and temporal features from sEMG signals and achieve better performances compared to CNN.However, RNN and CNN-RNN are rarely used in real-time HCIs due to their slow computation.Motivated by this fact, we propose an improved network, namely sEMGXCM (Figure 1).In this network, spatial and temporal features are extracted in parallel using 2D and 1D convolutional layers, respectively.After the extracted features are fused, a self-attention layer [13] is added to model the association across electrodes.To validate the effectiveness of sEMGXCM, we conducted a fair comparison between sEMGXCM and other three existing deep networks, GengNet [5], XceptionTime [9], and XCM [14].The performances of these networks are obtained by training them from scratch and adopting cross-trial gesture recognition accuracy as the evaluation metric.Despite the improvement brought by network design, cross-trial gesture-recognition performance is still far from optimal.A trial commonly represents a repetition of performing a hand gesture when wearing electrodes without removing them [15].Then, the cross-trial gesture recognition accuracy could indicate the performance of a trained classification model during the longtime use of an sEMG-based application.Thus, it is essential to build a classification model with high cross-trial gesture-recognition accuracies.Motivated by the cross-modal association between sEMG signals and hand movements [16], we aim to model another type of association across different trials within the same sEMG modality.Based on these two kinds of associations, we propose a novel scheme, called sEMGPoseMIM (Figure 2), to enhance the training of sEMG-based classification models, such as GengNet [5], XceptionTime [9], and XCM [14].Specifically, sEMGPoseMIM consists of two stages that, respectively, model the cross-modal (i.e., sEMG signals and hand movements) association and invariant information across different trials.In the first stage, inspired by the study of mutual information (MI) [17], we aim to train an encoder that generates trial-invariant representations.To do this, we sample pairs of sEMG sequences from different trials in the same time step.Then, the sEMG sequences of a pair are fed into the encoder, whose output is disentangled into a gesture-relevant representation and a trial-relevant representation.Subsequently, the mutual information between the two representations from a single sEMG sequence is minimized through a likelihood estimator to ensure the disentanglement, as Belghazi et al. do in [18].In addition, the cross-trial mutual information between the gesture-relevant representation and trial-relevant representation from the two respective sEMG sequences of a sampled pair is maximized to mitigate the impact across different trials.In this way, an encoder producing trial-invariant representations is obtained.In the second stage, we aim to leverage the invariance of hand movements across different trials.To this end, we adopt a common knowledge-distillation method [19] to align the feature spaces of two modalities (i.e., sEMG signals and hand movements).Firstly, a teacher network of the hand movements modality is supervisedly trained to classify hand gestures.Next, a student network based on sEMG signals is initialized using the parameters learned in the first stage and then is jointly trained through classification loss as well as Kullback-Leibler divergence loss to the output of the well-trained teacher network.We validate the effectiveness of sEMGPoseMIM by comparing the performance of training using the scheme sEMGPoseMIM with that of training from scratch.In addition, the effect of the components of sEMGPoseMIM is verified.
The main contributions of this paper are summarised as follows.
•
We design a new end-to-end convolutional neural network for cross-trial sEMG-based gesture recognition, namely sEMGXCM, that captures the spatial and temporal features of sEMG signals as well as the association across different electrodes.The parameter number of the self-attention layer increases as the number of electrodes increases, so sEMGXCM is utilized for sparse multichannel sEMG signals.
•
We present a novel two-stage training scheme called sEMGPoseMIM for cross-trial sEMG-based gesture recognition.Specifically, the first stage is designed to maximize the mutual information between the pairs of cross-trial features at the same time step to produce trial-invariant representations.And the second stage models the cross-modal association between sEMG signals and hand movements via cross-modal knowledge distillation to enhance the performance of the trained network.
Related Work 2.1. sEMG-Based Gesture Recognition
The sEMG signal is recorded using electrode contact with the skin during the contraction of skeletal muscles [24], which is non-invasive and robust to environmental conditions.Recently, sEMG-based gesture recognition has attracted much attention due to its broad potential in the area of sign language, medical rehabilitation, virtual reality, and so on [25].The approaches to tackle this classification problem can be categorized into conventional machine-learning-based approaches and deep-learning-based ones [5,7,9,11,26].The former usually consists of three steps, including preprocessing sEMG signals, handcrafted feature extraction, and classification using the extracted features.Various handcrafted sEMG features are adopted, such as temporal-spatial descriptors (TSDs) [27], Discrete Wavelet Transform Coefficients (DWTCs) [28], and Continuous Wavelet Transform Coefficients (CWTCs) [29].Given the extracted features, the conventional machine learning classifiers, such as the Support Vector Machine (SVMs) [30] and Random Forests [23], are employed for classification.However, handcrafted feature extraction often requires domain expertise and the manual engineering of features, which can be time-consuming and resource-intensive.In contrast, deep models could automatically learn relevant features from raw data, eliminating the need for explicit feature engineering.For example, Geng et al. [5] converts the sEMG signal into a grayscale image, and a network composed of multiple 2D convolutional layers is utilized to recognize it.However, 2D convolutional layers are hardly used to capture the temporal information of signals.Motivated by this fact, RNN [10] is specifically designed to handle sequential data, making it suitable for capturing temporal dependencies.Unlike traditional feed-forward networks, it possesses an internal memory that retains information about prior inputs.This memory enables RNNs to process data sequentially and consider the context of previous inputs when making predictions.Furthermore, the hybrid CNN-RNN [11] is proposed by combining the strengths of both CNNs and RNNs.CNNs excel at extracting spatial features through the use of convolutional filters, while RNNs specialize in handling sequential information.By integrating these two architectures, the hybrid model can simultaneously capture spatial and temporal features.Furthermore, XceptionTime [9] utilizes 1D convolutional layers to extract fine-grained temporal information in time-series sEMG data.
Besides the input of converted images, Côté-Allard et al. [31] employs spectrograms extracted from sEMG signals as the input of a CNN.Wei et al. [8] fed vectors of multiple handcrafted features into a multi-stream convolutional neural network, and the approach made significant improvements in cross-trial gesture recognition.During the collection of sEMG signals, data of other modalities may be collected simultaneously [20][21][22].Therefore, multimodal gesture-recognition methods that fuse the features of multimodal data are introduced to achieve further improvement [32,33].In contrast, Hu et al. [16] utilized the hand poses to model the cross-modal association via adversarial learning during the training phase and improved the cross-trial gesture recognition performance during the test phase, barely using sEMG signals.Our training scheme sEMGPoseMIM is also formulated as multimodal training but is a unimodal evaluation.
Mutual Information and Cross-Modal Learning
In this work, our target is to learn trial-invariant representations of sEMG signals and make use of multimodal data during the training phase.Recently, mutual information (MI) [34] has been widely used in representation learning such as subject-invariant braincomputer-interface [35] and view-invariant human-pose estimation [36].However, for the applications of muscle-computer interfaces (MCIs), mutual information is often utilized to select channels [37] or features [38].Unlike these approaches, the maximization of mutual information is used for trial-invariant representation learning in our work.
To model the inherent relationship between the sEMG signals and finger movements, cross-modal learning-based methods are reviewed next.We shall focus on the approaches designed for pattern recognition.Hu et al. [16] performed the cross-modal transformation of sEMG signals and hand movements to obtain a fused feature of these two modalities.Liu et al. [39] also utilized cross-modal transformation to obtain more discriminative imagined visual features from a single modality.In addition to cross-modal transformation, Gu et al. [40] mapped noisy data from RGBD and wearable sensors to accurate 4D representations of lower limbs to perform abnormal gait-pattern recognition via cross-modal transfer.Considering that finger movements are more discriminative and generalized for gesture recognition, we follow [41] to utilize cross-modal knowledge distillation instead of transformation to model the relationship between these two modalities.
sEMGXCM
In this subsection, we present our improved network for cross-trial sEMG-based gesture recognition (sEMGXCM).Specifically, we demonstrate the architecture of the network and then explain the novelty of sEMGXCM.
Existing networks for sEMG-based gesture recognition tend to only extract temporal features (e.g., XceptionTime [9]) or spatial features (e.g., GengNet [5]).The spatial information of sEMG signals could indicate the spatial arrangement of electrodes, such as ring-like and matrix-like arrangements, as well as the muscle activities of different muscle groups.On the other hand, temporal information could provide valuable insights into the dynamic nature of sEMG signals, and the temporal relationships between different signal segments allow for a more comprehensive understanding of the underlying physiological processes.These factors will lead to a more accurate classification of different hand gestures.Although handcrafted features can be extracted to cover both spatial and temporal scenarios [8], it is time-consuming to obtain them.Therefore, we follow the dual-stream architecture of XCM [14], which is designed for multivariate time series data classification, to simultaneously extract spatial and temporal features.
As shown in Figure 1, the temporal stream consists of two 1D convolution blocks, and the spatial one contains two 2D convolution layers and two 2D locally connected layers.The kernel size of the 1D convolution filters is set to W × C, where W and C denote the time window size and the number of electrodes, respectively.As 1D convolution filters slide over the time axis, the temporal stream shall capture the information across different timestamps.On the other hand, the spatial stream follows the architecture of GengNet [5] as shown in Figure 3.The locally connected layers of the spatial stream extract features of which electrodes indicate the specific hand gesture.Note that the hand movements are driven by specific muscle groups, and the features extracted by the spatial stream are explainable.Given the temporal and spatial features, a fusion operation is conducted, followed by a self-attention layer to learn the influence of different electrodes or time steps on gesture recognition.Specifically, inspired by [13], a four-head self-attention layer followed by a feedforward layer was adopted to not just focus on the current electrode or time step but also obtain information about the context.In the following step, we added the same aforementioned 1D convolution block and a 1D global average pooling to improve the generalization ability of sEMGXCM.Finally, we performed classification with a softmax layer.
Conv64 @3x3
Conv64 @3x3 LC64 @1x1 LC64 @1x1 In the field of image classification, 2D convolutional layers that apply multiple filters, each with different weights, can learn to extract different types of spatial information from images [42].On the other hand, 1D convolutional layers are mainly used to extract temporal information from time-series signals, such as audio and speech [43,44].In our network sEMGXCM, two streams that use 2D and 1D convolutional layers, respectively, to extract spatial and temporal features.In contrast, GengNet only uses 2D convolutional layers and XceptionTime only for 1D convolutional layers.Therefore, sEMGXCM can make use of both spatial and temporal information from the input sEMG signals.In addition to XCM [14], a self-attention layer is added to learn the information across different electrodes or time steps.
sEMGPoseMIM
In this subsection, we present a novel two-stage training scheme to enhance the networks for sEMG-based gesture recognition.As an instance of hybridization engineering [45], this training scheme (called sEMGPoseMIM, shown in Figure 2) is inspired by mutual information across different trials, as well as the inherent relationship between sEMG signals and hand movements.Specifically, we aim to generate trial-invariant representations from sEMG signals via maximizing cross-trial mutual information in the first stage.In the second stage, the initialized model is fine-tuned via cross-modal learning with another modality (i.e., hand movements).
In this work, mutual information maximization is applied during the training phase to learn a trial-invariant representation, which is significantly different from previous works, in which mutual information is used for channel or feature selection [46,47].In addition, cross-modal knowledge distillation is utilized to capture the inherent correlation between sEMG signals and hand movements, enhancing the learned trial-invariant representation.
Stage 1: Cross-Trial Mutual Information Maximization
Given an input sEMG sequence x, we aim to learn an encoder E semg to produce a trial-relevant representation v and a gesture-relevant representation u, while v and u are expected to be disentangled.In other words, two conditional distributions p(v|x) and p(u|x) are estimated by training the encoder E semg .Therefore, we can recognize the same gesture of one subject from different trials.
An anchor sEMG sequence x t i is constructed by capturing sEMG signals starting from time step t of the i-th trial.For each anchor sEMG sequence, we match a positive sEMG sequence x t j that is sampled from the same time step of another trial j.Then, an encoder E semg is employed to generate a trial-relevant representation v t i ∈ R d and a gesture-relevant representation u t j ∈ R d given an sEMG sequence x t i .To learn a cross-trial representation, E semg is trained via the maximization of cross-trial mutual information using the following objective Equation (1): where the first term is a conventional MI-based representation objective, and the second term maximizes the MI between the input sEMG sequence and its cross-trial counterpart.
In this way, the learned representation, i.e., (v t i , u t i ), could capture the gesture-relevant information maintained from different trials.
In fact, the gesture-relevant representation u and trial-relevant representation v are conditionally independent as they are assumed to be disentangled.To ensure this disentanglement between u and v, a regularization term L inter based on their mutual information is introduced.The information for u and v shall be made mutually exclusive by minimizing this regularization term L inter .
Considering that the contrastive log-ratio upper-bound MI estimator [48] is consistent with disentanglement, we leverage it to estimate the probability log-ratio between the positive pair log p(v|u) and the negative one log p(v |u).But the conditional relation between v and u is unavailable in our case.Hence, we utilize a likelihood estimator Q to predict a variational distribution q(v|u) for approximating p(v|u).Overall, the objective function of the encoder E semg can be formulated as Equation (2).min where q denotes the estimated possibility likelihood with the estimator Q.Meanwhile, Q is trained to minimize the KL divergence [49] between the true conditional probability distribution p(v|u) and the variational one q(v|u) as Equation (3).
In this paper, we assume that q(v|u) follows a Gaussian distribution, so Equation ( 3) can be solved via maximum likelihood estimation.
Overall Objectives Given the pairs of input sEMG sequences (x t i , x t j ), the overall objective in the first stage is shown as Equation ( 4): where u i and v t i denote the gesture-relevant representation and trial-relevant representation by feeding x t i into E semg .And u t j and v t j are similarly obtained by feeding x t j into E semg .In addition, lambda 1 and lambda 2 denote the weights of corresponding loss items.
Stage 2: Cross-Modal Knowledge Distillation
To further enhance the discrimination of the representation learned in the first stage, we leverage cross-modal knowledge distillation to model the relationship between sEMG signals and hand movements.Specifically, we utilize a typical knowledge-distillation [41] method to map the feature spaces between these two modalities.Our target is to learn the invariant information that hand movements carry across different trials and force the sEMG encoder to mimic it.The procedure for this stage is as follows: Firstly, a teacher network (i.e., E pose • C pose ) is trained with supervision to classify hand gestures using the modality of hand movements.Usually, the hand movements are captured using data gloves or artificially generated in accordance with the transition of a specific hand gesture.Secondly, a student network (E semg • C semg ), which is initialized in the first stage, is trained jointly with classification loss and Kullback-Leibler (KL) divergence loss [41] to the output of the teacher network E pose • C pose .
Objectives We denote the input of the Softmax layer in the teacher network and the student network as Z = (z 1 , z 2 , . . ., z N ) and Z = (z 1 , z 2 , . . ., z N ), respectively.The classification loss is computed via the cross-entropy loss between predictions and ground truth as Equation (5).
where I c is the indicator function for y i equal to c and N denotes the number of gestures to be identified.On the other hand, the formulation of KL divergence loss for the two modalities (i.e., sEMG signals and hand movements) is given as Equation ( 6).
In Equation (6), p(x c ) and q(x c ) are obtained by feeding Z and Z into the Softmax layer, respectively.Their formulations are displayed in Equation ( 7): ∑ j e z j T (7) where T denotes the temperature-scaling hyperparameter.It is commonly set to 1; a higher value makes the probability distribution over gesture labels softer [41].Then, the overall loss is computed as Equation ( 8): where α is the balance weight of KL divergence loss.
Datasets and Data Preprocessing
We conducted evaluations on seven sparse multichannel sEMG datasets [20][21][22][23] (denoted as NinaPro DB1-NinaPro DB7).The specific information of these seven datasets is displayed in Table 1.There are multiple trials in each NinaPro dataset, where a trial represents a repetition of performing a gesture with equipped electrodes.In some NinaPro databases (i.e., NinaPro DB1, NinaPro DB2 and NinaPro DB5), additional modalities such as acceleration and hand poses are recorded.However, hand poses are unavailable in the remaining NinaPro datasets.With regard to this situation, pseudo hand poses are generated by simulating the dynamic process of hand pose variation following [16].Specifically, the hand pose that a hand gesture ends with is estimated at first, and then a spherical interpolation between the neutral hand pose and the estimated ending hand pose is conducted to obtain hand movements aligned with sEMG signals.We adopt the same data-preparation procedure as the previous work [8, 9,16] for a fair comparison.To mitigate noise, a low-pass Butterworth filter and an RMS filter are utilized for NinaPro DB1 and the remaining datasets, respectively.Subsequently, each trial of sEMG signals is segmented using a sliding window over 200 ms to satisfy real-time usage constraints [50] following previous work [8].Lastly, µ-law normalization [51] is leveraged to normalize the filtered sEMG signals in terms of Equation (9).
where sign is an indicator function that equals 1 if the input is larger than 0 and otherwise is −1.And µ is set to 256 in this work.
Evaluation Metrics
The evaluation metric in this paper is cross-trial gesture recognition accuracy, which is the same as [8,16].Specifically, all the trials of each subject are divided into a training set and a testing set.The gesture-recognition accuracy is obtained by training our model on the training set and evaluating it on the testing set.Then, the mean gesture-recognition accuracy of all the subjects is computed as the evaluation metric.The specific split strategy of trials is described in Table 1.
Implementation Details
Our network sEMGXCM and training scheme sEMGPoseMIM are implemented with PyTorch, and their codes will be open-sourced online upon acceptance.In the first stage of sEMGPoseMIM, E semg is initialized using the Xavier Initialization method, and an SGD optimizer with a batch size of 128 is leveraged for all the datasets.The likelihood estimation network Q consists of two fully connected layers.E semg and Q are simultaneously trained, and their learning rates are initialized at 0.001 and 0.005, respectively.The training epochs of E semg and Q are both set to 30.In the second stage of sEMGPoseMIM, the architecture of E pose is derived from XceptionTime.Both C semg and C pose consist of a fully connected layer and a Softmax layer whose output dimension equals the number of gestures to be classified.An SGD optimizer with a learning rate set to 0.1 is employed and 28 training epochs are conducted while the learning rate is reduced by a factor of 0.1 at the 16th and 24th epochs.
Next, we present how to generate pairs of sEMG signals of the first stage.We need to align the trials of each subject, due to the fact that the time lengths of trials slightly vary.Note that all the trials of each gesture follow the same dynamic process, which consists of three phases, making, holding, and ending gestures; we can align the trials via their minimum length by dismissing the information of the ending phase.After that, given an anchor sEMG x t i from trial i, we randomly select another trial j and sample from it at time step t to obtain the positive sEMG x t j .
Comparison of Networks on Cross-Trial sEMG-Based Gesture Recognition
In this part, we conduct a fair comparison between four different networks, GengNet [5], XceptionTime [9], XCM [14] and the proposed network sEMGXCM, on seven sparse multichannel sEMG databases (i.e., NinaPro DB1-NinaPro DB7).We train these networks from scratch on these seven datasets using the evaluation metric of cross-trial gesture recognition accuracy.As shown in the parentheses of Table 2, the proposed improved network sEMGXCM outperforms the other three networks.Among these networks, GengNet achieves the lowest cross-trial gesture recognition accuracy, and it exhibits the highest performance on NinaPro DB1 while demonstrating the lowest performance on NinaPro DB3.Compared with the state-of-the-art network (i.e., XceptionTime), our network sEMGXCM achieves significant improvements of +5.4%, +2.9%, +1.2%, +7.5%, +5.6%, +5.7% and +5.3% on NinaPro DB1-NinaPro DB7.
Note that sEMGXCM is derived from XCM [14]; we compare their performances to validate the superiority of sEMGXCM for the specific task of sEMG-based gesture recognition.Table 2 shows that sEMGXCM achieves higher recognition accuracies than XCM on the evaluated datasets.On the other hand, we leverage the Wilcoxon signed rank test (p < 0.05) on each dataset to demonstrate the significance of the improvements brought by sEMGXCM.And the improvements and p-values (in brackets) are +0.7%(0.0176), +1.3% (0.0067), +0.5% (0.0218), +0.8% (0.0149), +1.1% (0.0097), +0.3% (0.0432) and +0.3% (0.0419), respectively.Thus, we can infer that the additional self-attention layer and the modified spatial stream contributed to the significant improvements.
Effectiveness of sEMGPoseMIM
To demonstrate the effectiveness of the proposed training scheme sEMGPoseMIM, we trained four networks, GengNet [5], XceptionTime [9], XCM [14], and the improved network sEMGXCM, using the training scheme sEMGPoseMIM.The experiments were also conducted on NinaPro DB1-NinaPro DB7, and cross-trial gesture recognition was adopted as the evaluation metric.The comparisons between training from scratch and training via sEMGPoseMIM are displayed in Table 2.We can see that sEMGPoseMIM outperforms the scheme of training from scratch regardless of the network architectures.The improvements achieved by training GengNet using sEMGPoseMIM are +1.1%,+9.2%, +16.0%, +2.6%, +4.9%, +3.7%, and +3.2%.As the performance of GengNet is far from optimal, the improvements brought about by sEMGPoseMIM are much more significant compared with the other three networks.With regard to the other three networks, sEMGPoseMIM could achieve improvements of at least +1.2% on the evaluated datasets.These results indicate the significant effectiveness of the proposed training scheme sEMGPoseMIM.
Furthermore, we compared the performance of training sEMGXCM using the training scheme sEMGPoseMIM with that of existing sEMG-based gesture recognition approaches.This comparison was also conducted on NinaPro DB1-NinaPro DB7, and the evaluation metric of cross-trial gesture recognition accuracy was adopted.As shown in Table 3, our method (i.e., sEMGXCM+sEMGPoseMIM) outperformed the state-of-the-art approach CMAM [16], where hand poses were directly generated using sEMG signals and then fused with the input sEMG.The specific improvements achieved using our method on NinaPro DB1-NinaPro DB7 were +1.3%, +1.5%, +0.8%, +2.6%, +1.7%, +0.8%, and +0.6%, which provides further evidence of the effectiveness of sEMGPoseMIM.
Variation on Each Stage
We also validated the effects of each stage in sEMGPoseMIM by comparing four training schemes, training from scratch, only on the first stage, only on the second stage, and on both stages (i.e., sEMGPoseMIM).To train sEMGXCM only on the first stage, we fine-tuned the sEMGXCM, whose parameters are initialized in the first stage via cross-trial mutual information maximization.With regard to the second stage, we initialized the sEMG encoder using the Xavier Initialization method and trained the sEMGXCM, as in the second stage.
As shown in Table 4, each stage of sEMGPoseMIM contributes to its performance improvement.Compared with training from scratch, the cross-trial mutual information maximization in the first stage brought improvements of +0.7%, +0.2%, +2.1%, +0.7%, +0.8%, +0.8% and +0.7% over NinaPro DB1-NinaPro DB7.The effects of cross-modal knowledge distillation in the second stage over NinaPro DB1-NinaPro DB7 are +0.5%,+0.8%, +1.7%, −0.9%, +0.6%, 0.0% and +0.2%.On most datasets except NinaPro DB4 and NinaPro DB6, training only in the second stage outperformed training from scratch.When both stages were utilized, the performance improvements over the evaluated datasets were significant and improvements of at least +1.0% were achieved.These experimental results indicate that both stages of sEMGPoseMIM are essential for enhancing the classification model.
Discussion
In the proposed network sEMGXCM, we used the GengNet architecture to extract spatial features.The reason why we chose it lies in a good trade-off between the number of parameters and performances on the NinaPro datasets.In addition, GengNet achieved superb performance on high-density sEMG-based gesture recognition [5], indicating that GengNet extracts more discriminative spatial features.
Furthermore, we compared the results of classic models on NinaPro DB1 and Ni-naPro DB2 in previous works with the experimental results of training four networks to gain a more comprehensive insight.As depicted in [20], Random Forests was adopted to train on the NinaPro DB1 and NinaPro DB2, and then recognition accuracies of 75.32% and 75.27% were obtained, respectively.We can see that sEMGXCM largely outperforms classic models, which further indicates the effectiveness of the proposed network.
Conclusions
In this paper, we propose a novel end-to-end convolutional neural network for crosstrial gesture recognition based on sparse sEMG signals, namely sEMGXCM.By capturing the spatial and temporal information of sEMG signals, as well as the correlation across different electrodes (i.e., channels), sEMGXCM achieves superior performances on seven sparse sEMG datasets (i.e., NinaPro DB1-NinaPro DB7).Additionally, we introduced a novel two-stage training scheme called sEMGPoseMIM to enhance the classification model.Specifically, a trial-invariant representation is learned using mutual information maximization in the first stage.Subsequently, the inherent relation between the sEMG signals and hand movements is modeled via cross-modal knowledge distillation to obtain a more discriminative representation.To the best of our knowledge, mutual information and cross-modal knowledge distillation are for the first time simultaneously employed for sEMG-based gesture recognition.Moreover, our training scheme sEMGPoseMIM is networkagnostic and can be applied to most convolutional networks for gesture recognition based on sEMG.
To validate the effectiveness of sEMGXCM and sEMGPoseMIM, we conducted comprehensive experiments on NinaPro DB1-NinaPro DB7.The comparison between sEMGXCM and existing networks for sEMG-based gesture recognition was performed by training these networks from scratch on the seven datasets.The experimental results show that sEMGXCM outperforms the state-of-the-art network for cross-trial gesture recognition based on sparse sEMG signals.Furthermore, the proposed training scheme sEMGPoseMIM is utilized to train four different networks (i.e., GengNet, XceptionTime, XCM and sEMGXCM) for validating the effectiveness of sEMGPoseMIM.The results demonstrate that sEMGPoseMIM can bring improvement for cross-trial gesture recognition based on sEMG.Furthermore, an ablation study on the effect of each stage in sEMGPoseMIM was conducted, and the results suggest that every stage is required, as skipping any stage leads to reduced performance.
Our future work will focus on the extension of the proposed training scheme on inter-subject or inter-session sEMG-based gesture recognition, which is much more difficult than cross-trial sEMG-based gesture recognition.We also plan to leverage a more effective approach to model the relationship between sEMG signals and hand movements, such as causal representation learning [53] and contrastive learning [54].Furthermore, our method may lack resilience [55] because the sEMG signals are sensitive to the electrodes.However, it is truly important for human-computer interfaces to retain the resilience.We will also pay more attention to it in our future work.
Figure 1 .
Figure 1.The architecture of sEMGXCM, which is end-to-end and double-stream, used as the backbone of E semg .
Figure 2 .
Figure 2.An overview of our proposed network-agnostic training framework, namely sEMGPoseMIM, for intra-subject sEMG-based gesture recognition.The positive sEMG x j is sampled from a different trial from that of the anchor sEMG x i at the same time window.
Figure 3 .
Figure 3.The architecture of the GengNet Module in the network sEMGXCM.Conv and LC, respectively, denote the 2D convolutional layer and 2D locally connected layer.The number following the layer name and the number after the ampersand denote the number of filters and the convolutional kernel size, respectively.
Table 2 .
Gesture-recognition performance of the four networks through training from scratch (shown in parentheses) and training using sEMGPoseMIM on NinaPro DB1-NinaPro DB7.The bold entries indicate the best performance on the corresponding dataset.
Table 3 .
Gesture-recognition accuracies (%) on the benchmark NinaPro sEMG databases.The reported performance was achieved with sEMG windows of 200 ms.The bold entries indicate the best performance on the corresponding dataset.
Table 4 .
Effects of each stage on gesture-recognition performance over NinaPro databases.The baseline method in this table refers to directly training sEMGXCM from scratch.The bold entries indicate the best performance on the corresponding dataset. | 7,200.2 | 2023-09-01T00:00:00.000 | [
"Computer Science"
] |
Effect of interactions on the quantization of the chiral photocurrent for double-Weyl semimetals
The circular photogalvanic effect (CPGE) is the photocurrent generated in an optically active material in response to an applied ac electric field, and it changes sign depending on the chirality of the incident circularly polarized light. It is a non-linear dc current as it is second-order in the applied electric field, and for a certain range of low frequencies, takes on a quantized value proportional to the topological charge for a system which is a source of nonzero Berry flux. We show that for a non-interacting double-Weyl node, the CPGE is proportional to two quanta of Berry flux. On examining the effect of short-ranged Hubbard interactions upto first-order corrections, we find that this quantization is destroyed. This implies that unlike the quantum Hall effect in gapped phases or the chiral anomaly in field theories, the quantization of the CPGE in topological semimetals is not protected.
Semimetals are materials which can support gapless quasiparticle excitations in two or three dimensions, in the vicinity of isolated band touching points in the Brillouin zone, thus possessing discrete Fermi points (rather than Fermi surfaces). They come in different varieties, for example, the Fermi points may appear at linear band crossings (e.g. graphene, Weyl semimetals), or at quadratic band crossings 1,2 (e.g. Luttinger semimetals). A more non-trivial example of such semimetal is the double-Weyl semimetal, which consists of two bands touching each other linearly along one momentum direction, but quadratically along the remaining directions. [3][4][5][6][7] Some of these three-dimensional (3d) semimetals (e.g. Weyl and double-Weyl semimetals) possess a nonzero Berry curvature at the Fermi nodes. In this paper, we focus on the 3d double-Weyl semimetals, 2,8,9 which, in the momentum space, have double the monopole charge of Weyl semimetals.
A double-Weyl semimetal can be realized by applying a Zeeman field to an isotropic Luttinger semimetal. 2 They are also predicted to appear 9,10 in the ferromagnetic phase of HgCr 2 Se 4 . Our aim is to study the circular photogalvanic effect (CPGE), also known as chiral photocurrent. The CPGE refers to the dc current, that is generated as a result of shining circularly polarized light on the surface of an optically active metal. [11][12][13][14] In fact, the CPGE refers to the part of the photocurrent that switches sign with the sign of the helicity of the incident polarized light. This is a non-linear response, as it is second order in the applied ac electric field, and at low frequencies, it depends on the orbital Berry phase of the Bloch electrons. Hence, CPGE is a measure of the topological charge at a Fermi node possessing a nontrivial Berry curvature.
The quantization of the CPGE has been demonstrated in earlier works for the topological Weyl nodes. 15,16 In this paper, we will consider the issue of quantization of CPGE for the double-Weyl nodes. Firstly, we will show that in the absence of interactions, the CPGE is indeed proportional to the topological charge of the node at low enough frequencies.
Secondly, we will examine the effect of Hubbard interactions on this quantized value.
II. THE CONTINUUM HAMILTONIAN FOR A DOUBLE-WEYL SEMIMETAL
The Hamiltonians describing a pair of double-Weyl nodes can be written in the form 2,8,9 Here, σ i (i = x, y, z) are the three Pauli matrices, and the "±" sign reflects the two opposite chiralities of the two nodes. The energy eigenvalues are: For each the given two-band Hamiltonians, we can define an U (1) Berry flux, which is analogous to a magnetic field in momentum space. This Berry flux is given by: It is easy to check that this magnetic field is divergenceless ∂ pj B j ± = 0 , as long as it is computed in regions away from the points of singularity where b ± = 0. The band touching point is such a singularity, where we have: Thus each double-Weyl node is a source of two Berry flux quanta. These nodes come in pairs, sourcing equal and opposite flux quanta, such that the sum of Berry flux quanta from both the double-Weyl nodes vanishes, which is the desired physical scenario as the Brillouin zone is a closed manifold without any boundary through which no net flux can emanate.
III. QUANTIZATION OF CPGE IN THE ABSENCE OF INTERACTIONS
The CPGE tensor is defined as: 15,16 where e A in the electric charge. To perform the integrals, we change variables as follows: Using the above, we get: All non-diagonal components β ij ± i =j evaluate to zero. Clearly, we see that where is ±2 the monopole charge of the corresponding double-Weyl node. The CPGE injection current is defined as the second order response to an electric field E(ω) = E * (−ω). Therefore, the CPGE is also quantized. Now let us compute the second-order photocurrent from the field-theoretic defintion, using Feynman diagrams. Firstly, we need the three components of the paramagnetic current operator (using δki ), which are given by: From now on we will drop the "±" subscript/superscript and concentrate only on the double-Weyl node with charge +2, unless stated otherwise. This is justified when the dc contribution to the photocurrent can be calculated separately for each node, such as when the nodes are well separated in the momentum space. The expression for the second-order photocurrent is given by: where Ω ≡ ω 1 + ω 2 , and the contributions χ jli 1 and χ jli 2 are given by Feynman diagrams of the type shown in Fig. 1. In the second line, we have used the relation between the electric field and the vector potential, which is: We compute the analytical expressions for χ jli 1,2 in the Matsubara formalism, such that where T is the temperature, n is an integer, and ε n = (2 n + 1) π T .
In the zero temperature limit, we can use T εn . . . → dε 2 π . . . . Furthermore, from the expression for χ ijl 1 (i ω 1 , i ω 2 ), we can obtain χ jli 2 by using the relation: In the absence of interactions, we can calculate the contributions from each node separately. The Green's function for the first double-Weyl node is given by: where we have introduced the projectors 1 ±b + (k) · σ onto the conduction ("+") and the valence ("-") bands, and have chosen the chemical potential µ to be negative for definiteness (i.e. µ < 0). Similarly, the Green's function for the second double-Weyl node is given by: where we have chosenμ > 0 for definiteness.
Performing all the integrals, we finally get: for T → 0 .
One can check that χ ijl 1 ∝ ε ijl , and hence the computation of χ 123 1 is sufficient to know all the nonzero components of χ ijl 1 . To find the actual physical response, we need to perform the analytical continuation of the above expression to real frequencies. This is a subtle procedure which should be carried out carefully. Choosing ω 1,2 > 0 for definiteness, the analytical continuation is performed by taking 17 The logarithms then transform according to We then need to set ω 1 = Ω − ω 2 with Ω → 0. After the analytical continuation, we find that in this limit, An identical contribution comes from χ 123 2 (on using Eq. (3.9)). Adding these together, we find that the current expression in Eq. (3.7) reduces to: In the time domain, this corresponds to This agrees with Eq. (3.5). This result from the non-interacting case has been obtained for the first double-Weyl node with the chemical potential µ. Analogously, for the second node, we would obtain:β Consequently, in the frequency range 2 |µ| < ω < 2 |μ|, only the first node contributes to the CPGE, while the contribution from the second node is zero due to Pauli blocking.
IV. CORRECTIONS TO THE QUANTIZED CPGE DUE TO SHORT-RANGED HUBBARD INTERACTIONS
In this section, we consider the first-order perturbative corrections originating from four-fermion interactions. The interaction Hamiltonian for short-ranged Hubbard interactions is given by: where λ is the Hubbard interaction strength (positive λ corresponds to the attractive interaction), and ψ ζ,s (k) denotes the fermion field with nodal index ζ and pseudospin index s. The first and the second terms describe the intranodal and internodal scattering processes respectively. These are shown diagrammatically in Fig. 2.
In the diagrams, we have used a solid line to represent the Green's function for the first double-Weyl node, and a dashed line to depict the Green's function for the second double-Weyl node. In the following subsections, we will compute the first-order self-energy and vertex corrections due to the Hubbard interactions. The contributions to the first-order self-energy correction are given by the Feynman diagrams shown in Fig. 3. For the short-ranged Hubbard interaction, scatterings between double-Weyl nodes of opposite chiralities have to be taken into account, which are given by the second term of Eq. (4.1). The analytic expression for Fig. 3(a) reads as: where N h > 0 is the number of holes below the double-Weyl point in the first node. In a similar fashion, the contribution from Fig. 3(b) evaluates to: with N e > 0 denoting the number of electrons above the double-Weyl point in the second node. Finally, the contributions from Figs. 3(c) and 3(d) evaluate to: resulting in the total self-energy The effect of this self-energy is to simply shift the chemical potential by an amount Clearly, this does not change the CPGE current, as it only modifies the frequency range where the quantized value of the CPGE is valid. The Feynman diagrams contributing to first-order vertex corrections are shown in Fig. 4. When the vertex i = x, with the external Matsubara frequency set to ω 1 for definiteness, Fig. 4(a) contributes as: Similarly, for i = y, Fig. 4(a) gives: (4.8) The only non-vanishing contribution from Fig. 4(a) comes for i = z, which gives: where Λ is the UV momentum cutoff. The contribution from the diagram in Fig. 4(b) is analogous, but has an overall opposite sign due to the opposite chirality of the second node, and with |µ| → |μ|: (4.10) Adding these two contributions together, we find that for the first node, the vertex with σ z (and external frequency ω 1 ) is renormalized according to: which is finite and does not contain the UV cutoff anymore.
This gives the correction: (4.12) Performing the analytical continuation i ω 1,2 → ω 1,2 + i δ, and setting ω 1 = −ω 2 = ω, we find that this contributes as: which leads to the correction for the current in the z−direction. Here, we have neglected the corrections to the chemical potentials, since they only change the frequency range within which CPGE for the the non-interacting case is nonzero.
In a similar fashion, we get: By symmetry in the xy−plane, we infer that Due to the intrinsic anisotropy of the problem, it is not surprising that the corrections for the current in the z−direction is different from that in the xy−plane.
V. SUMMARY AND OUTLOOK
We have computed the CPGE for the double-Weyl semimetal, first in the absence of interactions and then in the presence of short-ranged Hubbard interactions. In the non-interacting case, for low-enough frequency ranges of the applied electric field, the CPGE gets contribution only from one double-Weyl node and has a quantized value proportional to the topological charge of the corresponding node. However, switching on Hubbard interactions affects this result, destroying the quantization. This is similar to the results found for the case of CPGE currents in Weyl semimetals. 18 The only difference is that the corrections for the current in the z−direction is different from that in the xy−plane, due to the anisotropic dispersion of the starting Hamiltonian. These results imply that unlike the quantum Hall effect in gapped phases or the chiral anomaly in field theories, the quantization of the CPGE in topological semimetals is not protected.
In future, it will be interesting to look at the corrections coming from the Coulomb interactions. The computations will be cumbersome for this case compared to the Weyl semimetal, due to the anisotropic dispersion of the double-Weyl Hamiltonian. It will also be interesting to see the effect of short-ranged correlated disorder on the CGPE, using the well-known techniques. [19][20][21] | 2,902.2 | 2020-02-24T00:00:00.000 | [
"Physics"
] |
Evaluation Criteria for Interactive E-Books for Open and Distance Learning
The aim of this mixed method study is to identify evaluation criteria for interactive e-books. To find answers for the research questions of the study, both quantitative and qualitative data were collected through a four-round Delphi study with a panel consisting of 30 experts. After that, a total of 20 interactive e-books were examined with heuristic inquiry methodology. In the final phase, the results of the Delphi technique and the heuristic inquiry results were integrated. As a result, four themes, 15 dimensions, and 37 criteria were developed for interactive e-books. Lastly, the results and their implications are discussed in this paper and suggestions for further research are presented.
Introduction
Open and Distance Learning (ODL) strives to provide effective, efficient, engaging, and enduring learning opportunities which are dependent on improvements and developments in information and communication technologies (ICT). There have been attempts to employ ICT to eliminate the limitations that derive from physical and psychological distance among learners, learning sources, and learning environments. The influence of ICT resulted in online and digital solutions that increase interaction in the learning process. Beldarrain (2006) suggests that technology has played a critical role in changing the dynamics of each delivery option over the years, as well as the pedagogy in ODL. As new technologies emerged, instructional designers and educators had unique opportunities to foster interaction and collaboration among learners, thus creating a true learning community.
59
As a result of these developments, e-books and, following that, interactive e-books have gained a wide interest and have been used as a valuable and viable medium in both traditional education and ODL. By realizing the potential of digital books, institutions of higher education have begun to provide interactive e-books for learners to be able to deliver the information in a more effective and attractive way. According to Rothman (2006), for distance educators as well as traditional classroom educators, digital books would not only enhance student access to information, but would also help revolutionize the processes of reading, analyzing, and researching.
As a response to current developments in the e-book and interactive e-book landscape, interactive e-books are defined and their pros and cons are explained in this study. Following that, interactivity in interactive e-books is discussed and finally evaluation criteria of interactive ebooks are explained based on a mixed method study in which Delphi technique and heuristic inquiry were used.
Aim of the Study
Interactive e-books are used for providing flexibility and presenting enriched content by means of hard and soft technologies. As an emerging technology, it is a necessity to define evaluation criteria of interactive e-books and contribute to relevant literature.
On this basis, the main purpose of this research is to develop an evaluation criteria checklist for interactive e-books. Within this perspective, research questions for this study are as follows: • What are the core themes to evaluate interactive e-books?
• What dimensions and criteria should the determined core themes cover?
Interactive e-Books
Defining Interactive e-Books Books are defined as "the first teaching machine" (McLuhan, 1964, p.174) and they are indispensable in the teaching/learning process (West, Turner, & Zhao, 2010). For centuries, books have been the catalyst of dissemination and transmission of knowledge. They paved the way of improvement, helped to evolve humankind and have evolved themselves. The year 1971 was a milestone for electronic books (e-books). Michael Stern Hart initiated the Project Gutenberg that year to encourage the creation and distribution of e-books (Hart, 2004) and created the first digital version of Declaration of Independence as the first e-book in history (Hart, 1992). Other developments such as the first digital-born hypertext fiction Afternoon in 1980, DOS-based e-books and the Runeberg Project in 1992, PDF 1.0 in 1993, E-ink Corporation in 1997, first handheld e-book reader in 1998, copyright/copyleft and Creative Commons in 2001, Kindle e-book reader by Amazon in 2007, and tablet PCs and smartphones at the beginning of the new millennium triggered the evolution and acceptance of digital books (Bozkurt, 2013;Bozkurt & Bozkaya, 2013a).
When comparing definitions, conventional books (c-books) can be defined as a set of written and printed sheets that include text and visuals. As a digital version of c-books, Rao (2003) defines ebooks as text in digital form, a book converted into digital form, digital reading materials, a book in a computer file format, an electronic file of words and images displayed on a device screen intended for more than solely reading e-books, or an electronic file formatted for display on dedicated e-book readers.
In 2011, introduction of the next-generation digital book required a new definition: interactive ebooks. In his TED Talk (Technology, Entertainment and Design), Matas (2011) introduced one of the first known interactive e-books, Our Choice, and promoted it as a next-generation digital book. Our Choice was a clear indicator of the future of digital books as the first full-length digital book that utilized various creative and innovative features. Some features of this interactive ebook are given in Table 1. Talks) By examining 20 interactive e-books systematically, Bozkurt & Bozkaya (2013a;2013b) defined interactive e-books as an improved extension of digital books. According to their definition, interactive e-books are essentially digital book formats in which the user, the digital book, and the environment can interact reciprocally at a high level; digital book elements can communicate and interact among themselves and environment as well as users, and many communication channels are put in use at one and the same time. They also defined the digital book as a generic term that covers e-books, interactive e-books, and other digital book formats. A comparison of c-books, e-books, and interactive e-books are provided in Figure 1. (Bozkurt & Bozkaya, 2013a;2013b).
According to this definition, it is salient that advanced interactive e-books are at the forefront of the digital book evolution which is tightly connected to technological innovation. It can be also seen that as a result of e-books' dependency on technology, the distinction between interactive ebooks and software and mobile applications is being blurred. However, these blurring borders can become distinct by applying design principles of interactive e-books and determining the purpose of the application as it refers to the user's electronic reading (e-reading) experience. In the ereading experience with interactive e-books, there are four type of interactions: interaction between environments (real and virtual environments), interaction among the digital book elements, interaction with other users, and interaction with the user: • Interaction among the digital book elements: Interactive e-book elements can communicate and interact among themselves. This refers to the interconnectedness of interactive e-book elements (e.g. synchronously retrieving data within the book as in the interactive charts).
• Interaction between environments: Interactive e-books can further communicate with real and digital environments. The sensors of an interactive e-book reader (accelerometer, barometer, compass, fingerprint reader, gesture sensor, GPS, A-GPS, GLONASS, gyroscope, heart rate monitor, ambient light sensor, proximity sensor and more) can gather information from a physical environment, for example, geo-location information or data retrieved from online databases.
• Interaction between digital book and user: In addition to the invisible cognitive interaction which occurs while reading in any kind of book, it refers to the tangible interaction between an interactive e-book and a user (e.g. detecting user gestures for navigation or tracking eyes to keep the screen on in reading mode).
• Interaction with other users/online communities: Users can interact with a specific online community by tracking relevant hashtags related to the content within an interactive e-book or they can share all or specific parts of the book on their own online social networks.
Pros and Cons of Interactive e-Books
Interactive e-books have been preferred increasingly, especially from the beginning of 2000 onwards for the advantages they provide. Some of the advantages of interactive e-books are listed in Table 2. Table 2 Pros of Interactive e-Books
In terms of users/readers:
• interactive e-books are portable; you can carry a whole library on one device; • they are searchable; readers can find what they need easily; • they have an enormous capacity to store a great deal of information in a single book; • they have annotating support; readers can edit, add notes, add bookmarks or highlight without harming the original work; • they are customizable; users can tweak the style according to their needs; • they are durable; they do not have a short shelf-life span; • virtual libraries can be created; they are always available with a mobile device or through cloud computing; • if they do not have Digital Rights Management (DRM) restrictions, they are shareable; • they make reading accessible for the individuals with special needs; • they are cheaper than printed books; they are printable; they can be used like a c-book (if it is allowed); • they are convertible; they can be used in different formats; • they can be hyper-linked for additional sources; • they are easy to use with natural user interface and gesture based computing; • they are easy to read; they can be read in the darkness; they can be read aloud automatically with text to speech; • they are portable; there are no weight or bulk constraints; • they defeat attempts of censorship; • they support multimedia content and enhance e-reading experiences; • they are reusable infinitely, and they promote e-reading.
In terms of authors:
• interactive e-books are easy to publish; • they empower self-publishing; • authors can have feedback from readers and update their book instantly.
In terms of libraries and other educational institutions:
• interactive e-books reduce maintenance cost; • allow user statistics if featured; • can be protected with DRM options; • save physical space; • a single book can be used by many; • readers can take service from libraries 7/24; • infinite circulation is possible as interactive e-books do not wear out.
In terms of publishers and retailers:
• interactive e-books are environment friendly; • they can be delivered almost instantly; • they are convenient, they take up no physical space; • their publishing speed is faster and publishing cost is cheaper than c-books.
As well as having advantages, interactive e-books have some disadvantages. However, the following disadvantages in Table 3 mostly derive from external reasons, not from the inherent features of interactive e-books. Table 3 Cons of Interactive e-Books In terms of reading device: • resolution of the screens is a problem and longer reading time fatigue eyes; on the other hand, new display technologies promise real-like high quality display experiences; • compatibility of interactive e-books is another problem; there has yet to be a widely accepted universal format that allows all books to be read on any device; • reading devices need power so they cannot provide continuous e-reading experience and are limited to battery life.
In terms of piracy:
• they can be hacked and can be used easily in a manner that violates copyright.
64
In terms of tactile experience: • many readers simply enjoy and prefer the smell, weight, and page turning sound of print books.
Interaction in ODL
Interaction, as a complex and multifaceted concept in all forms of education, fulfills many critical functions in the educational process (Anderson, 2003). It appears to be one of the most critical instructional elements (Kearsley, 1995) and it was highlighted that interaction is a necessary ingredient for a successful learning experience in ODL (McIsaac & Gunawardena, 1996).
According to Dewey (1916), who used the word "transaction" instead of "interaction" to emphasize the relationship between organism and environment, interaction is the defining component of the educational process that occurs when the learners transform the inert information passed to them from another and construct it into knowledge with personal application and value. Moore and Kearsley (1996) focused on interaction in distance learning and described learner-content, learner-learner, and learner-instructor interaction, while Hillman, Willis, and Gunawardena (1994) additionally described learner-interface interaction. Moore and Kearsley (1996) further stated that effective teaching at a distance depends on a deep understanding of the nature of interaction and how to facilitate interaction through technologically transmitted communications. These ideas inspired instructional designers and educators not only to design interactivity in learning process, but also to design interactive learning tools such as interactive e-books.
Interaction Design: The Flow between Action and Reaction
Interaction design (IxD) and Human Computer Interaction (HCI) are terms that have been used interchangeably. Currently, there has been a growing interest in the structure and nature of interaction design in education and academia. Influenced heavily by ICT in education, interaction design became another discipline engaging in education, particularly in online and digital learning experiences.
According to Wagner (1994), within the ODL perspective, interaction is reciprocal events that require at least two objects and two actions, and it "occurs when these objects and events mutually influence one another" (p. 8). Silver (2007) defines interaction design as a blended endeavour of process, methodology, and attitude. Lowgren (2013) states that interaction design is about shaping digital things for individuals' use.
In essence, interaction design is a system view. All the elements in an interactive system should be designed for a purpose (Bozkurt & Bozkaya, 2013a). That's why interaction design uses five dimensions to have a broad view and cover all elements of an interactive system. On this basis, the first four dimensions of interaction design were introduced by Smith (2007) and the fifth dimension was added by Silver (2007).
• 1 st dimension (Words): Words are interactions that users employ to interact.
• 2 nd dimension (Visual representations): Visual representations, which include typography, diagrams, icons, and other graphics, are the things that the user interacts with on the interface.
• 3 rd dimension (Physical objects or space): It defines the space or objects with which or within which users interact.
• 4 th dimension (Time): Time is the dimension within which users interact. For example, content that changes over time such as sound, video, or animation.
• 5 th dimension (Behavior): Behavior defines the users' actions, their reactions to the interface and how they respond to it. Behavior is about including action, or operation, and presentation, or reaction.
According to Fischer and Coutellier (2005), there are three types of interaction: cognitive, sensorial, and pure physical interactions. It is important to define cognitive interaction since any type of books, including digital ones, are sources of information that require a cognitive interaction to process information and then construct knowledge. Interactive e-books in new generation mobile devices cover all these interactions. On this ground, in an interactive e-book design process, these three types of interaction should be designed in agreement with planned objectives. However, another important issue to decide upon is intended interaction level in interactive e-books.
Interaction Levels
The distinguishing feature to define and categorize book types (c-book, e-book or interactive ebook) is the level of interaction they exhibit. Interactive Multimedia Instruction (IMI), or as it is otherwise known, Interactive Courseware (ICW), has been developed over the years by the U.S Department of Defense (DoD) (1999). In the ICW model, there are four major levels of interactivity which are defined as the degree of student's involvement in the instructional activity. The four levels of interactivity identified by the ICW are provided in Table 4. Table 4 Interactivity Levels and Definitions
The student acts solely as a receiver of information.
The student makes simple responses to instructional cues.
The student makes a variety of responses using varied techniques in response to instructional cues.
The student is directly involved in a life-like set of complex cues and responses.
Related Research
The current study attempts to examine a topic which has not been extensively researched as interactive e-books are a recent emerging technology. Among the few available studies, Wilson and Landoni (2002) Previous research has generally focused on e-books which are usually in a single file format and include basic multimedia elements with low interaction. Therefore, it can be argued that there is a need to develop evaluation criteria for interactive e-books which include a combination of many multimedia components with high interactivity, and the current study intends to fill this gap.
Theoretical Framework
Throughout this research, four theories were used to establish a sound base to start and provide a clear lens through which we can look, enhance our interpretation, generate valid ideas, and give meaning to the research findings. The theories applied in this research were the theories of Independent Study, Transactional Distance, Multimedia Richness, and Multimedia Learning.
Theory of Independent Study: According to Wedemeyer (1981), the essence of distance education is the independence of the learners. Wedemeyer's Independent Study Theory emphasizes learner independence and adoption of technology as a way to implement that independence (Simonson et al., 2003). According to the theory, learning can occur in spite of the time-space barriers and learning should be individualized by providing wider choices to learners; learning responsibility belongs to learners themselves and they learn at their own paces.
Theory of Transactional Distance:
This theory is originally an extension of the Theory of Independent Study. Inspired by Dewey's term of transaction, the Transactional Distance Theory refers to the cognitive space between instructors and learners in an educational setting. Moore (2007) argues that transactional distance is a typology of all education programs having this distinguishing characteristic of separation of teacher and learner. According to Moore (1993), transactional distance is a psychological and communication space to be crossed, a space of potential misunderstanding between the inputs of instructor and those of the learner. The key concepts of the theory are dialogue, structure, and learner autonomy.
Theory of Media Richness:
Developed originally by Daft and Lengel (1984), the Media Richness Theory is based on the contingency theory and the information processing theory.
According to the theory, "the more equivocal a message, the more clues and data are needed to understand it, and media richness theory places communication mediums on a continuous scale that represents the richness of a medium and its ability to adequately communicate a complex message" (Carlson & Zmud, 1999, p. 155). The main idea of Media Richness Theory was expressed as "the more learning that can be pumped through a medium, the richer the medium" (Lengel & Daft, 1988, p. 226). According to the theory, the richness of the media is influenced by four criteria (Daft & Lengel, 1984): (1) Capacity for immediate feedback; (2) capacity to transmit multiple cues; (3) language variety; and (4) the capacity of the medium to have a personal focus.
Theory of Multimedia Learning: Proposed by Richard Mayer, this theory explains learning with multimedia from the perspectives of educational psychology and e-learning (Ataizi and Bozkurt, 2014) and claims that individuals learn better when multimedia messages are designed in ways that are consistent with how the human mind works (Clark & Mayer, 2011;Mayer, 2002).
The theory has three main assumptions (Mayer, 2002): (1) there are two separate channels for processing information: the auditory and visual channels (Dual Coding Theory); (2) each channel has a limited capacity (cognitive load); and (3) learning is an active process of filtering, selecting, organizing, and integrating information through association with previous experiences.
Research Design
The study was designed as an embedded mixed model research to provide a better understanding of the research problem. The purpose of the embedded design is to collect quantitative and qualitative data simultaneously or sequentially. In the embedded design, a secondary form of data is used to augment or provide additional sources of information not provided by the primary source of data (Creswell, 2004). Throughout this research, the Delphi technique (primary source of data) and heuristic inquiry method (secondary source of data) were applied to obtain and analyze data.
Because the topic was an emerging one dealing with different expertise areas such as instructional design and interaction design, and the number of the studies in the literature was insufficient, the Delphi technique was preferred as a primary source of data to be able to get expert opinions from different disciplines. The Delphi Method is based on a structured process for collecting and distilling knowledge from a group of experts by means of a series of questionnaires interspersed with controlled opinion feedback (Adler & Ziglio, 1996;Dalkey & Helmer, 1963;Koçdar & Aydın, 2013). As a highly flexible problem-solving process, it fits for situations where evidence based practice is dependent on expert opinion (Sandrey & Bulger, 2006). Expert panel members provide feedback, revise judgments, and contribute to the development of agreed-upon practicesall with complete anonymity (Flippo, 1998). Within this perspective, the basic assumption of the Delphi Method is that the informed, collective judgment of a group of experts is more accurate and reliable than individual judgment (Clayton, 1997;Ziglio, 1996). The key characteristics of the Delphi technique are defined as anonymity of respondents, controlled feedback process, and statistical response (Fowles, 1978).
User experience (UX) is the essence of the interaction design. Thus, in addition to theoretical and practical experiences of Delphi panelists, researchers employed heuristic inquiry to harness additional research findings which can be explored by directly engaging research questions. On this ground, heuristic inquiry was preferred as a secondary source of data. Heuristic inquiry is an experience-based technique for problem solving, learning, and discovery. Douglass and Moustakas (1985) define heuristic inquiry as a search for the discovery of meaning and essence in significant human experience. The heuristic inquiry is an adaptation of phenomenological inquiry, yet it requires the involvement of the researcher in a disciplined pursuit of research process (Hiles, 2001;Djuraskovic & Arthur, 2010).
Sampling
The selection of panel members is considered to be critical for the Delphi process, which is directly related to the focus or objectives of the research (Sandrey & Bulger, 2006). Interactive ebooks are a final product of different procedures and different expertise. Therefore, it is important to select experts for a purpose to apply their knowledge to a certain problem on the basis of criteria, which are developed from the nature of the problem under investigation .
For this research, the participants are required to be experts in one of the following areas: digital books, digital publishing, content design, instructional design, interface and layout design, or elearning. It is further required that they have a background in academic research or experience working in the field. Through literature review and snowball sampling, 55 experts were invited to the research. A total of 30 experts expressed their intention of participating. A highly representative Delphi panel of 30 experts, from well-respected institutions and companies, who have published research and/or had practical and theoretical experience, was constructed to contribute to the validity and reliability of research findings.
In heuristic inquiry, as the secondary form of data, 20 distinguishing interactive e-books were selected. A set of criteria were defined to be able to examine representative samples that had interactive features peculiar to interactive e-books. The interactive e-books included in the heuristic research were those that were most downloaded, awarded, and had positive reviews by the critics. Book samples that exhibited different features in terms of the interaction level and genre were selected to have maximum variation sampling (Appendix A).
Procedure
Before initiating Delphi rounds, a pilot study was carried out with three doctoral students. Delphi The third round was completed in two weeks with the participation of all experts of the Delphi panel.
• Fourth round: In the final Delphi round, experts of the Delphi panel were requested to reevaluate four themes, 19 dimensions, and 49 criteria that met the predefined agreement levels with five-point Likert items that emerged in the third round. Percentage, median, and IQR statistics used in the third round for consensus were also provided for each item to give statistical feedback. At the end of the final round, items with a mean below 90 percent, median of below 4, and IQR of over 1 were discarded again. As a result of the Delphi study, four themes, 14 dimensions, and 33 criteria were obtained. The final round was completed in two weeks with the participation of all 30 experts of the Delphi panel.
In the heuristic inquiry, a total of 20 interactive e-books were downloaded into a tablet computer.
The interactive e-books were selected purposefully from distinguished examples, most of which have notable awards and reviews (Appendix A). Each book was used and examined in four themes at the end of the Delphi study. First of all, the validity of the Delphi findings were checked by examining the applicability of the criteria which emerged in the Delphi rounds, using 20 interactive e-books. Following that, researchers systematically noted their experiences as well as the features observed. This process was conducted between March and May 2013. All the data gathered were coded, categorized, and put into themes using content analysis. The findings that matched with the findings gathered in the Delphi study were eliminated to assure that the same research findings were not replicated. It is salient that the four new criteria emerged from heuristic inquiry are mostly related with user experience which requires direct interaction with the products that are investigated. For that reason, it is believed that these four criteria didn't emerge during the Delphi rounds. Lastly, four new criteria were defined and associated with relevant themes and dimensions. At the end of the Delphi and heuristic research processes, a total of four themes, 15 dimensions, and 37 criteria were identified.
Data Collection and Analysis
In Delphi studies, qualitative data can be analyzed using content analysis techniques . The data in the first and second Delphi rounds of this study were analyzed using content analysis. The data were coded, categorized, and put into themes (see sample in Appendix B). The findings obtained were converted into short, simple sentences and presented as questionnaire items in the third and fourth Delphi rounds.
For the quantitative data in the third and fourth rounds, statistical methods were used. Consensus in a Delphi study is subject to interpretation. Consensus can be decided if a certain percentage falls within a prescribed range (Miller, 2006). Some researchers suggest that a level of at least 70 percent agreement is enough to call a consensus; on the other hand, there is no certain percentage defined and it changes according to the scope of the research topic. Other statistics used in Delphi studies are measures of central tendency (means, median, and mode) and level of dispersion (standard deviation and interquartile range) . To be able to get robust and reliable research results, a combination of different statistics were defined as the consensus level for the research. Percentage (80% for the third round and 90% for the fourth round), IQR, and median were used as statistical indicators of consensus (Table 5). In the heuristic inquiry, both qualitative and quantitative data were gathered systematically through real life experiences. The data were then analyzed using content analysis. The findings were used to support and to check the Delphi study findings and to discover new findings concerning the evaluation criteria of interactive e-books.
Limitations
The findings of this research is limited to current Web and mobile technologies in addition to capacities of mobile devices used in the research. For this reason, new themes, dimensions, and criteria can be added in time with the emergence of new technologies or new learning approaches.
Reliability
The data in the first and second round of Delphi were coded by one of the authors (Rater A) of this study. Another author (Rater B) who was experienced in the field also rated the data of first and second rounds. Cohen's Kappa (κ) was calculated to check the inter-rater reliability of the first two Delphi rounds. Inter-rater reliability between Raters A and B for the first round was κ = .918 (95% CI, .8551 to .9817), p<.0005 and κ = .951 (95% CI, .9040 to .9981), p<.0005 for the second round. Altman (1991) proposed that the extent of agreement can be qualified as poor (< 0.20), fair (0.21 to 0.40), moderate (0.41 to 0.60), good (0.61 to 0.80), and very good (0.81 to 1.00). Thus, the reliability of ratings for the first and second round Delphi data can be considered as very good.
Results
The overall findings of this study are presented in Table 6. In the following table, research findings were organized as themes, dimensions and criteria. The criteria were associated with the most relevant dimensions and themes, but these criteria may intersect and overlap with other dimensions or themes. Table 6 Interactive e-Book Evaluation Criteria
CONTENT Presentation
• Clear and fluent language usage • Effective narration features • Preparing content with a theoretical framework that supports learning objectives
Richness
• Richness of multimedia components • Balance of information density
Motivation and Attractiveness
• Attractiveness of the content • Content design appropriate for characteristics of the target audience
Assessment and Evaluation
• Providing mechanisms to users enabling them to assess their own learning process
Integrity , Coherence and Connectivity
• McLuhan's (1964) widely known phrase, "The medium is the message," means that the content of a mediated message is secondary to the medium (West et al., 2010). McLuhan did not ignore the importance of the content, but he deliberately pointed out the ability of the medium that shapes the message. In this study, the medium's ability, that is to say interactive e-books, to shape the message became apparent in four basic themes that emerged as a result of the data analysis: content, interface, interaction, and technology.
• Content: This theme presents the pedagogic perspective of the evaluation criteria. The criteria in this theme highlight the importance of learning/instructional design. It is clear that interaction is not provided with solid technologies but with design of the content.
From the selection of action verbs in the content to the theoretical framework applied, all are related with content theme.
• Interface: This theme is the intersection point where user and interactive e-book contact.
The theme of interface can be interpreted as the showcase or the face of the interactive ebook. The criteria in this theme consider aesthetic and visual design properties. The art of the designer appears in this theme and the criteria are related with usability features of the interactive e-book.
• Interaction: This theme determines the interactivity level of the digital book by using interaction design. The questions of interactivity such as what, when, where, how, why, and who are answered in this theme. However, it should be noted that interaction is not provided merely by using technology, but it is also provided by presentation of the content, design of the interface, and use of hardware capabilities of the e-book reader device.
• Technology: This is a challenging theme in contrast to the previous soft themes of evaluation criteria. Hardware features and their functions are related with this theme. On the other hand, in this study usually hardware features and functions rather than hardware were the locus of the study in interactive e-books, which is basically a software technology.
Comparing the results of the current study with and Diaz (2003), in addition to content and interface themes previously defined, two additional themes appeared: interaction and technology. E-books which were examined by and Diaz (2003) In other words, interactive e-books promise more than reading experience; they promise e-reading experience which includes cognitive, sensorial, and physical interactions.
It is also seen that the interactive e-book criteria that emerged in this research are coherent with basic principles of four theories which indicate interactive e-books' value for ODL: • Theory of Independent Study: From correspondence study to computer-based distance learning, books have been the primary learning material, whether they are conventional or electronic. The purpose of using distance learning materials is to provide autonomy to learners so that they can learn independently. By providing a range of choices to individualize the act of learning, interactive e-books stand as an effective option for distance learners.
• Theory of Transactional Distance: Interactive e-books serve in increasing the level of interaction to eliminate physical and psychological barriers between learners and learning process and also provide individualized learning experiences by utilizing educational technology. The premise of interactive e-books lies in their ability to act like a learning environment more than a knowledge source in which meaningful learning can occur.
• Theory of Media Richness: The essence of the interactive e-books lies in the interaction provided. Interactive e-books use various media to increase and support interaction of the content using hard and soft technologies. The Media Richness Theory offers a solid walkthrough in order to design interaction and communication to augment learning through interactive e-books.
• Theory of Multimedia Learning: Meaningful learning through multimedia requires instructional design. Principles of Multimedia Learning Theory provides a proven guideline for designers of multimedia learning and helps to improve design of an interactive e-book to facilitate learning.
Conclusion and Future Directions
In this study, through a mixed research design in which Delphi technique and heuristic inquiry were used to gather data, four themes, 15 dimensions, and 37 criteria were developed to be able to evaluate interactive e-books. As a significant learning material, this study revealed important aspects of interactive e-books to ease access to information and to increase interaction, which are needed for meaningful learning experiences for distant learners.
In this study, a set of criteria was proposed to evaluate interactive e-books. Interactive e-books as a learning material encompasses many facets such as interaction design, instructional design, and interface design. Furthermore, hardware and software capabilities are directly related with the proposed evaluation criteria. Thus, the criteria revealed in this research can be developed according to changing needs of learners, and new hard and soft technologies. It is believed that the findings gathered in this research will: • help develop and evaluate interactive e-books, • help instruction and learning designers, software developers, and content providers by means of themes, dimensions and criteria identified in this research, • be a guide to identify the strengths and weaknesses of an interactive e-book, and • be a base for future research.
As a result of the Delphi study, heuristic design and literature review, the following implications can be taken into consideration for future research: • Interactive e-books use multimedia elements which provide rich communication opportunities. However, as a learning material, there is a need to identify the principles of multimedia use and design in interactive e-books to enhance learning experiences and to provide effective, efficient, and engaging learning opportunities.
• Interactive e-books use different communication channels and new technologies to reach users. Individuals with special needs (visual impaired, hearing impaired and orthopedically handicapped) gain advantages with rich communication channels, gesture based interaction, and natural user interface of new generation devices. Thus, it is advised to implement future research using universal design principles to meet needs of individuals with special needs.
• Interactive e-books are technology-oriented learning materials. As a source of learning and practicing, semantic and adaptive systems can be integrated to increase independency and autonomy of learners. Therefore, future research can focus on semantic and adaptive systems in interactive e-books to help users track their progress, receive feedback, and better assess and evaluate the level of learning.
As a final remark, this study does not claim that interactive e-books are superior to c-books or ebooks; conversely, it claims that interactive e-books are a good and flexible alternative to be able to provide individualized learning opportunities. On this basis, the themes, dimensions, and criteria can be used as a checklist to identify strengths and weaknesses of interactive e-books and can be helpful to guide researchers who are interested in interactive e-books as a learning material. It is believed that well designed interactive e-books can significantly contribute to learning process in an effective and efficient way. | 8,283.8 | 2015-09-29T00:00:00.000 | [
"Education",
"Computer Science"
] |
Visualization and exploratory analysis of epidemiologic data using a novel space time information system
Background Recent years have seen an expansion in the use of Geographic Information Systems (GIS) in environmental health research. In this field GIS can be used to detect disease clustering, to analyze access to hospital emergency care, to predict environmental outbreaks, and to estimate exposure to toxic compounds. Despite these advances the inability of GIS to properly handle temporal information is increasingly recognised as a significant constraint. The effective representation and visualization of both spatial and temporal dimensions therefore is expected to significantly enhance our ability to undertake environmental health research using time-referenced geospatial data. Especially for diseases with long latency periods (such as cancer) the ability to represent, quantify and model individual exposure through time is a critical component of risk estimation. In response to this need a STIS – a Space Time Information System has been developed to visualize and analyze objects simultaneously through space and time. Results In this paper we present a "first use" of a STIS in a case-control study of the relationship between arsenic exposure and bladder cancer in south eastern Michigan. Individual arsenic exposure is reconstructed by incorporating spatiotemporal data including residential mobility and drinking water habits. The unique contribution of the STIS is its ability to visualize and analyze residential histories over different temporal scales. Participant information is viewed and statistically analyzed using dynamic views in which values of an attribute change through time. These views include tables, graphs (such as histograms and scatterplots), and maps. In addition, these views can be linked and synchronized for complex data exploration using cartographic brushing, statistical brushing, and animation. Conclusion The STIS provides new and powerful ways to visualize and analyze how individual exposure and associated environmental variables change through time. We expect to see innovative space-time methods being utilized in future environmental health research now that the successful "first use" of a STIS in exposure reconstruction has been accomplished.
Background
Geographic Information Systems are beneficial tools in modelling static representations of reality; however they fall short in their ability to handle time. The ability to store, visualize, and analyze both the temporal and spatial dimension of data continues to be a challenging task. Over the past decade, there have been several attempts to include time enabled capabilities into GIS. [1] and [2] proposed amendment vectors to extend the vector data model to the time dimension, while others enhanced the grid data model to represent snap-shots of raster data at different time intervals [3]. Although temporal extensions exist, e.g. [2] commercial GIS packages do not properly support temporal aspects of spatial data [4].
The importance of GIS for medical research and epidemiology has long been recognized [5][6][7], and GIS is frequently used for retrospective exposure reconstruction [8][9][10]. However the application of GIS to risk and exposure assessment has historically focused on the hazard as the object of interest -such as the locations of contaminated industrial sites with high concentrations of carcinogens -instead of the individual [3]. More recently exposure assessment using GIS has targeted individuals in their present homes, but relatively little attention has been placed on individual exposure reconstruction involving residential histories and past activities. This in large part is due to the poor ability of current GISs to handle multitemporal geographic information and the movement of individuals within the context of putative exposure sources whose locations and output change through time. Consequently, there have been few attempts to expand on the 'static map' to provide a more accurate view of exposure.
The ability to effectively represent, query, and model the temporal dimension is expected to significantly enhance researchers' abilities to undertake environmental health research with georeferenced data. Studying an individual's exposure over time is a key factor in determining risk, particularly for diseases with long latency periods such as cancer [3], because individual exposure to environmental contaminants (eg carcinogens) can change as people move through space over time. Exposure assessment characterizes the concentration of potential toxins, as well as the frequency and duration of contacts between individuals and those toxins. Therefore, accurate exposure assessment requires estimation of variation in contaminant concentration as well as changes in geographic proximity to contaminant sources over time. This requires models that can account for residential histories and how residential location influences ambient contaminant concentrations as well as exposure opportunities.
In this research we applied a STIS to visualize and analyze data from a bladder cancer case-control study. The objective of the epidemiologic research project is to identify a range of factors that have contributed to bladder cancer incidence in Michigan, with the focus on spatial and spatiotemporal patterns of exposure to naturally occurring arsenic in drinking water. Cases are recruited from the Michigan State Cancer Registry and diagnosed in the years 2000-2003. Controls are frequency matched to cases by age (± 5 years), race, and gender, and recruited using a random digit dialing procedure from an age-weighted list. To be eligible for inclusion in the study, participants must have lived in the eleven county study area for at least the past five years and had no prior history of cancer (with the exception of nonmelanoma skin cancer). The goal is to enroll 1400 participants in total. This is an ongoing five year project and only some preliminary spatiotemporal datasets, visualization tools, and results are shown here. Conclusive results will not be available for a few more years, until data has been collected and analyzed for all 1400 participants. The STIS is being developed at BioMedware, in Ann Arbor Michigan with funding from the National Institutes of Environmental Health Sciences and the National Cancer Institute. In this paper STIS is used to visualize and analyze data from a bladder cancer case-control study but it can also be used for health/environment interactions or marketplace sales trends. More information about the STIS and a free 30 day download can be evaluated at http://www.terraseer.com/ products/stis.html.
Results and discussion
Data from a case-control study of bladder cancer in south eastern Michigan was used to evaluate the efficacy of the STIS for documenting and visualizing space-time relationships between cases, controls and putative risk factors. Lifetime exposure to arsenic in drinking water (an element that has been associated with bladder cancer at high levels [12,13]) was reconstructed for each individual by incorporating spatiotemporal information about residential mobility (every address inhabited since birth), occupational history (every full time job since the age of 16), drinking water patterns, and concentration of arsenic in drinking water.
Space time information system
The motivation for this system comes from the idea that the 'what and where' of conventional GIS needs to be extended to the 'what, where, and when' of reality and spatiotemporal modelling. Based on similar spatiotemporal approaches (e.g. [4], [18], [19]), objects are implemented using the space time model: {object, space-time coordinate, attributes} where object identifies the modelled entity (e.g. person X); space-time coordinate is a spatiotemporal location which may be a space-time point (e.g. latitude, longitude, altitude, date, movement model) or a space-time polygon (e.g. polygon centroid, polygon boundary, date, movement model); and attributes are observations on objects (e.g. income).
Within the space time coordinate, in addition to the well known descriptors (e.g. latitude, longitude), we also specify a movement model that defines how the object moves through space as a function of time. Among the simplest of movement models is an instantaneous displacement such that the object ceases to exist at one location and immediately reappears at another location. We use this simple model to describe residential histories.
Morphing describes how the shape of geographic features (such as lines and polygons) changes through time. Here an object is comprised of multiple vertices changing shape through time by the addition, deletion and movement of vertices. This is called network morphing (for lines) and polygon morphing (for polygons). Morphing can be gradual, in which case the change in the object's shape occurs over a defined time interval; or it can be abrupt. In our research we utilize this approach to model cadastral systems and the realignment of administrative and political boundaries. This allows us to track, for example, how municipal water districts change through time, and to then estimate arsenic exposure from drinking water for individuals on municipal water supplies.
Attributes are observations on variables describing the modelled entity and its environment (e.g. case/control identifier, population size, ethnicity, etc.) Our data model assumes observations occur at discrete times at which the attributes of an object are quantified. Attribute change models describe how the values of attributes change between observation times. The simplest attribute change model is a step function that updates an attribute's value when a new observation is made on that attribute. More complex change functions that obtain values from nearby locations are used to interpolate values through space and time for both categorical and continuous data [14]. These include techniques from the field of geostatistics that provide a probabilistic framework for space-time interpolation by building on the joint spatial and temporal dependence between observations [15]. In this research we use the step function approach to model, for example, change in arsenic concentration in potable water when an individual's water supply source is switched from one source of supply to another. We also use geostatistics to model how arsenic concentration in ground water changes spatially and as a function of geology (described in [16]).
Study data
We reconstructed individual exposures by incorporating spatiotemporal data on residential mobility (where people have lived throughout their lives), water supplies (private well, city well water, or city surface water), and drinking water habits. Only locations in which the participants have lived or worked for longer than one year were collected and geocoded. Data about diet, smoking, and medical history were also collected by a phone interview or written questionnaire. A point file (where each point represents a participant) was then imported into the STIS along with associated database files containing attribute information such as address and primary source of drinking water. Table 1 is an example of the drinking water and residential mobility database. Even though information for only three participants is shown, seven different addresses and nine different sources of drinking water are represented. (Street addresses are not shown to protect participant's identity). Therefore, a change in address or primary source of water warrants a new row in the database. Other point files were imported including present and historical data on industries and contaminated sites in the study area. A township map and water supply boundary map were imported as polygons. In addition to temporal changes in attributes such as township population, source of community's water supply, and number of people served, town boundaries and water supply boundaries changed with time. New towns were incorporated, community systems expanded their borders, and occasionally, communities were combined and town boundaries dissolved. All of these temporal changes were handled using attribute change models and morphing.
Importing spatiotemporal datasets
We imported shapefiles describing the above data using the STIS data import facility that allows the variables to be time stamped. The user is prompted to import vector information into a new geography or an existing geography (if new information is to be added to an already existing geographic layer the latter will be chosen). The user must tell the system whether the data is (1) a time slice (similar to a collection of GIS static maps) where changes take place at specified times for all objects in the dataset, or (2) a time series where data varies asynchronously and objects move or change attributes at different times. For example Census data are time slice data -attributes remain constant for a decade (1980)(1981)(1982)(1983)(1984)(1985)(1986)(1987)(1988)(1989)(1990) and then all attributes are updated with the next decade's census information (1990)(1991)(1992)(1993)(1994)(1995)(1996)(1997)(1998)(1999)(2000). On the other hand, data associated with tracking residential histories are time series data, with household moves occurring at different times for each individual. The system imports data at temporal granularities varying from seconds to years; and the data may then be analyzed at these different time scales.
Visualization procedures
Being able to visualize changes in boundaries and attribute values over time is an effective approach to better understanding and exploring data. Because time is a dimension of the data rather than an attribute all views of the data are easily animated. Analogous to a static GIS, attributes of data are visualized by specifying colour, shape, and size of graphical elements (e.g. symbols). However, in contrast to a GIS, the STIS easily facilitates visualization of changing polygon shapes and attribute values over time by animating maps, histograms, and tables simultaneously. Valuable information that might be lost in an atemporal GIS is captured and can become the focus of analysis in the STIS. There are four major visualization views -maps, graphs (histograms, scatter plots, box plots), tables, and time plots.
(1) The map view displays spatial data and the user interacts with the maps by zooming, panning, selecting, and querying. The added feature of the STIS is the animation toolbar. It is employed to show individuals changing place of residence through time; arsenic-emitting industries being founded, operating, and going out of business; municipal water supply districts growing and coalescing; and attribute values, such as arsenic concentrations, changing through time.
(2) In the STIS histograms, scatter plots, and box plots are also animated over time. An individual or group of individuals (e.g. cases vs. controls) may be selected at one point in time and the user can explore how that selection's values change through time. For example, we used this feature to explore how individual arsenic exposure changed over a participant's lifetime. We also used it to compare estimated arsenic burdens for the cases to those of the control population.
(3) Table views also are animated, as the given value of a variable (such as the arsenic concentration in a municipal water supply) will change through time. Tables thus show how data values change over time by updating a given objects value when it increases or decreases.
(4) The time plot graphs time on the x-axis and the value of a variable, such as estimated arsenic exposure, on the yaxis. Objects of interest, such as cases and controls, then map into this bivariate time plot to explore time dependencies in arsenic exposure. Unlike the other views, the time plot is not animated because it already shows the entire time range of the data on the x-axis.
A novel feature of the STIS is the ability to time-link visualization windows. Maps, statistical graphics, and tables may be time-linked so that all of the views are synchronized to the same point in time. Animating the timelinked windows then displays the views simultaneously changing through time. We use this feature to display the changing residential locations of the cases and controls along with the locations and emission volumes of arsenicproducing industries. All of this is done within the context of municipal water districts whose boundaries morph and whose arsenic concentrations are dynamic. While this map visualization is occurring we observe how the frequency distributions of modelled arsenic exposure are changing for the cases relative to that of the controls. Participants (cases or controls) are thus easily evaluated and compared to other participants in terms of their residential histories, and population-level characteristics, such as the mean and dispersion for arsenic exposure estimates, may be compared statistically as they evolve over time.
Statistical and Cartographic Brushing is employed to link together the views associated with a given dataset. This is made possible by using unique identifiers (such as the participant ID's of the cases and controls, or the names of the municipal water districts) to link together corresponding values on the maps and statistical graphics. Statistical brushing is used to select objects (such as the points on a scatter plot) and to then highlight the corresponding objects on maps and other statistical graphics. Carto-graphic brushing occurs when objects are selected on a map, and their corresponding values on the statistical graphics are highlighted. We used statistical brushing to select participants with high arsenic exposures, and to then identify their locations on maps of their residential histories. We use cartographic brushing to explore possi-Change in water supply systems over 50 years (1935,1965,1995) Figure 1 Change in water supply systems over 50 years (1935,1965,1995) Over the years many towns in Oakland County and Genesee County begin to purchase surface water (from Detroit).
Participant movement over 20 years Figure 2 Participant movement over 20 years Cases (circles) and controls (squares) continue to move in, out, and around the study area. In 1960 there were two cases and one control. By 1982 four more cases and two more controls moved into the study area and in 2001 the same number of cases and controls remain in the area however one case and one control have moved addresses.
ble associations between proximity to arsenic emitting industries and the local densities of cases relative to the controls.
Application of visualization procedures
We first investigate changes in the water supply systems (Figure 1). It is clear that over a 50 year interval (from 1935-1995) private well owners and some community ground water systems replaced their private wells or ground water systems with a purchased surface water system (hooking up to a larger system such as the Detroit Sewer and Water System). Visualizing this information over time is valuable as it shows areas that historically might have been associated with high arsenic levels. It also is used to help assign arsenic concentrations to previous residences. For some public ground or surface water systems historic arsenic concentrations have been recorded. For participants on such water supply systems we therefore can directly assign water source arsenic concentrations. Historic arsenic concentrations for well water supplies often are not available, and for these we interpolate arsenic concentration values using geostatistical procedures that account for values in nearby wells, spatial covariance in these values, and their dependency on predictors such as groundwater geology [16].
Visualizing the movement of bladder cancer cases and controls through time is crucial in our analysis of arsenic exposure and how it relates to the incidence of bladder cancer. Figure 2 presents participants at three different time points (1960,1982,2001). A case is represented by a circle and a control by a square. In 1960 there were two Box plot of arsenic exposure in 1988 for cases (left) and controls (right) Figure 3 Box plot of arsenic exposure in 1988 for cases (left) and controls (right) The median is the black line that bisects the box. The upper and lower quartiles, the medians of the upper and lower halves of the data, are the edges of the black box. The "whiskers" on the box, the bars at the top and bottom, are 1.5X the interquartile range.
cases and one control. By 1982 four more cases and two more controls moved into the study area and in 2001 the same number of cases and controls remain in the area. Note that one case and one control have moved residences. The animated map thus informs us regarding the residential mobility of the cases and controls. Spatial and temporal subsets of these populations can then be selected and statistically analyzed and summarized using other visualization windows and statistical methods.
Analysis of arsenic exposure
In this analysis we are interested in the temporal variability in arsenic exposure in cases versus controls as well as clusters of high arsenic values. Arsenic exposure was calculated by multiplying arsenic concentration (µg/L) by home consumption of water and beverages made with water (L/day) at each residence and for each change in water consumption. Data regarding water and beverage consumption was obtained via survey [17]. We utilize the box plot to look at means and interquartile ranges through time (Figure 3 for 1988). The windows are time linked and show cases (on the left) and controls (on the right). A more evenly distributed exposure to arsenic in the case subset is indicated by the large interquartile, and 1.5X interquartile range. The time plot is another visualization method and provides information over the entire time range (Figure 4 xaxis equals time, y-axis represents arsenic exposure). This graph shows general trends in this preliminary dataset. In the early 1960's arsenic exposure was actually greater for controls (bottom graph) than for cases (upper graph). We also notice that the highest arsenic value (51 µg/L) occurred for a control in 1964 and lasted until the end of the study period. The highest value for a case (38 µg/L) occurred later in the study period (1990). All records are linked to the map view and an investigation of geographical clustering can occur in tandem with the temporal analysis of the time plot.
In addition to the graphical analysis we employed statistical clustering methods to identify spatial clusters of homes with high arsenic concentrations in their water supplies. The Univariate Local Moran is a statistical method used to detect local spatial autocorrelation by decomposing Moran's I into contributions for each location. Here, each location refers to an arsenic value sampled at the home of each participant. Moran's I is a weighted correlation coefficient that is used to determine whether neighbouring areas are more similar than would be expected under the null hypothesis. In this study the local Moran statistic is used to detect where there are statistically significant clusters of high (or low) arsenic values in participants' drinking water. Data regarding arsenic in drinking water was collected at the kitchen tap of each participant from their present residence. Water samples were stored on ice, acidified with 0.2% trace metal grade nitric acid, and refrigerated until analysis. Water samples were subsequently analyzed for arsenic using an inductively coupled plasma mass spectrometer (ICP-MS, Argilent Technologies Model 7500 c) [17]. A map of arsenic values from participants' drinking water is shown in Figure 5. The Local Moran analysis was performed on this arsenic dataset resulting in a map of signif-Local Moran analysis at two spatial scales Figure 6 Local Moran analysis at two spatial scales Local Moran analysis with five nearest neighbours is on the left, and with ten nearest neighbours is on the right. Notice the appearance of the high-high cluster to the north, and the increase in size of the low-low cluster to the west as the size of the local neighbourhood is increased icant clusters (identifying areas as high-high clusters, lowlow clusters, low-high outliers, high-low outliers, and areas not significant from background), and a local moran scatterplot. Figure 6 is the result of the local Moran analysis using spatial weights of five (left) and ten (right) nearest neighbours, with 999 randomizations, at the alpha level of 0.05. Generally the two maps look similar, and this is corroborated by similar Global Moran's I values of 0.126279 for five nearest neighbours and 0.129596 for ten nearest neighbours. However, there are differences that arise from analysing spatial pattern at two different local spatial scales. For example, in the northern region of the ten nearest neighbour map we find high-high values indicating high arsenic values surrounded by other high arsenic values. We also see an area of low-low values in the western part of the map, around Lansing. Households in these low-low locations are generally on community water supplies where arsenic values are kept below 50 µg/ L to comply with Environmental Protection Agency standards. Conducting the Local Moran analysis at different neighbourhood sizes allows one to evaluate the sensitivity of clustering to different spatial scales.
Conclusions
In this paper we presented a novel application of a space time information system to analyze some preliminary data in an ongoing case-control bladder cancer study. This approach is significant in that it not only visualizes the movement and attribute changes of spatial objects (including cases, controls, arsenic producing industries, and municipal water supplies) but also allows the user to compare values of these objects over time by time-linking windows. This ability to handle high temporal resolution data is enabling new approaches to exposure assessment. In the near future the STIS will be able to integrate exposure assessment models using an Application Programmers Interface (API). Users will have the flexibility to program specific models outside the software and then visualize their outcome in the STIS using the API. For less technically sophisticated users, a methods toolbar will be included, where common modelling algorithms will be made available using a simple calculator-type interface.
Other plans for the software include importing and supporting raster files, exporting animated maps as movies (for presentations), visualizing geospatial lifelines [18,19] in a separate window once objects are selected, and adding spatiotemporal clustering statistics to the methods toolbar.
Publish with Bio Med Central and every scientist can read your work free of charge | 5,758.2 | 2004-11-08T00:00:00.000 | [
"Environmental Science",
"Geography",
"Computer Science",
"Medicine"
] |
Bioinspired Silica Offers a Novel, Green, and Biocompatible Alternative to Traditional Drug Delivery Systems
: Development of drug delivery systems (DDS) is essential in many cases to remedy the limitations of free drug molecules. Silica has been of great interest as a DDS due to being more robust and versatile than other types of DDS (e.g., liposomes). Using ibuprofen as a model drug, we investigated bioinspired silica (BIS) as a new DDS and compared it to mesoporous silica (MS); the latter has received much attention for drug delivery applications. BIS is synthesized under benign conditions without the use of hazardous chemicals, which enables controllable in situ loading of drugs by carefully designing the DDS formulation conditions. Here, we systematically studied these conditions (e.g., chemistry, concentration, and pH) to understand BIS as a DDS and further achieve high loading and release of ibuprofen. Drug loading into BIS could be enhanced (up to 70%) by increasing the concentration of the bioinspired additive. Increasing the silicate concentration increased the release to 50%. Finally, acidic synthesis conditions could raise loading e ffi ciency to 62% while also increasing the total mass of drug released. By identifying ideal formulation conditions for BIS, we produced a DDS that was able to release fi vefold more drug per weight of silica when compared with MCM-41. Biocompatibility of BIS was also investigated, and it was found that, although ∼ 20% of BIS was able to pass through the gut wall into the bloodstream, it was nonhemolytic ( ∼ 2% hemolysis at 500 μ g mL − 1 ) when compared to MS (10% hemolysis at the same concentration). Overall, for DDS, it is clear that BIS has several advantages over MS (ease of synthesis, controllability, and lack of hazardous chemicals) as well as being less toxic, making BIS a real potentially viable green alternative to DDS. concentration and pore size distributions for BIS synthesized with di ff erent amines, reactant concentrations, and pH values, SEM images, particle size measurements, and percent ionization data (PDF)
INTRODUCTION
Drug molecules currently on the market, while effective, can have a whole range of limitations which reduce the efficacy of the drug. Some limitations include poor solubility, in vivo degradation, and short systemic circulation times. 1 Due to these factors, to achieve efficacy, drugs may require higher doses, which can result in higher toxicity. 1 One method of improving drug efficacy is by developing drug delivery systems (DDS). 2 Aside from the obvious potential medicinal benefits of DDS, there are also large economic benefits to be gained as new DDS take significantly less time and investment to develop than a new drug molecule (3−4 years and approximately $20−50 million for DDS 1 vs 10−12 years and $500 million for a new drug 3 ).
Many materials have been investigated for the use as DDS, e.g., liposomes, polymeric nanoparticles (e.g., dendrimers), and "hard" nanoparticles, mainly consisting of metals (e.g., gold), metal oxides (e.g., iron oxide, titanium oxide, and silica) or carbon. 4−6 However, relatively few DDS are currently on the market. 7 The main limitations for any DDS becoming a clinical product are the long regulatory journey coupled with issues with biocompatibility, efficacy, and the manufacturing processes. Briefly, a DDS must first be proven to work and be safe in vitro and then in vivo, manufacture should be straightfor-ward, and it should provide significant benefits over risks before it can gain support from patents and financial backing. Next, human clinical trials are carried out and, if these are passed, then the product will go on to become commercialized. 7 This long, multistep process can create obstacles for new DDS and result in the failure of many of them. Due to the high failure rate of DDS, there is huge potential for new developments in this field.
Here, we focus on silica as a DDS because there has been increasing interest in the use of silica nanoparticles for the purpose of drug delivery since 2001, when Vallet-Regi et al. described the effective loading and release of ibuprofen from a type of mesoporous silica nanoparticle (MCM-41). 8 The successful use of a silica DDS over other systems (e.g., liposomes) has been attributed to its thermal and chemical stability as well as versatility compared to those of conventional drug delivery systems. 9−12 Further, silica offers a versatile platform for functionalization with biomolecules to tailor drug release as well as targeted delivery. One of the most common methods of controlling drug release is through functionalizing silica to create stimuli-responsive DDS. This opens up a wide range of external stimuli which can be used to manipulate these materials, ranging from magnetism, ultrasound, and light to the more conventional temperature and pH. 13−16 Functionalization has also shown promise in targeted drug delivery. For example, an interesting avenue is using silica functionalized with cellpenetrating peptides for targeted delivery directly into cytoplasm. 17 Silica can be functionalized with various chemical groups, making it compatible with a range of drugs. Examples of various drugs investigated with silica range from anti-inflammatories such as ibuprofen or aspirin, antibiotics such as gentamicin and erythromycin, and antimalarials and anticancer drugs such as doxorubicin and campothecin. 8−10,18−27 While a gold-coated silica product (Auroshell 28 ) is in the first stage of development as an anticancer agent, there are currently no silica-based drug delivery systems on the market, despite the fact that MS showed some promise as an effective DDS nearly 15 years ago. This delay is due to several limitations, including long and laborious synthesis (synthesis of MCM-41 can take between 10 and 146 h 29−32 ) and use of harsh chemicals, toxic surfactants, hazardous precursors, and harsh conditions (extreme temperatures and pH 31 ). These imply that drug loading can occur only postsynthesis, which adds another step (and extra time) to the synthesis of this type of DDS. Therefore, a greener, economical, scalable, and safer method of synthesizing silica with potential for in situ drug loading is highly favorable.
Biomineralization of silica is observed in several species of aquatic unicellular organisms such as diatoms (a class of algae) 33 as well as in more complex organisms such as some sponge species and even in some plants. 34−36 It was found that specific proteins and biomolecules were involved in the condensation of biosilica such as silicatein and silaffin. 34−36 By understanding the chemistry and role of these biomolecules, we developed analogues of these biomolecules ("additives", typically amines) which have been shown to rapidly condense silica under benign conditions. 37,38 As such, this has enabled discovery of bioinspired silica (BIS) which can be controllably synthesized at room temperature, at a neutral pH, in water, and within 5 min. 39 This also opens the possibility of in situ drug loading, thus allowing a one step, green DDS formulation. 40 Further, amine-ibuprofen interactions have been reported to be favorable for drug delivery, 41−43 which provides another potential benefit of BIS over MS: a possible additional function of the amine additives.
As yet, only five papers have been published on the use of BIS synthesis for drug delivery applications (including one from our group 40 ), suggesting a vast potential for future research. Li et al. utilized a so-called "biomimetic" synthesis route; however, this method retained all of the issues of synthesizing MCM-41 (i.e., long synthesis time, high temperatures, and requirement for calcination). 44 Begum et al. made use of surfactants to create porosity; thus, their system still requires an energy intensive calcination step as well as postsynthesis drug loading. 45 Sano et al. designed a drug molecule which had the dual function of pharmacological activity and silica condensation ability (not all drug molecules will have this dual ability), meaning that the system was limited to only a small set of drug molecules. 46 Lechner et al. linked their cargo molecule to a silica condensing peptide; however, they were not able to fully control drug release. Conjugation of a peptide with a drug has many other issues such as loss of drug activity, use of hazardous chemicals, and also an extra synthesis step. 47 Preliminary work from our group reported the green synthesis of silica with in situ drug loading of calcein (a hydrophilic druglike molecule). 40 This synthesis required no calcination, and the amine additive was separate from the drug molecule. BIS did not show any significant toxic effects to either fibroblasts or human monocytes in the resting state even at high silica concentrations. However, mesoporous silica particles showed substantially reduced cell viability even at low concentrations. For example, the silica concentration required to reduce cell viability to 50% (IC 50 ) was 5−10 times more for BIS than for MCM-41. Further, BIS did not induce secretion of inflammatory cytokines at the concentrations proposed for use in DDS. 40 From these results, it is evident that, despite the use of amine additives, BIS is safe and does not cause concerning cytotoxicity.
In the present study, we aimed to further extend BIS to a pharmaceutically active drug molecule (ibuprofen) and create a DDS formulation which, through carefully investigating and understanding the formulation chemistry, would have the ability to control the loading and release of pharmaceutically active drugs. Ibuprofen was chosen because it is a commonly used model drug for DDS development due to its small molecular size (1.0 × 0.6 nm 2 ), 8 stability, 48 ease of detection (UV absorbance at ∼220 nm), and available literature on ibuprofen-silica systems for comparison. The main aim of this research is to primarily understand in situ drug loading into the BIS system. Specifically, we plan to determine predictive rules and investigate the effects of the amine additive, drug interactions, and silica chemistry on DDS performance (drug loading and release profiles). Further, to make BIS a viable DDS, it should exhibit loading and release profiles for ibuprofen similar to or improved over those of the competitor MCM-41based DDS.
2.2. In Situ Drug Loading into BIS and Drug Release. To a solution of sodium metasilicate in deionized water was added a solution of amine additive (in water) followed by an ibuprofen solution (in 70% ethanol). Then, a known volume of 1 M HCl (the volume of HCl required varied depending on the amine additive used) was added to reduce the pH of the solution to the desired pH (7, unless otherwise stated). The concentrations of the reactants in the final solution were 30 mM sodium metasilicate, 1 mg mL −1 PAH, and 1 mg mL −1 ibuprofen: this ratio was termed 1:1:1. For a 50 mL batch of 1:1:1, 0.3182 g of sodium silicate, 0.05 mL of PAH, and 0.05 g of ibuprofen were used. When BIS was synthesized with other amines (DETA, TEPA, and PEHA), a molar ratio of 1:1 [Si]:[N] was used. This equates to 0.05155 g of DETA, 0.05678 g of TEPA, and 0.05809 g of PEHA for a 50 mL batch. Once acid was added, silica precipitated within seconds, and the solution was left for 5 min before being centrifuged at 8000 rpm for 15 min to stop the reaction. The supernatant was stored at 4°C to determine the drug loading efficiency (percent of drug loaded into the silica) and drug content (percent weight of drug in the silica-drug complex) via the method described in Section 2.4. The silica pellet was resuspended, washed in deionized water, centrifuged two more times (no detectable drug was observed in these supernatants), and finally dried at 45°C for at least 5 h.
Once dried, 10 mg of the silica was suspended in 1.4 mL of PBS (pH 7.4) and incubated at 37°C to measure drug release. At each time point (1, 3, 5, 7, and 24 h time points), samples were centrifuged at 8000 rpm for 15 min, and 1 mL of the supernatant was used for highperformance liquid chromatography (HPLC) analysis and replaced with fresh PBS to satisfy the perfect sink conditions for determination of the diffusion parameters. Release is expressed as the percent of the loaded drug which has been released from 10 mg of silica. Each sample was prepared in triplicate, and release profiles were measured from each sample in triplicate.
2.3. Synthesis of MCM-41 and Postsynthesis Drug Loading. MCM-41 was synthesized by first dissolving CTAB in 300 mL of 25% ammonia at 35°C. While being stirred, 20 mL of TEOS was slowly added. This solution was then stirred for 3 h and aged for 24 h at room temperature in a closed container to allow silica to form. The product was then vacuum filtered and washed with 1 L of distilled water and finally dried overnight at 85°C. To remove the surfactant (CTAB), MCM-41 was calcinated at 500°C for 5 h. This was based on previously published methods. 12 To load the drug, 10 mg of MCM-41 was immersed in a 1 mg mL −1 solution of ibuprofen (in 70% ethanol) at 37°C overnight. Samples were centrifuged at 8000 rpm for 15 min, and the supernatant was removed (and supernatant drug concentration was measured to determine loading efficiency) and replaced with fresh PBS for a release experiment. At each time point, samples were centrifuged at 8000 rpm for 15 min, and 1 mL of the supernatant was taken for HPLC analysis and replaced with fresh PBS.
2.4. Drug Detection via HPLC. Drug loading and release were determined via an HPLC analysis method. A DIONEX system was used with an autosampler (GINA50), pump (P580), and variable wavelength detector (UVD170S) along with an ACE 5 C-18 column (150 × 4.6 nm with 5 μm particle size) at room temperature. An isocratic reverse phase HPLC method was used with a 30 μL injection volume and a mobile phase of acetonitrile:0.1% formic acid (70:30) at a flow rate of 1 mL min −1 . Ibuprofen retention time was approximately 4.7 min and was detected at a wavelength of 220 nm (λ max wavelength of ibuprofen). Data were collected using Chromeleon V6.80 software, and peaks were integrated to determine drug concentration. Data were fitted with a single exponential (eq 1) where Y 0 is the final release (%), A is a constant, and R 0 is the slope. When A and R 0 were multiplied, the maximum rate of release (% release per hour) was deduced.
Silica samples were characterized using nitrogen adsorption in a micromeritics ASAP 2420 Accelerated Surface Area and Porosimetry system. Samples were first weighed and degassed in optimum pressure and temperature conditions (120°C). They were then held at the boiling point of nitrogen and evacuated, allowing nitrogen gas to enter the sample tubes while the pressure was monitored. Analyses of the data included BET (Brunauer−Emmett− Teller 49 ) theory, used to characterize the surface areas of the silica particles, and the BJH (Barrett−Joyner−Halenda 50 ) theory, which allowed for the characterization of the silica pore size distributions.
Silica samples were imaged by scanning electron microscopy (SEM) using a Hitachi SU6600 field electron SEM. Samples were mounted on sample holders using sticky carbon tape and then gold splutter coated under vacuum to prevent charging of the sample. Micrographs were taken using a 20 kV potential difference and a working distance of 8.7 mm.
2.6. Measuring Movement of Silica across the Gut Wall. Rats (200−250 g, male, Sprague−Dawley) were anesthetized via intraperitoneal injection with pentobarbitone (60 mg/kg) and sacrificed for the experiment. The small intestine was removed and washed with 37°C Krebs solution (made from distilled H 2 O, 16.09% (w/v) NaCl, Intestines were then inverted and bathed in Krebs solution while ensuring the 37°C temperature was kept constant. Small sections of gut (∼5−6 cm) were cut and tied closed at one end with thread and filled with 1 mL of fresh Krebs solution, and then the open end was also tied closed.
To verify the health of the sections of gut, a control experiment was set up which measured the passage of glucose across the gut wall. Sections of gut were either immersed in 6 mL of a 1 mM glucose solution or in a 1 mM DNP (dinitrophenol) solution (to inhibit the active transport of glucose 51 ) for 15 min at 37°C before a glucose solution (to make a final concentration of 1 mM) was added. Sections of gut were then incubated at 37°C for an hour before being cut open and their contents removed. Glucose concentrations were measured using a glucose (gluc-pap) assay kit purchased from Randox.
To measure the passage of silica through the gut wall, fluorescent silica was prepared using the same method as in Section 2.2 except that PAH-FITC was used as the amine additive, thus creating fluorescently labeled silica. Fluorescence was measured on a RF-530IPC fluorometer at an excitation wavelength of 495 nm and an emission wavelength of 515 nm. Tubes of inverted rat gut sections were incubated in a 1 mg/ mL silica solution (in Krebs) or a 1 mg mL −1 silica solution and 1 mM DNP for an hour at 37°C. Gut sections were then cut open, their contents removed, and their fluorescence measured; the sections were then fixed in a formalin solution (neutral buffered 10%) for 30 min followed by two PBS (pH 7.4) washes. The inside and outside surfaces of the gut sections were then imaged using a Carl Zeiss Axio Imager Z1 with 10×/0.30 lens. Sections of gut were mounted either by stretching the gut and pinning the edges or compressing gut sections under Immu-mount and coverslips.
2.7. Hemolytic Activity of Silica. To measure the hemolytic activity of silica, rats (Sprague−Dawley) were bled, and the blood was stabilized with heparin (100 μL of 1000 units ml −1 ). Four milliliters of heparin stabilized blood was diluted with 9 mL of Dulbecco's PBS and centrifuged at 2250g for 5 min. The supernatant was carefully removed, and the blood was washed five times with D-PBS. After the last wash, the red blood cells (RBC) were diluted with 40 mL of D-PBS. Diluted RBC (0.2 mL) were then added to 0.8 mL of silica suspension at the desired concentration to make a final silica suspension. Positive and negative controls were set up by adding 0.2 mL of RBC to either 0.8 mL D-PBS or 0.8 mL of 0.2% Triton X-100. All samples were prepared in triplicate and briefly vortexed before
ACS Biomaterials Science & Engineering
Article being left static at room temperature for 4 h. Samples were then vortexed again and centrifuged at 10000g for 2 min. Supernatant (10 μL) was used to test the absorbance of hemoglobin using an anthos2020 plate reader at 577 nm with a reference wavelength of 655 nm. Hemolysis was calculated as % hemolysis = [(sample absorbance − negative control)/(positive control − negative control)] × 100. 52
RESULTS AND DISCUSSION
In order for BIS to be developed as an effective DDS, one must understand the synthesis chemistry and the mechanisms that dictate the loading and release of drug molecules from the system. There has been little information published on the loading mechanics of the BIS system and it has been speculated, but not proven, that embedded amine, originally employed to facilitate silica condensation, also helps to functionalize the silica. 40 If this is the case, then the BIS DDS can be synthesized, functionalized, and drug-loaded all in one step, which is a vast improvement on the long multistep process involved in MS. All of these possibilities were investigated herein.
3.1. Effect of the Amine Additive on the Loading and Release of Ibuprofen. The effect of the choice of amine additive for the synthesis of BIS upon its ability to load and release calcein (a nonpharmaceutically active but "drug-like" molecule) has previously been reported. 40 As these effects are drug specific, we investigated them for an active drug molecule (ibuprofen) in the BIS system and compared earlier results for calcein with those for ibuprofen.
To screen the most suitable systems, four additives were investigated: three small amines and one polyamine. These were chosen based on their silica precipitation performance and previous investigations into BIS. 37,38,40,53 We measured the loading efficiency (amount of drug loaded on DDS when compared to the concentration used for loading), drug content in the DDS (amount of drug loaded per weight of DDS), and total amount of drug released (mg drug/10 mg DDS). DETA, a small amine, was immediately excluded for use as it had a
ACS Biomaterials Science & Engineering
Article loading efficiency of only <5% ( Figure 1A). The other amines used were PEHA, TEPA, and PAH, and they exhibited loading efficiencies of 20−30%, while MCM-41 showed ∼40% loading efficiency ( Figure 1A). These differences between BIS and MCM-41 are likely due to the different methods by which the drug was loaded into these two types of silica. For BIS, ibuprofen was loaded in situ, and the drug would have been entrapped within the silica particles followed by some surface physisorption. With MCM-41, only postsynthesis loading was possible, and drug loading was entirely reliant on physisorption (hence surface area and porosity is important in this system).
Focusing on drug release from these DDS, despite having loading efficiencies similar to those of BIS-PAH, BIS-TEPA and BIS-PEHA released <2% of the loaded drug, and as such, these amines must also be discarded (Figures 1B and C). Approximately 22% of loaded ibuprofen was released from BIS-PAH compared to the 39% released from MCM-41 ( Figure 1B and Table S1). The release data appeared to fit well using a single exponential equation with >0.9 R 2 values in all cases (Table S1). The fitting showed that BIS-DETA, BIS-TEPA, and BIS-PEHA all had very low release rates ( Figures 1B and C). However, the rates of release (Table S1) from MCM-41 and BIS-PAH were similar (15 and 17% per hour, respectively).
The drug loading efficiency on MCM-41 was found to be 41%, while the loading efficiency for BIS-PAH was 23% ( Figure 1A). Despite this, MCM-41 released around half the amount of drug when compared to BIS-PAH (0.12 mg compared to 0.28 mg for 10 mg DDS, respectively). This implies that for a dose of 1 mg of ibuprofen, a patient would have to take ∼83 mg of MCM-41 compared to only ∼54 mg of BIS-PAH. High doses of MCM-41 silica can result in serious toxicity issues, unlike with BIS, 40,52 which highlights a key benefit of using BIS.
The differences in release profiles between BIS synthesized with the different amines are likely to be due to the porosity and morphology characteristics of the synthesized silica ( Figures 1D and S1B). Adsorption of drugs is a function of pore size, pore volume, and surface area; particle size does not have any impact on release. 54,55 In the case of MCM-41 (a mesoporous silica), it is generally accepted that porosity is a major factor in controlling the release of drugs, and so further investigation was needed to determine whether this was the case for BIS. 43,56,57 BIS-DETA, TEPA, and PEHA all have a very small pore volumes (∼0.1 cm 3 /g) and low surface areas (∼20−40 m 2 /g) ( Figure 1D). The pore volume and surface area for BIS-PAH (0.74 cm 3 /g and 129 m 2 /g, respectively) were higher than those of silica synthesized with the other three amines. This suggests that silica particles synthesized with any of the small amines were dense when compared to BIS-PAH, which explains the higher release from BIS-PAH within the BIS series. These observations explain why the BIS-DETA, TEPA, and PEHA samples exhibit poor drug loading/release when compared with BIS-PAH. Interestingly, MCM-41 has a much larger surface area (989 m 2 /g) than any of the BIS samples but
ACS Biomaterials Science & Engineering
Article demonstrated loading efficiency comparable to that of BIS-PAH. SEM revealed that BIS-PAH particles were fairly uniform in shape and size, exhibiting a range between 72 ± 17 and 78 ± 18 nm (Figures S2A and B) without and with the drug, respectively, thus suggesting that the presence of the drug did not affect the particle sizes significantly. On the other hand, MCM-41 samples used herein were not only very large in comparison (3340 ± 1013 nm, Figures S1A and B) but also nonuniform with large variations in size and shape. Further, it is interesting to note that, despite the difference in particle size between MCM-41 and BIS-PAH, the amounts of drug released were often comparable. At this point in time, a direct comparison between these two DDS is not possible simply based on SEM results because of their distinctly different drug loading mechanisms, and further analysis in future is necessary.
Along with porosity altering the release of ibuprofen, it has been reported that the amine-ibuprofen interaction is important in loading. 40,42,58 Because BIS-TEPA and BIS-PEHA showed over 30% drug loading efficiency, it is possible that the amine additives facilitate ibuprofen loading through favorable aminedrug interactions as reported elsewhere 41−43,58 but that they also form nonporous silica by fully encapsulating ibuprofen within the dense silica particles, thus resulting in very low release. PAH, however, allows ibuprofen loading through favorable interactions with amine groups, and release occurs through the silica pores. These observations are consistent with the literature, where it has been reported that these small amines lead to the formation of dense and nonporous silica, while PAH forms porous silica. 37
Altering Reactant Concentrations To
Understand the Silica-Drug System. The main aim here is to understand the DDS and investigate how controllable it is with ibuprofen so that this knowledge can be implemented for other drugs. As such, our next step was to study the effects of reaction chemistry on DDS performance. There has been some evidence that altering reactant concentrations can alter the loading and release profiles of calcein from BIS synthesized with PAH; 40 however, the reasons behind this effect were not fully investigated. Therefore, a systematic approach by varying synthesis conditions and evaluating their effects on drug loading and release has been taken while keeping the starting concentration of ibuprofen in the reaction mixture constant (1 mg mL −1 ).
Figures 2A and Table S2 show that for MCM-41 (as reported in the section above), the loading efficiency was ∼40% and the drug content was ∼3 wt %. The loading efficiency and drug content for the 1:1 BIS-PAH sample (30 mM solution of sodium metasilicate and a 1 mg mL −1 solution of PAH) were ∼22% and 13 wt %, respectively. When the concentrations of silicate and PAH were doubled (2:2), there was a doubling of ibuprofen loading efficiency (Figure 2A). This was attributed simply to more silica being formed (Table S2) because the drug content did not change (Figure 2A). When only the silicate concentration was increased and the PAH concentration was kept at 1 mg mL −1 (2:1), there was a slight increase in ibuprofen loading efficiency (Figure 2A), but drug content remained unchanged, which was attributed simply due to an increased silica yield (Table S2). Producing more silica means that more ibuprofen was loaded (and so less was wasted by being left in the reaction mixture). Interestingly, when a synthesis ratio of 1:2 (increasing PAH concentration but maintaining silicate concentration) was investigated, drug loading efficiency increased 3-fold to 75% (Figure 2A). This loading efficiency (which was significantly higher than that found for MCM-41 (∼40%)) was produced from a significantly lower silica yield (Table S2). The drug content also increased substantially from ∼10% for 1:1 to ∼70% for 1:2. This is likely due to a drug-amine interaction, suggesting that the amine can have a dual function of facilitating silica condensation as well as acting as a functionalization agent to facilitate drug loading (see Section 3.3 for further discussion). These loading studies highlight that the synthetic conditions can readily modulate the loading efficiency of BIS and even reach loadings that are significantly higher than what is achievable with MCM-1.
Finally, the release of ibuprofen from these samples was investigated, and it was found that the overall release of ibuprofen from different silica varied. BIS-PAH (1:1) released 22% of the loaded ibuprofen, and 2:2 and 2:1 both achieved higher releases (45 and 50%, respectively), which were greater than the 39% released from MCM-41 ( Figures 2B and C). It is possible that release from 2:2 and 2:1 was higher than that from 1:1 due to faster silica condensation because the silica precursor concentration used was doubled. 59 This resulted in lower pore volumes and smaller pores ( Figures 2D and S3B), leading to less drug being entrapped within the silica and remaining mainly as surface bound, making release easier. In contrast, a 1:2 ratio released only 6% of loaded ibuprofen ( Figure 2B) despite a very high loading efficiency and larger pore size ( Figures 2D and S3B).
When the release profiles were considered ( Figure S3A), all but the 2:1 samples exhibited burst release, where the majority of drug was released over the first 5 h and very little release was observed after this point (Table S2). This suggests that the ibuprofen that is able to escape is mainly surface bound and any ibuprofen embedded within the silica particles is trapped and unable to be released. This idea is supported by Figure S3A where all the BIS release profiles were similar to the release profile of MCM-41, which only had surface bound ibuprofen loaded. However, the 1:2 system had a much lower maximum release rate than the other systems (Table S2) as well as low total release ( Figure 2B). Table S2 also shows that the mass of ibuprofen released from all the BIS systems were higher than from MCM-41, some BIS samples releasing 5x more drug per weight of silica than MCM-41. This is important since if more mg of drug is released then less silica will need to be administered to a patient.
3.3. Understanding Additive-Drug Interactions To Control DDS Formulation. Ibuprofen contains a carboxylic acid group, which is expected to interact with amines. Several studies have exploited these favorable amine-ibuprofen interactions by postsynthetically functionalizing MS. [41][42][43]58 In addition, from the results presented above, there was an indication that the PAH-ibuprofen interactions are important for drug loading and release. Therefore, we investigated whether drug loading and release could be controlled by tuning PAH-ibuprofen interactions by varying the synthesis pH (and in turn the protonation). In this study, silica was usually formed at pH 7 as silica formation is the quickest at neutral pH for this synthesis method. 40,59 BIS will not readily form outside the pH ranges of 5−9; hence, we focused on exploring drug loading under this pH range and monitored the effect of formulation pH on the drug release ( Figure 3).
When silica was condensed at pH ≥ 7, the loading efficiency was not altered (remaining at ∼20%, Figure 3A). When the synthesis pH was more acidic, on the other hand, ibuprofen loading efficiency could be enhanced up to three times (to 60%) at pH 5. A similar picture was observed for the drug content (wt %) shown in Figure 3A. The release for samples formulated at pH ≤ 7 was similar ( Figure 3C and Table S3), whereas DDS formulated at pH ≥ 7 had greatly diminished release. It should be noted that all release experiments were carried out in PBS at pH 7.2. Interestingly, despite the higher drug loading at pH 5, there was no corresponding higher release observed when compared with DDS formulated at pH 7 ( Figure 3B). Despite this, the total ibuprofen (mg) released per weight of silica for the pH 5 sample was 10 times higher than that for the MCM-41 sample ( Figure 3D) When release was plotted as a fraction of total release over time, two different release profiles became apparent ( Figure 3C). BIS-PAH synthesized at pH ≤ 7 exhibited a burst release profile similar to those observed for BIS samples reported above (also evident from high release rates, Table S3), where the majority of ibuprofen was released from the silica in the first 5 h, and very little was released after this. This burst release profile was similar to that seen for MCM-41, suggesting that the main mechanism for release in these systems was release from the surface. However, silica synthesized at pH > 7 appeared to have a slow and sustained release profile, which was also reflected in slow release rates (Table S3). Release did not plateau for 24 h, and ibuprofen maintained a slow release over the course of the experiment. This slow release suggested that the loaded ibuprofen was embedded within the silica rather than bound to the surface, making release more prolonged. While the total amount of ibuprofen released from these
ACS Biomaterials Science & Engineering
Article samples under the 24 h observation window was low, this system does show some promise as a prolonged release system. It is clear from the results presented that the DDS formulation pH controlled the loading and release of ibuprofen. This could be caused by differences in porosity, morphology, and/or additive-drug interaction. SEM results suggested that pH did not have a significant effect on the morphology or particle sizes of DDS (Figures S2A and B). When surface area and pore volume were measured for BIS-PAH DDS formulated at different pH conditions, there were no significant differences observed (Figures 3E and S4). The differences in ibuprofen loading in these systems can then likely be attributed to ionization of the three components present (silica, amine additive, and drug) in the reaction mixture as well as the silica formation pathways. A scheme showing how the proportions of ionized reactants vary as the reactant pH is altered can be seen in Figure 4 and Table S4. The results here suggest that the negative charge on silica can have an inhibitory effect on loading efficiency. Both the silica surface and ibuprofen are negatively charged at pH ≥ 7 (Table S4), and therefore silica and the drug will repel one another, thus explaining low loading efficiencies at pH ≥ 7 (only ∼20% of ibuprofen was loaded under these conditions, Figure 3A and Table S3). With DDS formulations prepared under acidic conditions, and particularly at pH 5, the silica and ibuprofen are both significantly less charged, thus allowing ibuprofen to be more efficiently loaded (30−60% of ibuprofen was loaded under acidic conditions, Figure 3A and Table S3).
It is clear that pH has a drastic effect on the loading efficiency of ibuprofen into BIS, with more acidic conditions resulting in increased loading. There is also strong evidence of an aminedrug interaction playing a major role in the ability of BIS to load the drug. This interaction, when too strong, can also inhibit drug release.
3.4. Biocompatibility of BIS. Due to the ease and noninvasive nature of administration, oral delivery of drugs is the most preferred route for patients. 60 Silica is an ideal material for oral drug delivery due to its stability under the conditions found in the gastrointestinal tract, especially the low pH found in the stomach (pH 1−3), and therefore it is able to protect the loaded drug molecules from the changes in pH as well as degradative enzymes and bile salts. 61,62 While amine functionalization is beneficial for drug loading and controlling release, exposure of amine-functionalized MS to cells has been reported to result in a higher cytotoxicity, 40 higher level of plasma membrane damage, and higher hemolytic activity. 52 BIS was reported to be either noncytotoxic, toxic only at extremely high concentrations, or when internalized into activated macrophages. 40 To further improve our understanding of BIS, it is important to uncover the fate of orally administered silica.
A simple and effective experiment was set up using sections of rat gut and measuring the movement of fluorescently tagged BIS-PAH (FITC-BIS-PAH) across the gut wall over an hour. FITC-BIS-PAH was synthesized using FITC-tagged PAH so that its movement through the gut wall could be measured. We observed that ∼22% of silica moved across the gut wall during the hour-long incubation ( Figure 5). This movement was through passive diffusion because it was not affected by the addition of an inhibitor of active transport (DNP). To further observe the movement of silica particles through the gut wall, fluorescence microscopy images of the inner and outer surfaces of the rat gut were taken ( Figure 5). It is clear that when no silica is present, there are no defined points of fluorescence, but in the gut sections exposed to silica and silica with DNP, defined points of silica are observed. Silica was clearly localized on both sides of the gut wall, confirming its movement. Due to
ACS Biomaterials Science & Engineering
Article the ability of BIS-PAH to pass through the gut wall, it became important to investigate its biocompatibility with other cell types, particularly RBC.
The effect of BIS on RBC was determined through hemolytic activity of BIS when exposed to RBC. Figure 6 shows that BIS-PAH had very low hemolytic activity, lysing only 2% of RBC at the highest concentration used (500 μg/mL) and only 0.6% at the concentration which passed through the gut wall (∼250 μg/mL). MCM-41 exhibited a higher hemolytic activity, rising to 10% at 500 μg/mL. The reasons behind this difference were initially unclear but may be related to the size of the particles. It has been reported that silica particle size affects hemolysis. 63 BIS-PAH particles were spherical (78 ± 18 nm in diameter, Figures S2A and B) and significantly smaller than the irregular MCM-41 particles used (3340 ± 1013 nm in diameter), which could partly explain the difference in hemolytic activity between BIS and MCM-41. SEM data also show that, although BIS primary particles were <100 nm, they form micrometer-sized agglomerates and rapidly precipitate (hence DLS was not possible or useful). It is thus expected that BIS particles are as toxic as MCM-41 simply based on their sizes, but this was not observed. Although further work is required on BIS to fully understand their biocompatibility, our present and previous results show that BIS is more biocompatible than MS.
CONCLUSIONS
Our primary aim was to develop an in situ drug loading and release system using BIS. The BIS system can be controlled using many factors such as the choice of amine additive, pH of synthesis, kinetics of synthesis, and eventual location of the drug within the silica (Figure 7). Our results identified that the ideal formulation is BIS-PAH synthesized with a reactant ratio of 2:2. Formulation under an acidic pH was found to be suitable for designing DDS for faster targeted release, while basic pH was preferred for sustained release (Figure 7). Although a small portion of BIS-PAH was able to pass through the gut wall into the bloodstream, due to its low hemolytic activity, that does not appear to be an issue, in contrast to MCM-41. Ultimately, BIS appears to have several advantages over MCM-41 (such as one step formulation, simple controllability, and lack of hazardous chemicals), and it was found that BIS has drug loading and release profiles similar to or improved over those of MCM-41 in addition to superior biocompatibility. These benefits give BIS real potential as a viable DDS to be further investigated. We believe that the understanding of the DDS formulation using BIS that has emerged from this work can enable the discovery and development of a wide variety of DDS.
* S Supporting Information
The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acsbiomaterials.6b00224.
Data for loading and release profiles, mathematical fitting of release data, release presented as percent of final
ACS Biomaterials Science & Engineering
Article concentration and pore size distributions for BIS synthesized with different amines, reactant concentrations, and pH values, SEM images, particle size measurements, and percent ionization data (PDF) | 9,247 | 2016-08-24T00:00:00.000 | [
"Biology",
"Materials Science"
] |
The Effects of Native Language on Requirements Quality
[Context and motivation] More and more often software development projects involve participants of diverse nationalities and languages. Thus, software companies tend to use English as their business language. Moreover, to better prepare for future jobs, students consciously choose university courses in English. [Question/problem] As a result there is an increasing number of software engineers who are working or studying in a language which is not their native language. The question arises whether native language has an effect on the quality of natural language requirements. [Principal ideas/results] From the analysis of the requirements formulated by 44 participants of our empirical study, it follows that native language may have a negative effect on requirements quality, e.g., ambiguity, variability, and grammar issues. Furthermore, different native languages might drive to different quality issues. [Contribution] In order to prevent quality issues, our findings might be used by educators to adjust their materials to cater to different language groups, while practitioners might use them to improve their requirements review process.
I. INTRODUCTION
S OFTWARE engineering is a diverse field, both in terms of research areas and worker backgrounds.This diversity is present in the industry, and companies are increasingly using English as their business language, no matter what country they are based in.University students are also globally mobile, with many who have the means often choosing to study all or part of their higher education abroad in English.This means that there is an increasing number of software engineers who are working or studying in a language that is not their native language.
Software engineers often use requirements specifications, either writing or developing systems from them, where the quality of the specification could determine the quality of the end product.The success of a software development project is said to depend on the quality of its requirements specification [1], [2].Requirements are often written in natural language and, thus, the language used in that requirement could also have an effect on the quality of the specification.
The purpose of this study is to analyze natural language requirements written in English to determine (1) whether a author's native language has an effect on the quality of these requirements, and (2) which qualities are affected.In this paper, the term "native language" is defined as being the language of the country in which a person is born, raised, and receives early years of education.In an agile context, natural language requirements can either be written in the Software Requirements Specification (SRS) style or as user stories.
The findings from this study could support industry practitioners, research, and requirements engineering education.Targeted teaching and training could be developed to improve not only the overall quality of requirements but also to focus on the qualities that native speakers frequently have problems with.The study outcome could also help companies with requirements review processes, and quality checklists definition to identify or avoid requirements issues early on in development.
II. BACKGROUND AND RELATED WORK
The IEEE Recommended Practice for Software Requirements Specifications [3] presents guidelines on how to produce "good" natural language SRS-style requirements.The guidelines detail eight characteristics that individual requirements should possess and five characteristics that a set of requirements should have.The recommended practice states that individual requirements should be: necessary; appropriate; unambiguous; complete; singular; feasible; verifiable; correct; and conforming (when applicable).A set of requirements should be: complete; consistent; feasible; comprehensible; and able to be validated.If an individual requirement or set of requirements violates one or more of these qualities, then it is not considered to be "good".
The INVEST criteria, originally discussed by Wake in 2003 [4], are specifically for evaluating the quality of user stories, rather than SRS-style requirements.According to the criteria, a user story should be: independent, negotiable, valuable, estimable, small, and testable [5].If the story does not meet one or more of these criteria, then it is not of good quality.
There is a large body of work on requirements quality, with some focusing on specific qualities of a requirements specification and others giving a broader overview of what quality might be.Kiyavitskaya et al. [1] and Fabbrini et al. [6] take a detailed linguistic approach to identify ambiguity in requirements specifications.Antinyan et al. [7] focus on different requirements quality and developed a metric to measure the complexity of a requirement.With a broader look at all of the potential qualities of a requirements specification, Knauss et al. [8] developed a GQM approach to improving requirements quality.Genova et al. [2] also had a wider view of which requirements qualities to consider when creating the framework and tool for improving the quality of a requirements specification.However, while these studies were conducted in English, none of them looked at the linguistic background of the participants.
III. RESEARCH METHODOLOGY
Research Questions.Our study aims to answer the following research questions: RQ1: Does the native language have an effect on the quality of natural language requirements?
• RQ1.1:Which requirements qualities are affected?• RQ1.2: Do any particular languages have greater effects on requirements quality?Participants and Data Collection.We aimed to find participants who had a software engineering background, and who could potentially be asked to write requirements.The participants were selected on the basis of convenience sampling.Survey participants were reached via the REFSQ 2022 conference, LinkedIn, Facebook, Twitter, Discord, and email, and via sharing the survey link with the students studying software engineering at the Universities the authors work for.Thus, the participants were a mix of students, researchers, and industry practitioners within software engineering.We created an online survey hosted on sosci.de.The survey was piloted by two representatives of the study participants.We decided to ask two students (those who might have the lowest experience with requirements) who gave feedback which was used to refine the survey questions.The first five questions in the survey were demographic questions.The sixth question was a simple domain description after which the participant was asked to write five natural language requirements (either SRS style or user stories) for the example domain.The survey questions and study material are available online [9].
Data Analysis.The qualitative data was analyzed using thematic coding as per Saldana [10] with two coding iterations.The thematic coding process used a coding dictionary that we created, which covered violations of any of a selected subset of the IEEE characteristics of individual requirements [3] or four of the INVEST criteria for user stories [4], [5].When analyzing SRS-style requirements, we used the 2018 IEEE guidelines [3] that detail what good individual requirements should possess: correct; ambiguous; verifiable; necessary; appropriate; complete; singular; and feasible.The characteristic of "conforming" was not included in the analysis as the participants in the case study were not given a set template or writing style to follow.We chose to exclude the five characteristics for a set of requirements as we only asked participants to provide a sample of requirements rather than a complete requirements specification, and we did the analysis on each individual requirement.We also looked at whether a requirement is vague because we felt that being imprecise might not necessarily mean the requirement is ambiguous or unverifiable -it may just need more details or explanation.
For user story analysis, we used the INVEST criteria [4], [5]."Independent" was excluded as it would require evaluation of the user stories as a set, while analysis was conducted on individual user stories.We also made note of whether the user story was correctly formed according to the Agile Alliance user story template [5].As SRS-style requirements and user stories have different purposes and quality criteria, we did not use the SRS-style characteristics to analyze user stories, and the INVEST criteria were not applied to SRSstyle requirements.The requirements in this study are in written form, and so we also considered language quality as a contributor to the overall requirements quality.Therefore, we applied codes for typos and grammar issues.
After the first author completed the first analysis pass, a sample of 10 randomly-chosen responses (a total of 50 requirements) was analyzed by the second author.Then, we came together to discuss any differences and how to improve the coding book.Coding was redone by the first author based on these discussions.Tab.I shows three examples of requirements received in the survey and the final codes that were applied.The final coding book with examples is available online [9].Fig. 1 gives an overview of the thematic codes.
IV. RESULTS
47 people answered the survey.However, three respondents did not complete the requirements writing task sufficiently; therefore, 44 survey responses were considered for the analysis with 220 requirements in total.For simplicity, and to aid comparison, we report percentages over all collected requirements (user stories and SRS-style ones), even though not all errors are applicable to all requirements.
Respondent Demographics.Fig 2 shows the native languages of our respondents.The majority of respondents had Polish as a native language, due to the third author sharing the survey link with the Master students of software engineering specialty.Swedish, Chinese, and English were the next most common native languages of respondents.Although there are many dialects and languages, the participants are known as students of Beijing University of Technology where the language of instruction is Beijing Mandarin.
In terms of roles within software engineering, 22/44 respondents were students of master-level studies who might be treated as novice requirements engineers.Industry practitioners were the next largest group with 8 participants, and there were also 5 Researchers.9/44 respondents had multiple roles within software engineering: 6 were both a student and an industry practitioner; 2 were both a student and researcher; one person was an industry practitioner and a researcher.
Among the 14 respondents who selected the industry practitioner role as either their only role or as one of their multiple job roles, 4 stated their roles as "Developer" and 3 "Software Developer".There was one answer each for the following roles: "Senior Software Engineer"; "software engineer"; "System Architect"; "Technical project manager"; 914 PROCEEDINGS OF THE FEDCSIS.WARSAW, POLAND, 2023 the 44 participants) given in the online survey, 233 codes were applied.This means that multiple codes were applied to some requirements.The four codes that were applied the most were: unverifiable (25.91% of all codes); ambiguous (21.82% ); grammar issue (18.64%); and incorrect format (11.82%).
Looking at Table II, the native Chinese speakers had by far the highest percentage of occurrence of unverifiable codes (46.67%).The native Arabic speakers had the second highest percentage (30%), and the native Polish speakers had the third highest percentage of unverifiable requirements with 28.24%.Native Arabic speakers had the highest percentage of ambiguity occurrences with 50% of the requirements given being coded as ambiguous.The Polish native speakers had the second highest percentage of ambiguous code occurrences with 28.24%.
Observation 2: There are four requirements qualities that were affected the most that are: verifiability, unambiguity, grammar correctness, and correct format.
Observation 3: Native speakers of Polish, Arabic, and Chinese introduced the highest number of errors.
Other Factors.In our survey, we collected data on other factors such as level of education, number of languages spoken, and mother tongue.We found that holding a Bachelor's degree as the highest level of education and speaking four or more languages had a negative effect on requirements quality.This data is omitted for space reasons, but results are available online [9].
V. DISCUSSION
All participants in the study did make requirements quality errors, regardless of their native language.However, being a native speaker of Chinese, Arabic or Polish may have a negative influence on the quality of requirements that are written by those speakers.Two of these three languages have a writing system that is entirely different from English, which uses the Roman alphabet.
Unverifiability was the most common error made by the study participants and is a quality that often concerns Non-Functional Requirements (NFRs).The second most common error was Ambiguity.Althouth, as mentioned in Section II, ambiguity is a widely-researched topic within software engineering [11], [12], [1], [13], [14], the results from the study in the present paper suggest that continuing research and education in this area seems still needed.
The third most common error-grammar issues-could also be considered to be connected to ambiguity in some cases.Introducing grammar-checking tools and proofreading into the requirements writing process might help in preventing these errors.Then, there was the incorrect format error type as the survey participants did not use what is considered to be the standard user story format [5], [4].Thus, using such frameworks and tools for improving user story quality [15], [16] might be valuable.
Chinese, Arabic and Polish appeared to have a greater negative effect on requirements quality than the rest of the languages in our studies.However, we cannot claim what is the root cause of this observation.It is necessary to investigate whether requirements quality is affected by the native language itself (linguistic differences), the level of English education, education within software engineering, or other factors.Future studies that discover the root causes might deliver guidelines for requirements for engineers and educators.
VI. THREATS TO VALIDITY
Internal: Thematic coding brings threats to validity due to being subjective in its nature and subject to the bias and experience of the person doing the analysis.In order to mitigate this and minimize the threat, the second author received the coding dictionary that we created and independently coded a sample of 20% of the requirements obtained in the study.The English level of participants was not taken as the variable in the study, but we had an inclusion criterion-the participants need to have enough knowledge and skills so that they are able to either study or work in English.External: The study may not have a large scope of generalisability as even though the survey was shared with nonstudents, a large portion of the data collection was reliant on students.However, it could be argued that the results from student data could be indicative of the software engineering industry as they frequently work and might be treated as novice employees.
VII. CONCLUSION
This study investigates whether native language has an effect on the quality of requirements.The results from the analysis of the online survey data suggest that native language may indeed have an effect on requirements quality as well as on the type of error introduced by the requirements writer.It follows from our study that more work and education need to be carried out on improving verifiability and ambiguity within requirements.Moreover, more training is needed also on how to write user stories so that they are well-formed.Grammar issues were also quite prevalent across all requirements.Our results might be used by practitioners to include quality checks of the errors in their review process and by educators to draw the attention of students to errors they might introduce and teach them how to prevent making those errors.Moreover, researchers might use our results to investigate the root causes of why native speakers of some languages make more errors than native speakers of other languages. | 3,424.4 | 2023-09-17T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Mathematical Formulation of the No-Go Theorem in Horndeski Theory †
We present a brief mathematical-like formulation of the no-go theorem, useful for bouncing and wormhole solutions in Horndeski theory. The no-go theorem is almost identical in the cases of flat FLRW geometry and static, spherically symmetric setting, hence, we generalize the argument of the theorem so that it has consise and universal form. We also give a strict mathematical proof of the no-go argument.
Introduction and Summary
Let us study the most general scalar-tensor theory with second-order equations of motion, namely, the Horndeski theory [1].Despite the fact that the Null Energy condition (NEC) can be violated in a healthy way within this theory (see e.g., Ref. [2]), it turns out that non-trivial solutions without singularities, both cosmological and gravitational, still involve instabilities or strong coupling, provided that one considers the entire space-time manifold.The inevitable presence of pathologies was shown for both static, spherically symmetric, and time-dependent homogeneous solutions.The corresponding statements were formulated as no-go theorems [3][4][5][6].Generally, one finds that the quadratic action for linearized perturbations in Horndeski theory necessarily acquire wrong signs or zeroes in coefficients (in some cases, the latter fact signifies a strong coupling regime).Let us illustrate the issue of stability within a cosmological setup.For a spatially flat metric of the FLRW type, the quadratic action in unitary gauge (i.e., with vanishing scalar field perturbations) is as follows [7,8]: where a is a scale factor, h T ik stands for transverse, traceless tensor perturbations, while ζ is the only dynamical degree of freedom in the scalar sector.Let us note that the quadratic action above is of the most general form in the FLRW background of the Horndeski theory, modulo the specified gauge choice.The coefficients G T , F T , G S , and F S are some expressions in terms of Lagrangian functions, which are constrained by the stability requirement of the linearized theory.Positive G T and G S guarantee the absence of ghosts in tensor and scalar sectors, while positive F T and F S are required for the absence of gradient instabilities.Now, the Infinum of these coefficients naively corresponds to a strong coupling scale, which is why we require G T , F T , G S , and F S in the following to be greater than some positive constant value .
The behaviour of perturbations in a static, spherically symmetric setup is in full analogy to the cosmological one and the corresponding stability analysis gives similar expressions for the constraints.Let us note that we do not give a specific form of any coefficients involved in the quadratic action, since the no-go theorem relies on the general behaviour of the structures involved only.The only important aspect about the coefficients G T , F T , G S , and F S is that both G T and F T are linear combinations of Lagrangian functions and, hence, must be regular.We also use the non-trivial relations between the G S , F S and G T , F T in a setup of the theorem.
In the following section, we formulate the no-go theorem using the notations of the cosmological setup, but as it was explicitly shown in Ref. [9], the structure of stability conditions for the spherically symmetric case is the same; hence, the argument below applies to both homogeneous and inhomogeneous settings.The motivation for restating the theorem in a formal way was to have it in a clear and concise form in order to avoid any future misunderstanding of the concept behind it.We emphasise that we consider strong coupling, i.e., G T → 0, to be a dissatisfying feature.In the next section, we show that due to the particular constraints imposed on the coefficients of the quadratic action in the Horndeski theory, any complete, healthy regular solution has G S and F S , which are singular everywhere.
No-Go Theorem
Theorem 1. Set up: We consider functions on R 1 with the following relations (it is exactly the case of the general Horndeski theory):
Statement:
The only relevant function choice to satisfy the assumptions is Θ = 0 everywhere.
Proof.Suppose Θ = 0 at some point P, due to the continuity Θ = 0 in the vicinity of P. Let us consider the whole interval where Θ = 0.It is limited either by points q ± where Θ = 0 or q ± = ∞.In this interval, functions G S , F S , and ξ are continuous as well, since: therefore, ξ → ∞ on the right boundary (q → q + ) and ξ → −∞ on the left boundary (q → q − ).Finally, since ξ is continuous, ∃ point P where ξ = 0.This contradicts with positive G T and a.Therefore, Θ = 0 everywhere.
Discussion
It should be noted that we did not restrict ourselves to the finite G S and F S .Indeed, taking Θ = 0 at a glance results in singular behaviour of both G S and F S [10][11][12].However, it was checked explicitly 1 Coordinate q stands for time coordinate in a cosmological setup and radial coordinate in a static, spherically symmetric case, for instance, in a wormhole setup. in Ref.
[13] that in the flat FLRW case, the relation F S /G S , which corresponds to the sound speed squared in the scalar sector, remains perfectly finite.The latter fact signifies that the singularities, which arise when Θ hits zero value, are present only on the level of the linearized equations, but not the solutions.Hence, there is nothing wrong with Θ = 0.The statement has its direct analogue in a static, spherically symmetric case-see Ref. [9].
As it was specified in the introduction, the no-go theorem above applies to the Horndeski theory in the homogeneous case with a generally flat FLRW form of metric.In the homogeneous cosmological case, Θ ∝ H, where H is the Hubble parameter (see Refs. [8,10,13] for details).It is worth noting that the empty Minkowski space is a special case of the Horndeski theory in a homogeneous setting, and gives the example of a solution where Θ = 0 always , since a(t) = const and, hence, H = 0, while the rest of the terms involved in Θ vanish.Thus, Minkowski space is an example of a solution which satisfies the no-go theorem.
Generally, the theorem is quite universal when it comes to cosmological applications, since Horndeski theories are the generalization of the numerous scalar-tensor theories with second-order equations of motion.Another direct application of the formulated no-go theorem, as we mentioned earlier, is available in the static, spherically symmetric setting, such as a wormhole setup [6,14,15].
We would also like to briefly comment on possible loopholes here.As , which is involved in the no-go assumptions, is a strong coupling scale, we require it to be strongly positive-yet one can consider asymptotically strong coupling but make sure that scale of the classical evolution stays beyond the strong coupling scale.Another possibility is to consider geodesically incomplete solutions, i.e., asymptotically singular scale factor a. Coordinate q might correspond to time (cosmological scenario, e.g., bounce) or a radial coordinate (spherically symmetric solution, e.g., wormhole).Hence, the no-go theorem indeed summarizes both static and homogeneous cases. | 1,681.4 | 2019-02-02T00:00:00.000 | [
"Mathematics"
] |
Stability of a convex order one periodic solution of unilateral asymptotic type
In this paper, we consider semi-continuous dynamical systems with linear impulsive conditions, which have a convex order one periodic solution of unilateral asymptotic type. By constructing a sequence of switched systems and using the square approximation of the order one periodic solution, some stability criteria of the order one periodic solution are obtained. Compared with the continuous dynamical system, these criteria are very similar and also easily to be applied in the research of practical problems.
However, for the stability of the solution of an impulsive semi-dynamical system, there is still very limited results. Besides the famous Analogue of Poincaré Criterion [13,14] which has been popularly used, researchers have been attempting to obtain more available methods. Tian et al. [15] studied the stability of the positive order one periodic solution for a solvable semi-continuous dynamical system by using geometric approach. E. M. Bonotto et al. and his partners considered Lyapunov stability of closed sets and Poisson stability in impulsive semi-dynamical systems [4,5] and also got a different version of the Poincaré-Bendixson theorem [3]. Successor functions are also directly applied to analyze the stability of the order one periodic solution [10,11,16,17]. Furthermore, several researchers have made attempts to generalize the stability theory of continuous dynamical systems into impulsive semi-dynamical systems [18][19][20][21]. Although there are so many researchers taking part in this work, there is still little result that can be easily applied to show the stability of a solution for impulsive semi-dynamical systems. Even the well-known Analogue of Poincaré Criterion is limited to use because the stability is closely related to the initial value of the periodic solution. Previously, researchers [18][19][20][21] mainly focused on getting related stability results for some specific impulsive semi-dynamical systems. However, to the best of our knowledge, there were no general stability criteria for the order one periodic solutions of impulsive semi-dynamical systems. The purpose of this paper is to establish the stability theory for order one periodic solutions of impulsive semi-dynamical systems.
In this paper, we mainly discuss the stability of a convex order one periodic solution of unilateral asymptotic type. The paper is organized as follows. In Sect. 2, some notation and definitions of the semi-continuous dynamical systems are given. In Sect. 3, we mainly discuss the stability of the order one periodic solution by using the square approximation of switched systems. An applied example is given in Sect. 4 and the paper ends with a brief conclusion.
Preliminaries
In this section, some notation and definitions of semicontinuous dynamical systems are given. They will be used in the following discussions. (1) The dynamical system consisting of the solution mappings of the system (1) is defined as a semi-continuous dynamical system which is denoted by ( , f, ϕ, M). The initial point P is required not in the set M{x, y}, that is, P ∈ = R 2 + \ M{x, y}, and ϕ is a continuous mapping that satisfies ϕ(M) = N . We call ϕ the impulse mapping. M{x, y} and N {x, y} stand straight lines or curves in R 2 + , and we call M{x, y} and N {x, y} the impulse set and phase set, respectively. ) is called an order one periodic solution of system (1) with period T ( see Fig.1, denoted byÃB). The orbit of the order one periodic solution is called an order one cycle ( see Fig.1, denoted byÃB ∪ B A).
is an order one periodic solution of system (1). If for any ε > 0, there must exist δ > 0 and t 0 ≥ 0, such that for any point For order one circles (denoted byÃB ∪ B A for convenience of description ), suppose the trajectoryÃB is not tangent to the impulse set M, that is to say, point B is not a point of tangency. For any pointD in the phase set N near point A, we are interested in the position of its successor pointĒ. According to the position relationship of pointsD ,Ē and the order one periodic solutionÃB, all of order one cycles can be classified into the following three types: Type 1 the order one circleÃB ∪ B A is convex, and pointsD ,Ē are at the same side ofÃB. We call this type of order one periodic solution a convex order one periodic solution of unilateral asymptotic type (see Fig. 1a); Type 2 the order one circleÃB ∪ B A is not convex, but pointsD ,Ē are still at the same side ofÃB (see Fig. 1b); Type 3 pointsD ,Ē are at different sides ofÃB (see Fig. 1c). In the next section, we mainly discuss the stability of an order one periodic solution of the type 1 for a kind of semi-continuous dynamical system with linear impulse functions.
Main results
Consider the following dynamical system with impulsive state feedback control where the impulse functions are linear: SupposeÃB is a convex order one periodic solution of unilateral asymptotic type with period T of system (2) (see Fig. 2a). We denote it by and supposeÃB is not tangent to the impulse set x = h. For any point D in the −neighborhood of A, there must exist a point D in the phase set N such that the trajectory through D passes through point D. If for any point D, both of whose corresponding pointsD andĒ are above point A and F(D) < 0, then the order one periodic solution is unidirectional stable by Theorem 2.5 (see Fig. 2b).
What we need to do in the following is to find a method to verify that the successor function F(D) < 0. So far, however, there is still no available method for the calculation of the successor function of order one periodic solutions. In this paper, with the aid of square approximation of the order one periodic solution and the stability analysis of hybrid limit cycles of a kind of switched system, we will give a computing method for the successor function of order one periodic solutions which is similar to the method for continuous dynamical systems.
For the order one periodic solutionÃB of system (2), we denote A by A(x a , y a ) and B by B(x b , y b ). Since point B is mapped to point A by the impulsive mapping, the time spent by this behavior is 0. In order to use the square approximation of the order one periodic solution, we assume that the time spent by the impulsive mapping is T /n ( see Fig. 2c) and construct the following systems and Fig. 2 The order one periodic solution of system (2) and its square approximation These two systems motivate us to consider an approximation of switched systems for the semi-continuous dynamical system (2). Hence, in order to study the stability of the order one periodic solutionÃB of system (2), we formulate the following hybrid system (switched system whose switching law is determined by states): For simplicity, we introduce the following denotations then the system (5) can be rewritten as or where For system (2), we assume the closed curve consisting of curveÃB and line segment B A is an order one periodic circle. Arbitrarily choose a point S 0 in the phase set x = (1 − α)h which is near point A, then there exists a range of points: where S 1 is the successor point of S 0 , S 2 is the successor point of S 1 , and so on (see Fig. 3a). Establish a coordinate system at the phase set such that the coordinate of A is 0. Let s 0 , s 1 , . . . , s k , s k+1 , . . . be the coordinates of points S 0 , S 1 , . . . , S k , S k+1 , . . . , respectively.
Lemma 3.1 For any point S 0 in the phase set which is near point
i.e., the sequence s k → 0, k → ∞, then the order one periodic solution is asymptotically stable (unidirectional).
then the fixed point s = 0 is stable (unstable).
Proof We just prove the stability when |s s | ≤ 1 − ε, otherwise, the discussion is similar. Select η > 0 small enough such that for any s in the noncentral neighborhood U 0 (0, η) of the fixed point s = 0 satisfies For arbitrary point range {s k } ⊂ U 0 (0, η) which are obtained by the transform starting from point s 0 , we can easily get |s 1 | < δ|s 0 |, |s 2 | < δ|s 1 |, · · · , then we have |s n | ≤ δ n |s 0 | and |s n | → 0 when n → ∞, which means the fixed point s = 0 is stable. The proof is completed. H (x, y) has continuous partial derivatives with respect to x and y on R 2 , x and y are Fig. 4 The periodic solution of the switch system (7) functions of t, S is a closed curve that starts from point A in the direction as indicated by the arrow (see Fig. 4) and the period is T , then
Lemma 3.3 Assume
Since the necessary and sufficient condition of line integral independent of the path is if the two second order mixed partial derivatives ∂ 2 H ∂ x∂ y and ∂ 2 H ∂ y∂ x are continuous in the area of interest, then they must be equal and we can easily get The proof is completed.
In Fig. 2, we see the periodic solution n whose period is (1 + 1 n )T (see Fig. 2c) as the square approximation of the order one periodic solution whose period is T (see Fig. 2a), then for any continuously differentiable function D(x(t), y(t)), we have the following result For the order one periodic solutionÃB of system (2), we have given the successor point of any point near A on the phase set. Correspondingly, we also consider the periodic solution n =ÃB ∪ B A of the switched system (7) and give the successor point of any point a near A ( see Fig. 3b).
Arbitrarily choose a point a in the phase set which is near point A, then there exists a range of points: 1 is the successor point of a, a + 2 is the successor point of a + 1 , and so on (see Fig. 3b).
In order to get the explicit expression of the successor function of the point near the order one periodic solution , we firstly give the expression of the successor function of the point near the square approximate periodic solution n . Here, calculation methods to solve the successor function for continuous systems are supposed to be applied. To this end, we transform We still denote the convex order one periodic solution of unilateral asymptotic type of system (2) by =ÃB and the periodic solution of the square approximate switched system (7) by n =ÃB ∪ B A ( see Fig. 5 ). We want to calculate the successor function F(S k ) of any point S k near point A. For this purpose, we firstly establish coordinate system at the phase set N , and the coordinate of any point in the phase set is its coordinate on y axis. Suppose the coordinate of point S k is y S k , the trajectory passing through point S k intersects the pulse set at a point b. Point c is the phase point of point b and its coordinate is y c , then the successor function of point S k is F(S k ) = y c − y S k < 0 (see Fig. 5).
According to Theory 2.5, the necessary and sufficient condition for the unidirectional stability of the order one periodic solution is: for any point S k above point A, So what we need to do is finding a method to calculate the value of F(S k ).
Along the direction of the trajectory ofÃB, we introduce the curvilinear coordinates (s, n), where s is the arc length starting from point A, and its increasing direction is consistent with the increasing direction of time t; n is the length of the normal, and its positive direction is to the left side when traveling along the periodic orbit ( see Fig. 5 ). The trajectory through point S k intersects n axis at point a and intersects impulse set M at b, while the trajectory through point c intersects n axis at point d. We define the successor function of point S k in the curvilinear coordinate system is According to Theory 2.5, the necessary and sufficient condition for the unidirectional stability of the order one periodic solution is: for any point S k above point A, In order to study the stability of the convex order one periodic solution of unilateral asymptotic type, we assume P(x, y) and Q(x, y) in system (2) have derivatives of any order. Suppose the equations of orbital curveÃB are x = f (t), y = g(t), t ∈ [0, T ], which is also consistent with that the period of the order one periodic solution is T . For the curvilinear coordinate system (s, n) , take arc length s as a parameter, the equations of the orbital curveÃB are For the switched system (7), the orbital curve segment B A is the orbital curve of system (4), and its equations are Assume that the curvilinear coordinate of A is ( (s), (s)) (here, (s) and (s) are not smooth at points A and B, so we need make a smoothing approximation for (s) and (s) at points A and B by drawing new curve in a small enough neighborhood of points A and B, see Fig. 4 ), then for point a , the relationship between its rectangular coordinates (x, y) and curvilinear coordinates (s, n) is: Let Z 10 (x, y), Z 20 (x, y) represent the value of Z 1 (x, y), Z 2 (x, y) at periodic solution n , that is, (s), (s)), Z 20 (x, y) = Z 2 ( (s), (s)).
According to system (7), we can easily get
s), (s) + n (s)) Z 1 ( (s) − n (s), (s) + n (s)) and
Suppose Z 1 , Z 2 have continuous partial derivatives, then F(s, n) has continuous first-order partial derivative with respect to n and (10) can be rewritten as After simple calculations, we have where Z 1x0 , Z 1y0 , Z 2x0 and Z 2y0 denote partial derivatives of Z 1 Theorem 3.1 Assume that γ is the length of the periodic curve n =ÃB ∪ B A of system (7), then the periodic solution n is stable provided Proof Consider the trajectory abcd (see Fig. 5), the coordinates of a and b in the coordinate system (s, n) is denoted by n 0 and n, respectively. According to (13), if γ 0 H (s)ds < 0, then we have |n(γ )| < |n 0 |. By Lemmas 3.1 and 3.2, the periodic solution n is stable. The proof is completed.
Corollary 2 (Dilibereto) Along the periodic solution n , if H(s) < 0, then the periodic solution n is stable.
Let ds = Z 2 10 + Z 2 20 dt, then the left of the inequality (14) can be rewritten as (7) satisfies
Theorem 3.2 If the integral along the periodic solution n of system
then n is orbital asymptotical stable.
Furthermore, according to (3) and (4), we have then we can easily get (7) satisfies
Theorem 3.3 If the integral along the periodic solution n of system
then n is orbital asymptotical stable.
Since n → , by Lemma 3.4 we can get Theorem 3.4 If the semi-continuous dynamical system (2) has a convex order one periodic solution = AB of unilateral asymptotic type with period T , and the integral along the periodic solution satisfies then the order one periodic solution is orbital stable (but not necessarily orbital asymptotical stable ).
Corollary 3 If the semi-continuous dynamical system
(2) has a convex order one periodic solution =ÃB of unilateral asymptotic type with period T , and the region G that contains the periodic solution satisfies then the order one periodic solution is orbital stable.
Applied example
In this example, a cooperative system with state feedback impulsive harvesting is presented. Let x(t) and y(t) be the densities of two different populations at time t, respectively. There is an adjustable constant threshold value h for the density of the first population, and it will be harvested with proportion α when its density x reaches h. Then the system is where r 1 and r 2 are intrinsic growth rates, a and d are density dependent coefficients and the population interaction is governed by b and c. Without impulsive effect, we can easily get the equilibria of the ordinary differential system that consists of the first two equations of system (15). There are always three boundary equilibria: an unstable node O(0, 0) and two saddle points A(r 1 /a, 0) and B(0, r 2 /d). If ad − bc > 0, there is another interior node (x * , y * ) that is globally stable in the first quadrant, where x * = r 1 d+r 2 b ad−bc , y * = ar 2 +cr 1 ad−bc . We assume h ≤ r 1 d+r 2 b ad−bc when ad − bc > 0. In fact, if h > r 1 d+r 2 b ad−bc , the population level of x will not be in a high state to be harvested because it will tend to r 1 d+r 2 b ad−bc eventually without human intervention.
To discuss the existence of the order one periodic solution of system (15) Since r 2 + cx − dy = 0 is the horizontal isocline, variable y decreases above this horizontal isocline in the vector field and increases in the lower half of the vector field. Consider the trajectory of system (15) starting from point E, it must intersect the impulse set x = h at a point E , and after impulsive effect, the point E is mapped to a point E 1 which is in the phase set x = (1 − α)h. Since y E 1 = y E < y D = y E , the successor function of point E satisfies F(E) = y E 1 − y E < 0. Furthermore, the trajectory starting from point G must intersect the impulse set x = h at a point G , and after impulsive effect, the point G is mapped to a point G 1 in the phase set x = (1−α)h. Since G is sufficiently close to point C, y G > y C and y G 1 = y G > y G , the successor function of point G satisfies F(G) = y G 1 −y G > 0.
According to Lemma 2.2, there must exist a point N between points E and G on the phase set x = (1 − α)h such that F(N ) = 0, that is to say, there must exist an order one periodic solution passing through point N . The proof is completed. Proof Obviously, the order one periodic solution of system (15) we have given in Theorem 4.1 can be classified into Type 1, that is, it is a convex order one periodic solution of unilateral asymptotic type. Now we use the results we have obtained in Sect. 3 to show the orbital stability of the periodic solution.
Since the divergence of the system (15) is not a constant, and we cannot determine it is positive or negative. Let B(x, y) = 1 xy , then according to the Dulacs theorem, Theorems 3.3 and 3.4, we know the order one periodic solution of system (15) is orbital stable. The proof is completed.
Conclusion
In this paper, we studied a kind of semi-continuous dynamical system with linear impulsive conditions. The focus has been mainly on the stability analysis of the order one periodic solution. To the best of our knowledge, the calculation of the successor function in semi-continuous dynamical systems is not easy. Because the dissmoothness at the pulse point, stability criteria of continuous dynamical systems cannot be applied directly. Although researchers in the recent years have created several methods to prove the stability of an order one periodic solution, these methods always have no generality and applied only to particular models. Even the famous Analogue of Poincaré Criterion is not convenient in practical use for the stability of the order one periodic solution can only be judged with the aid of the initial value.
In order to give a general stability criterion of order one periodic solutions which can be used easily, we firstly classified all order one periodic solutions into three types. In this paper, we just studied the type 1, that is, the closed convex order one periodic solution of unilateral asymptotical type. To make use of theoretic results of continuous dynamical systems, we constructed a sequence of switched systems, each of which has a hybrid limit cycle. These hybrid limit cycles can form a square approximation for the order one periodic solution. Similar to the stability analysis in continuous dynamical systems, we got stability criteria for these hybrid limit cycles, then obtained the stability results for the order one periodic solution by using square approximation. The classification method of order one cycles is first proposed in this paper, and for the type 1 order one cycle, we successfully generalized the stability criteria of continuous dynamical systems into impulsive semi-dynamical systems.
Our ultimate goal is to solve the stability of the order one periodic solution of all the three types, but the current method we introduced in this paper is only applicable to closed convex ones of unilateral asymptotical type. The study of the other two types is under our future explorations. | 5,198.2 | 2017-07-11T00:00:00.000 | [
"Mathematics",
"Engineering"
] |